
Monday, November 12, 2018

The (Inconvenient) Truth About the FMS and Injury Rates

The Functional Movement Screen (FMS) is a series of seven bodyweight tests designed to rate human movement quality/competency. Each test on the FMS is scored on a 0-3 scale. A perfect total or "composite score" would be 21 points.
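
To make the scoring arithmetic concrete, here's a minimal sketch (not taken from the official FMS materials or our review) of how the seven test scores roll up into a composite; the example athlete's scores are invented purely for illustration.

```python
# Minimal sketch of FMS composite scoring: seven tests, each scored 0-3,
# summed into a composite out of 21. Example scores are invented.

FMS_TESTS = [
    "deep_squat",
    "hurdle_step",
    "inline_lunge",
    "shoulder_mobility",
    "active_straight_leg_raise",
    "trunk_stability_pushup",
    "rotary_stability",
]

def composite_score(scores):
    """Sum the seven individual test scores (each 0-3) into a 0-21 composite."""
    for test in FMS_TESTS:
        if not 0 <= scores[test] <= 3:
            raise ValueError(f"{test} must be scored 0-3, got {scores[test]}")
    return sum(scores[test] for test in FMS_TESTS)

# A hypothetical athlete scoring 2 on every test lands at 14 -- right at the
# cut-off most injury studies have used.
example_athlete = {test: 2 for test in FMS_TESTS}
print(composite_score(example_athlete))  # 14
```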

The FMS's relationship with injury has been studied extensively in athletes. The thinking is that scoring low on the FMS (usually 14 points or lower) puts an athlete at increased risk for injury, which the research bears out to an extent.


When all the studies were pooled together (in 2015), scoring 14 or below did increase injury risk by about 50%. However, for every 100 athletes who got injured, the FMS only managed to correctly identify about 25 of them as being at risk; in other words, its sensitivity was only about 25%. For this reason, the FMS should not be used to predict which athletes will get injured on an individual basis.
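
To see how a roughly 50% increase in risk can coexist with such poor predictive value, here's a toy 2x2 calculation; the counts are invented so that the arithmetic lands on the two figures above, and are not the actual pooled data.

```python
# Toy 2x2 table, invented so the arithmetic reproduces the figures quoted
# above (relative risk ~1.5, sensitivity ~25%). These are NOT the actual
# pooled counts from the 2015 analysis.
#
#                          injured   uninjured
# FMS <= 14 (flagged)         25        175
# FMS  > 14 (not flagged)     75        825

flagged_injured, flagged_uninjured = 25, 175
unflagged_injured, unflagged_uninjured = 75, 825

risk_flagged = flagged_injured / (flagged_injured + flagged_uninjured)          # 25/200 = 0.125
risk_unflagged = unflagged_injured / (unflagged_injured + unflagged_uninjured)  # 75/900 ~ 0.083

relative_risk = risk_flagged / risk_unflagged                          # 1.5: ~50% higher risk
sensitivity = flagged_injured / (flagged_injured + unflagged_injured)  # 25/100 = 0.25

print(f"relative risk: {relative_risk:.2f}")  # flagged athletes get injured ~50% more often
print(f"sensitivity:   {sensitivity:.2f}")    # yet only 25% of injured athletes were flagged
```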


Despite the fairly conclusive evidence against the FMS's ability to predict injury, new studies keep coming out investigating this same tired research question. In the past two years alone there have been at least nine such studies on rugby, soccer, cricket, handball, volleyball, and basketball players.


Based on this continued proliferation of research, clearly not everyone has gotten the memo about the FMS's inability to predict injury. For this reason - and in an attempt to help put this issue to rest - my colleagues and I just published a critical review of the FMS. I break down the results of our review below.






Objectives of Our Critical Review

Instead of rehashing the same tired arguments and analyses, we approached the relationship between the FMS and injury from a slightly different angle. We wanted to know whether athletes who suffer more injuries also perform worse on the FMS.


After all, if scoring low on the FMS means that an athlete is at increased risk of injury, then being at increased risk of injury should correspondingly mean that an athlete will score low on the FMS.


It's well known that injury rates increase as level of play increases. College athletes suffer more injuries than high school athletes, and professional athletes get injured more than college athletes. If the FMS reflected this pattern, then college athletes would score lower on the FMS than high school athletes, and pros would score lowest.

We set out two objectives for our critical review:


1) To determine if FMS composite scores differ across high school, college, and professional athletic populations.


2) To determine if lower FMS composite scores are associated with higher injury rates just within college sports.

What We Did


In September of 2017, we searched three online databases for studies containing the words "Functional Movement Screen" or just "movement screen." We included studies that reported an FMS composite score for a group of high school, college, or professional athletes.


From each study, we noted the following:

  • The number of athletes, along with their age and sex
  • The sport(s) the athletes played and their level of play
  • The athletes' average composite score on the FMS

What We Found

A total of 36 studies met the criteria for inclusion in our critical review. These studies provided composite scores for 62 different groups of athletes and over 3000 athletes in total.


Objective #1


Our first objective was to determine if FMS composite scores differed across high school, college, and professional athletic populations. Across all three levels of play, 61% of the scores fell between 14 and 16. Here was the level-by-level breakdown for average composite score:


  • High school athletes: 14.1
  • College athletes: 14.8
  • Professional athletes: 15.7

As you can see, average composite scores went up slightly from level to level, which would oppose the notion that athletes who suffer more injuries (college and pro) perform worse on the FMS.


It's important to note here, though, that the above differences in composite scores (1.6 points from high school to pro) probably don't exceed the FMS's measurement error. Previous research has shown that you need at least a 2-point difference, if not 3, to be certain you're not just looking at noise in the data. Therefore, what we conclude from these data is that there's no difference in FMS scores across levels of play.
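
As a quick sanity check on that logic, here's a sketch that compares the between-level differences in the table above against a 2-point measurement-error threshold (the conservative end of the range just mentioned).

```python
# Compare between-level differences in average FMS composite score against a
# 2-point measurement-error threshold (the conservative end of the 2-3 point
# range cited above).
from itertools import combinations

average_scores = {
    "high school": 14.1,
    "college": 14.8,
    "professional": 15.7,
}
MEASUREMENT_ERROR = 2.0  # points

for (level_a, score_a), (level_b, score_b) in combinations(average_scores.items(), 2):
    diff = abs(score_b - score_a)
    verdict = "exceeds" if diff >= MEASUREMENT_ERROR else "within"
    print(f"{level_a} vs {level_b}: difference of {diff:.1f} points ({verdict} measurement error)")

# Every comparison -- including the largest (1.6 points, high school vs pro) --
# falls within measurement error, which is why we treat the levels as equivalent.
```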


Objective #2


Our second objective was to determine if lower FMS composite scores were associated with higher injury rates in college sports only. To our surprise, we found the exact opposite relationship.


Across 13 sports, higher FMS scores were actually strongly associated with higher injury rates. In other words, as college athletes' FMS scores go up, so do their injury rates. This finding clearly undermines the FMS's proposed relationship with injury.
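
For readers curious what that kind of analysis looks like mechanically, here's a sketch of a rank correlation between per-sport FMS averages and injury rates. The five sports and all of the numbers are placeholders; the review covered 13 college sports using published national injury rates, and the exact statistic it used may differ.

```python
# Sketch of a rank correlation between per-sport average FMS composite scores
# and injury rates. The sports, scores, and rates below are placeholders
# invented for illustration -- not the 13-sport data from the review.
from scipy.stats import spearmanr

per_sport = {
    # sport: (average FMS composite, injuries per 1,000 athlete-exposures)
    "sport_a": (13.9, 4.1),
    "sport_b": (14.4, 5.0),
    "sport_c": (14.9, 5.8),
    "sport_d": (15.3, 6.9),
    "sport_e": (15.8, 7.6),
}

fms_scores = [fms for fms, _ in per_sport.values()]
injury_rates = [rate for _, rate in per_sport.values()]

rho, p_value = spearmanr(fms_scores, injury_rates)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A positive rho means higher FMS scores track with higher injury rates --
# the direction we observed, and the opposite of what the screen's premise predicts.
```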

Things to Keep in Mind


There were a couple of important limitations of our analyses. First, there were several instances in which the composite scores we used for a sport were based on a small number of athletes. There's no guarantee that these data points reflect all athletes who play those sports.


Second, the injury rates we used for high school and college sports were based on national averages. Again, there's no guarantee that those national average injury rates apply to the specific athletes that the composite scores we used in this study were based on.


Despite these limitations, if athletes who suffer more injuries also performed worse on the FMS, our results should have reflected that relationship. Instead, we found no difference across levels of play (Objective #1) and the exact opposite relationship within college sports (Objective #2).


Practical Implications


The astute reader probably isn't surprised that the FMS doesn't reflect injury rates within and across levels of play. After all, there are a boatload of things besides the way a person moves that can influence injury risk (e.g. age, sex, previous injuries, psychology, training load, playing conditions, opponent behavior). Attempting to predict injury based on any one factor is a fool's errand (yet researchers continue to try!).



Thus, the results of our critical review provide further evidence against using the FMS composite score for injury-prediction purposes in athletes. This doesn't mean that the FMS is useless, though. In fact, the creators of the FMS themselves expressly advised against using the composite score four years ago. Instead, they recommend looking for asymmetries and low scores on a test-by-test basis.

Instead of trying to predict injury, the FMS may be used for several purposes: 


  • To identify folks with painful movements (and make sure they receive proper medical attention).
  • To establish a person's baseline for bodyweight movement (for comparison in the event of a future injury).
  • To identify movement patterns that can be trained with added external load.

Now, one could argue that the FMS isn't the best approach for the above purposes. There are certainly concerns regarding the FMS's validity for measuring movement quality/competency, which I discuss in the critical review. Nevertheless, at present the FMS is one of the most popular screening/assessment tools available. I challenge the reader to come up with something better. I'm working on doing the same.



If you'd like to read the full-text version of our critical review, feel free to request it on ResearchGate.