Lack of physical activity is a major health issue, with nearly half of American adults reporting no leisure-time activity. With smartphones in a majority of these individuals’ pockets, plus a growing smorgasbord of activity trackers, a growing number of mobile health interventions and products are focusing on getting people moving more.
Many interventions focused on changing behaviors, including physical activity, use validated questionnaires to collect self-reported assessments of things like how much a person exercised in the past week. These tools are limited not only by the biases of the person answering the questions but also by the granularity of the data they can capture. Alternatively, wearable activity trackers can capture a lot of data at a very high level of granularity. A well-recognized problem here, however, is the tendency of activity trackers to end up at the bottom of a sock drawer.
A group of researchers from the University of Nebraska – Kearney and Iowa State University looked at whether they could use the data from an activity tracker to make questionnaires about physical activity more useful.
In this study, they started with the PAQ questionnaire, which assesses physical activity in kids over the preceding seven-day period. They recruited 150 kids and had two-thirds of them wear a validated activity tracker, in this case an Actigraph, for those seven days. Using the activity tracker, they calculated how much time each kid spent exercising (specifically, percent time in moderate-to-vigorous physical activity).
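For readers curious what that calculation looks like in practice, here is a minimal sketch (not the study's actual processing pipeline) of how minute-level accelerometer counts might be converted to percent time in moderate-to-vigorous physical activity (MVPA) using a cut-point threshold; the simulated data and the specific cut-point value are assumptions for illustration.

```python
import numpy as np

# Hypothetical minute-by-minute accelerometer counts for one child over
# 7 days (10,080 minutes); in practice these come from the device's software.
rng = np.random.default_rng(0)
counts_per_minute = rng.integers(0, 4000, size=7 * 24 * 60)

# Assumed cut-point (counts/min) separating MVPA from lighter activity;
# the actual threshold depends on the device and calibration study used.
MVPA_CUTPOINT = 2296

mvpa_minutes = int(np.sum(counts_per_minute >= MVPA_CUTPOINT))
percent_mvpa = 100 * mvpa_minutes / counts_per_minute.size

print(f"MVPA minutes: {mvpa_minutes}, percent time in MVPA: {percent_mvpa:.1f}%")
```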
They then derived a simple equation based on age, gender, and PAQ score to predict time spent exercising. To test it, they applied the equation to the remaining third of the group (the cross-validation group). In that cross-validation group, they found no significant difference between the exercise time predicted by their model and the exercise time measured by the activity tracker (mean difference 25.3 +/- 18.1 minutes, p=0.17). Overall, the correlation was fair (r = 0.63).
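To make the derivation-and-validation setup concrete, here is a rough sketch, using simulated data, of how one might fit such an equation on two-thirds of the sample and check it against tracker-measured time in a held-out cross-validation group; the variable names, data, and linear model form are illustrative assumptions rather than the authors' actual analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy import stats

# Hypothetical dataset: one row per child with age, gender (0/1), and PAQ
# score, plus tracker-measured exercise minutes; the study's data are not public.
rng = np.random.default_rng(1)
n = 150
X = np.column_stack([
    rng.uniform(9, 14, n),    # age in years
    rng.integers(0, 2, n),    # gender coded 0/1
    rng.uniform(1, 5, n),     # PAQ score
])
tracker_minutes = 20 + 10 * X[:, 2] + rng.normal(0, 15, n)

# Split: two-thirds to derive the equation, one-third to cross-validate.
split = 2 * n // 3
X_train, X_test = X[:split], X[split:]
y_train, y_test = tracker_minutes[:split], tracker_minutes[split:]

# Derive the simple prediction equation (ordinary least squares).
model = LinearRegression().fit(X_train, y_train)
predicted = model.predict(X_test)

# Compare predicted vs. tracker-measured time in the cross-validation group,
# analogous to the paper's mean-difference test and correlation.
t_stat, p_value = stats.ttest_rel(predicted, y_test)
r, _ = stats.pearsonr(predicted, y_test)
print(f"mean difference: {np.mean(predicted - y_test):.1f} min, "
      f"p = {p_value:.2f}, r = {r:.2f}")
```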
What most caught my attention about this work was the general idea rather than the actual results; clearly, this model would need further refinement to be useful clinically. Generally, though, the appeal of the approach is using more intensive, granular monitoring (the type of stuff patients are likely to ditch over time) for a limited period and correlating it with measures that can be collected in less onerous or even passive ways, so that the latter can be followed longitudinally. In other words, patients would only need a short, defined period when they have to remember to put on that pedometer or count every calorie, so that we can calibrate other simple or passive measures of activity and diet.
Perhaps one approach to the stickiness problem in digital health is to stop trying to convince patients to wear their pedometers every single day indefinitely and instead look at how we can combine multiple different measures of, say, activity to create long-term behavior change interventions that are a little less tedious for our patients.