[This is a preview of some of the exciting mHealth research being presented at this week’s Medicine 2.0 Congress on September 15-16. This abstract and others are candidates for the iMedicalApps-Medicine 2.0 mHealth Research Award]

By: Wendy Nilsen, PhD
iMedicalApps-Medicine 2.0 Research Award Finalist

How do you know when mobile health (mHealth) technology works? In the consumer world, success is determined by sales. In health, the outcomes are not so simple.

Despite the tremendous promise of mHealth, researchers using these technologies (e.g., cell phones, sensors and other wireless devices) are often faced with moving the science into areas that challenge traditional evaluation models.

Conducting lengthy randomized, controlled trials, which are the mainstay of most health research, becomes a challenge when using consumer electronics that have a shelf life of less than two years. To address these issues, the National Institutes of Health, in conjunction with the Robert Wood Johnson Foundation, the McKesson Foundation, and the National Science Foundation, held a meeting to address methodological issues and gaps in generating evidence for the efficacy of mHealth tools and interventions.

Reliability and Validity in an mHealth World

When exploring whether mHealth does what it should, it is clear that some of the standards of traditional health research, like reliability and validity, are crucial but challenging. For example, in the area of validity, mobile tools often run into trouble because they try to measure processes for which there is no readily accepted gold standard (e.g., measures of stress). To demonstrate validity, a researcher would have to compare the new tools to measures that, after thorough evaluation, may not be as good as (or even comparable to) the mHealth measure.

Reliability presents a similar problem: mHealth is often based on the idea that things should be changing over time, so demonstrating reliability is a challenge when variability is actually the target of the research rather than something to be avoided.

Do we really need an RCT?

In the area of interventions, there has been some confusion as to when a randomized, controlled trial (RCT) is needed for mHealth interventions. It is clear that mHealth interventions in development do not need the expense and rigor of an RCT. Instead, these projects would greatly benefit from quasi-experimental designs, such as:

  • Pre-post tests, which compare data collected before and after an intervention;
  • N-of-1 designs, where people are compared against their own scores before and after the intervention;
  • Interrupted time series, which looks at whether people’s trajectories on a target variable change as a result of the intervention (sketched below).

All of these provide important information and do not require the time and expense of control groups, large samples, and randomization. As in all health research, RCTs should be saved for treatments that are well-described and that have been shown to have promise of efficacy.
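
To make the last of those designs concrete, an interrupted time series is often analyzed with segmented regression. The sketch below is a minimal illustration using simulated daily step counts (all data and variable names are invented, not from any real study): it fits a model with terms for the pre-intervention trend, the level change when the intervention starts, and the slope change afterward.

```python
# A minimal sketch of segmented regression for an interrupted time series.
# All data are simulated; in practice each point would be a real observation
# (e.g., daily step counts) collected by a participant's phone.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days, start = 60, 30                       # intervention begins on day 30
t = np.arange(n_days)
post = (t >= start).astype(float)            # 1 once the intervention is active
since = post * (t - start)                   # days elapsed since intervention

# Simulated outcome: flat baseline, then a jump plus an upward slope change.
steps = 5000 + 800 * post + 40 * since + rng.normal(0, 300, n_days)

X = sm.add_constant(np.column_stack([t, post, since]))
fit = sm.OLS(steps, X).fit()
# Coefficients: intercept, pre-intervention trend, level change, slope change.
print(fit.params)
```

If the level-change and slope-change coefficients are reliably non-zero, the participant’s trajectory shifted when the intervention began, without any control group being recruited.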

The time frame that accompanies many randomized controlled trials has caused some to say that this design is problematic for mobile health. That said, it is important not to throw out the baby with the bathwater! Although RCTs can be time-consuming and expensive, mHealth actually offers some opportunities that may change that. For example, the ability to generate high-volume data may make it easier to reduce the sample size and to deliver results faster because trends in the data become apparent earlier.

Further, because mobile tools can track and enhance adherence to treatment, the effect of a given intervention may be larger than it would be without mobile devices. If research shows that these factors do have a positive impact, RCTs with mobile devices may be faster and more efficient than traditional health research.

Research beyond the RCT

mHealth does not have to rely on just the standard RCT. Other designs, like regression discontinuity and the stepped-wedge, have many benefits while maintaining methodological rigor. In regression discontinuity, participants are assigned to groups based on whether they fall above or below a predetermined cutoff score on a pre-test assessment; results are then analyzed based on the differences between the two groups. In the stepped-wedge design, groups are randomized and then staggered as to when the intervention is deployed.

In the stepped-wedge, everyone eventually receives the treatment, but the timing allows for some groups to serve as controls for those who received the treatment before them. Again, these designs allow consumers to feel safe in receiving an intervention that has been rigorously evaluated.
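
One way to see the stepped-wedge logic is to lay out its rollout schedule. The toy sketch below (the clinic names and number of periods are hypothetical) randomizes the order in which groups cross over, so that in every period some groups are still untreated controls while others have already received the intervention.

```python
# A minimal sketch of a stepped-wedge rollout schedule.
# Clinic names and the number of periods are hypothetical.
import random

clinics = ["clinic_a", "clinic_b", "clinic_c", "clinic_d"]
rng = random.Random(42)
rng.shuffle(clinics)                  # randomize the crossover order

for period in range(len(clinics) + 1):
    # Clinics earlier in the shuffled order have already crossed over;
    # the rest are still serving as controls in this period.
    for order, clinic in enumerate(clinics):
        state = "treatment" if order < period else "control"
        print(f"period {period}: {clinic} -> {state}")
```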

In addition to the standard designs employed in health research, there are methods that are especially well suited to mHealth. For example, most mHealth projects can be developed using modular components, such as one module for monitoring and providing feedback on a health indicator, one for skill development to modify behavior, and one for marshalling social support. Designs can then be employed that test each component separately, as well as in combination. The Multiphase Optimization Strategy (MOST) allows researchers to judiciously combine modules in the most effective way, instead of just using “everything but the kitchen sink.”
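
In a MOST-style screening phase, each module becomes an on/off factor in a factorial design. The sketch below (the module names are hypothetical stand-ins for the components above) simply enumerates the full factorial of three modules, which is the set of conditions such a trial would compare to estimate each module’s contribution.

```python
# A minimal sketch of the factorial structure behind a MOST-style screen.
# Module names are hypothetical stand-ins for the components above.
from itertools import product

modules = ["monitoring_feedback", "skill_development", "social_support"]

# Every on/off combination of the three modules: 2**3 = 8 conditions,
# so each module's effect can be estimated alone and in combination.
for switches in product([0, 1], repeat=len(modules)):
    arm = [m for m, on in zip(modules, switches) if on]
    print(arm if arm else ["control"])
```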

Similarly, models like the Sequential Multiple Assignment Randomized Trial (SMART) let researchers add decision points to the design that allow the trial to test more than a one-size-fits-all model. For example, in a SMART trial, a researcher would decide before the trial begins that people who reach the intervention goal will be moved to a new intervention schedule. So, once you have lost eight pounds, you do not need the same intervention intensity as people who have lost no weight or even gained some. Because the decisions are specified ahead of time, researchers still know that the intervention is the cause of observed changes.
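
The pre-specified decision rule in that weight-loss example might look like the sketch below. The eight-pound threshold comes from the example above, but the second-stage arm names and intensity labels are invented for illustration; the key point is that the rule is fixed before the trial starts, so the design stays randomized.

```python
# A minimal sketch of a pre-specified SMART decision rule. The 8-pound
# threshold comes from the example above; the second-stage arms and
# intensity labels are hypothetical.
import random

def second_stage(pounds_lost: float, rng: random.Random) -> str:
    if pounds_lost >= 8:
        return "maintenance_schedule"        # responders step down in intensity
    # Non-responders are re-randomized between two second-stage options.
    return rng.choice(["continue_standard", "intensified_coaching"])

rng = random.Random(7)
for lost in [10.0, 3.5, -1.0]:
    print(f"{lost:+.1f} lbs -> {second_stage(lost, rng)}")
```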

Open Source mHealth

Modularity also allows many mHealth tools to be developed in a way that capitalizes on the basic software development work done by others. An example of this is Open mHealth (Estrin & Sim, 2012; http://openmHealth.org), a platform for sharing software and generating data standards that will reduce the cost and redundancy inherent in siloed software development. Ultimately, this should make evaluation easier: common languages and data collection tools let users compare one mHealth tool to another.
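
The payoff of shared data standards is that tools can exchange records without custom translation. Below is a purely hypothetical illustration of what a standardized data point might look like; this is an invented example, not the actual Open mHealth schema.

```python
# A purely hypothetical illustration of a standardized data point; this is
# NOT the actual Open mHealth schema. It shows why a shared format helps:
# any tool emitting this shape can be read by any evaluator expecting it.
import json

data_point = {
    "schema": "example-step-count/1.0",      # invented schema identifier
    "user_id": "anon-123",
    "time_interval": {"start": "2012-09-15T00:00:00Z",
                      "end": "2012-09-15T23:59:59Z"},
    "step_count": 8432,
    "source": "phone_accelerometer",
}
print(json.dumps(data_point, indent=2))
```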

Next Steps

mHealth also allows researchers to discover new patterns in the rich data these tools generate by using data mining and modeling techniques. Although they are not trial designs, these mathematical techniques will allow researchers to develop a person’s unique biobehavioral “signature” that, in the future, should allow providers to personalize care using that person’s own complex data as a baseline for observation or treatment.
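
As one simple instance of this kind of modeling, a person’s own history can serve as the baseline against which new observations are judged. The sketch below flags days that deviate sharply from an individual’s rolling average; the simulated data, 30-day window, and two-standard-deviation threshold are all illustrative choices, not a validated clinical rule.

```python
# A minimal sketch of using a person's own history as their baseline.
# The data, 30-day window, and 2-standard-deviation threshold are all
# illustrative choices, not a validated clinical rule.
import numpy as np

rng = np.random.default_rng(1)
resting_hr = rng.normal(62, 2, 90)   # 90 days of simulated resting heart rate
resting_hr[75:] += 8                 # a shift the model should notice

window = 30
for day in range(window, len(resting_hr)):
    baseline = resting_hr[day - window:day]
    z = (resting_hr[day] - baseline.mean()) / baseline.std()
    if abs(z) > 2:                   # deviation from this person's own norm
        print(f"day {day}: {resting_hr[day]:.1f} bpm deviates (z={z:+.1f})")
```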

Although all of this highlights the value of mobile technology in improving health, the take-away message is clearly that we need more high-quality research to make sure we are protecting and serving the health community and to ensure that the mHealth hype translates into better health.

Author

Wendy Nilsen, Ph.D. is a Health Scientist Administrator at the NIH Office of Behavioral and Social Sciences Research (OBSSR). Her focus is on the science of human behavior, the mechanisms of behavior change, and behavioral interventions for complex primary care patients, including the use of mobile technology to better understand and improve health and treatment adherence.