Satish Misra, MD, contributed to this article.

We recently talked about strategies that clinicians and developers can use to make apps focused on behavioral change more successful.

In many ways, though, these lessons are based on a trial-and-error approach to development.

More needs to be done, we’d argue, to improve the current body of research on behavior change. At the recent mHealth Summit, a number of events touched on this need.

Taking a step back, there were three prominent themes:

  • Researchers need new ways to study technology
  • Researchers need to work in interdisciplinary teams
  • Researchers need new ways of capturing information

Here, we’ll explore how each of these ideas can help build a stronger evidence base for the use of mHealth tools to drive behavior change.

Researchers need new ways to study technology

The problem, according to Stephen Intille, PhD, of Northeastern University, is that many psychological theories applied to mobile technology focus on cognition, that is, what people think about the technology. Instead, he argues, we should focus on what people actually do.

“Things like habit and timing are very quick and we can’t measure that so well,” Intille says. “Our theories are largely on cognition because it might be the easiest to measure with surveys and tools we have, but it might not be the most relevant.”

During the NIH research training at the mHealth Summit, Inbal “Billie” Nahum-Shani, PhD, of the University of Michigan introduced a way to evaluate different sequences of intervention components, allowing researchers to answer questions about their apps like “Should I include text messages?”, “Should I include social networking aspects, like buddy training?”, and “Should I include a meal replacement?”

“Sometimes, we have questions concerning efficacy of individual components [of an app]: which components are effective?” Nahum-Shani says. “It’s possible that SMS [messages] are highly effective for younger adults, but what about older adults? Sometimes I don’t know … if more intense or less frequent [coaching is better]? And which components work well together? … SMART can address critical questions about sequencing of components.”

SMART, or Sequential Multiple Assignment Randomized Trial, designs are a special case of randomized trials with multiple stages of randomization. The technique is described in “Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions” by Nahum-Shani et al., published in the American Psychological Association journal Psychological Methods:

A SMART is a multistage randomized trial in which each stage corresponds to a critical decision. Each participant progresses through the stages and is randomly assigned to one of several intervention options at each stage… The SMART was designed specifically to aid in the development of adaptive interventions.
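To make the design concrete, here is a minimal sketch in Python of a hypothetical two-stage SMART for a weight-loss app. The component names echo Nahum-Shani’s questions above; the coin-flip response rule, the 8-week criterion mentioned in the comments, and the augmentation options are illustrative assumptions, not a real trial protocol.

```python
import random

# Hypothetical two-stage SMART sketch for a weight-loss app.
# All component names and decision rules here are illustrative.

FIRST_STAGE = ["daily SMS reminders", "app only"]
SECOND_STAGE_NONRESPONDER = ["add buddy training", "add meal replacement"]

def run_smart(n_participants: int, seed: int = 42):
    rng = random.Random(seed)
    assignments = []
    for pid in range(n_participants):
        # Stage 1: randomize every participant to an initial component.
        stage1 = rng.choice(FIRST_STAGE)

        # Interim decision point: classify response. Stubbed with a coin
        # flip here; a real trial would use a prespecified criterion,
        # e.g. a weight-loss threshold at 8 weeks.
        responded = rng.random() < 0.5

        if responded:
            # Responders continue on their stage-1 intervention.
            stage2 = "continue " + stage1
        else:
            # Stage 2: re-randomize nonresponders to an augmentation.
            stage2 = rng.choice(SECOND_STAGE_NONRESPONDER)

        assignments.append((pid, stage1, responded, stage2))
    return assignments

for pid, s1, ok, s2 in run_smart(6):
    print(f"participant {pid}: stage 1 = {s1}, responder = {ok}, stage 2 = {s2}")
```

Each resulting sequence of decision rules, called an adaptive intervention, can then be compared across arms to answer the sequencing questions Nahum-Shani describes.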

Similar techniques have been used in the STAR*D trial for treatment of depression, cancer treatment trials at the University of Texas MD Anderson Cancer Center, the NIMH Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), and the Adaptive Interventions for Children with ADHD study at SUNY Buffalo.


Researchers need to work in interdisciplinary teams

“The study of factors influencing behavior has been isolated in different fields,” Intille also notes. “People are studying diet & nutrition, but are not studying things like eating patterns throughout the day or TV watching — [factors] which will clearly influence how we snack. There are major silos in the field.”

Edward Boyer, MD, PhD, of the University of Massachusetts Medical School concurs, pointing to a disconnect between engineers and clinicians.

“Clinicians say they want to study a problem and want an mHealth solution, but they don’t understand the enormous difference between off the shelf [solutions] that can be modified, or something that needs to be largely created. … When engineers say ‘We can do that,’ docs think ‘[It can happen] right now!’”

Boyer also wants to see designers and clinicians create something that people will actually want to use.

“Clinicians do in-depth interviews & assessments. So [clinicians often try to implement] 106-item surveys [in an app]! … Interventions only work if [patients] want to use [them]. Don’t create an mHealth intervention that increases [their workload] by 20%.”

Researchers need new ways of capturing information, like sensors

Rochelle Rosen, PhD, of Brown Medical School’s Center for Behavioral & Preventive Medicine, says that although a tidal wave of information is coming from sensors, barriers to good research remain. Gaining access to sensor data has been difficult, likely due to business and legal barriers, as has collecting data at scale. The data must also be trustworthy and reusable; because there is no regulated standard for sensors, data from different devices can vary greatly.
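To illustrate that variability, here is a minimal sketch (with made-up device formats and field names) of the kind of harmonization code researchers currently have to write themselves before step counts from two wearables can be pooled into one dataset:

```python
from datetime import datetime, timezone

# Hypothetical raw records: two made-up wearables report the "same" step
# data with different field names, types, and timestamp conventions.
DEVICE_A_RECORD = {"steps": 5400, "ts": 1417996800}          # epoch seconds
DEVICE_B_RECORD = {"stepCount": "5322", "time": "2014-12-08T00:00:00Z"}

def normalize_device_a(rec: dict) -> dict:
    """Map device A's format onto a common study schema."""
    return {
        "steps": int(rec["steps"]),
        "timestamp": datetime.fromtimestamp(rec["ts"], tz=timezone.utc),
        "source": "device_a",
    }

def normalize_device_b(rec: dict) -> dict:
    """Map device B's format onto the same schema."""
    return {
        "steps": int(rec["stepCount"]),
        "timestamp": datetime.fromisoformat(rec["time"].replace("Z", "+00:00")),
        "source": "device_b",
    }

# Once both records share a schema, they can be analyzed together.
pooled = [normalize_device_a(DEVICE_A_RECORD), normalize_device_b(DEVICE_B_RECORD)]
for row in pooled:
    print(row)
```

A regulated standard would make this per-device glue code unnecessary, which is part of why its absence is a research barrier.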

As we covered in our previous article on mental health technologies at the mHealth Summit, APIs for such data, like the one provided by Purple Robot, are steadily becoming available for researchers to actually use.

Conclusion

In sum, researchers cited three issues as hampering efforts to study mobile health technologies:

  • the need for new ways to study technology
  • the need for better communication within interdisciplinary teams
  • the need for newer, standardized ways of capturing information from sensors


Steven Chan, M.D., M.B.A., is a resident physician at the University of California, Davis Health System, where he researches psychiatry, telemedicine, mobile technology, & human behavior. Steve previously worked as a software and web engineer as well as creative designer at Microsoft & UC Berkeley. Visit him at www.stevenchanMD.com and @StevenChanMD.