A recent study published in the Journal of Stroke looked at stroke-related apps available on iTunes and Google Play. The researchers found 93 relevant apps, which they analyzed on the basis of cost, target audience, type of information, validity, type of publisher, and user ratings.

With regard to cost, the apps were compared only on whether they were free or paid; no comparison was made between less and more expensive paid apps. About half of the apps fell into each category, and the proportion showing evidence-based validity was similar in both. There was no association between cost and validity, so paying for an app was no guarantee of better content.

Again, this echoes another study we recently commented on regarding paid fitness apps versus free ones.

When comparing apps targeted at healthcare workers versus the general public, there was a significant difference in validity: 90% of the apps intended for healthcare workers demonstrated evidence-based validity, whereas only 27% of those intended for the general public did.

Apps intended as stroke evaluation and management tools for healthcare workers received high ratings (72% earned more than 3 stars), and 92% were scientifically valid. Apps for the general public, however, mostly received moderate ratings (60% earned 2 to 3 stars) even though only 27% were valid.

Of the apps reviewed, about half were published by healthcare agencies. Interestingly, there was no statistically significant difference in validity between apps published by healthcare agencies and those that were not. One would expect an app produced by a large healthcare organization to have fared better than one that was not.

Looking at the group of apps as a whole, 59.1% were scientifically valid, with validity defined as adherence to current stroke literature and guidelines. Of the scientifically valid apps, 96% received moderate to high ratings in the app stores (11 apps had 2 to 3 out of 5 stars; 42 had more than 3). On the other hand, 87% of the non-valid apps also received moderate to high ratings (24 apps had 2 to 3 out of 5 stars; 9 had more than 3). In other words, store ratings did little to separate valid apps from non-valid ones.

This study revealed an interesting dichotomy between patient-centric and provider-centric stroke apps. Patient-centric apps clearly need more work to ensure they are built on validated principles, whereas apps made for healthcare providers still have room to improve but are overall doing a solid job.

The evidence supports the idea that patients would benefit from a discussion of which apps they are using to get information about stroke. Patients (and providers) should know that cost, rating, and type of publisher are not reliable ways to judge whether an app may be useful. Providers must also be aware that the apps patients rely on as resources are often not scientifically valid.

Equally important as the results of this study is the fact that it was done at all. We need more studies evaluating the validity of mobile health apps, because they highlight trends that still have the opportunity to be reversed.

Source: Journal of Stroke