NHS Health Apps

The United Kingdom's National Health Service (NHS) launched the Health Apps Library as part of its NHS Choices program in 2013. It was pitched as a pilot program that would guide patients and clinicians to safe, effective health apps. Now, after a troubled two years, the NHS Health Apps Library is shutting down later this week.

The Health Apps Library has had a run of challenges and critiques. Earlier this year, a privacy advocacy group raised concerns about the screening of nearly half of the 200+ apps included. In particular, they highlighted concerns about the security standards of many apps.

That security concern was further validated by a recent study from Imperial College London, in which researchers tested 79 apps using dummy data. They found that none of the apps encrypted data stored on the device, including personal health information (PHI). Of the 35 apps that sent identifying information over the internet, two thirds did not use any encryption, and four apps sent PHI without any encryption.

There have also been concerns about the effectiveness of the apps included. In a recent editorial, another pair of researchers looked at 14 apps for depression and anxiety in the Health Apps Library. Only 4 of the apps included references or data to back up claims of effectiveness, and only 2 used validated (and readily available) assessment metrics.

The NHS is currently undergoing a large-scale review and overhaul, broadly focused on the idea of personalized medicine, that includes re-evaluating its approach to digital health. The NHS reportedly plans to launch a series of disease-focused sites that will include a variety of digital health tools. It has already launched a mental health apps library pilot that appears to use a different review and approval process and is presumably the model being considered for other health conditions.

It's worth taking a step back, though, to think a bit about the approval process the NHS used and what we can learn from it. According to the NHS, this was the process:

All apps submitted to the Health Apps Library are checked to make sure that they are relevant to people living in England, comply with data protection laws and comply with trusted sources of information, such as NHS Choices. Once an app has met these minimum requirements, we then check to see whether the app could potentially cause harm to a person’s health or condition…Our clinical assurance team – which is made up of doctors, nurses and safety specialists, work with the developer to make sure the app adheres to our safety standards. During this process any potential safety concerns are identified and either designed out or dealt with so that any remaining risk is at an acceptable level.

In many ways, that sounds generally reasonable. One critique that's been frequently leveled is that the NHS should have been much more hands-on with detailed app testing, particularly on security. Keep in mind, though, that Happtique tried that, contracting the work out to a company that supposedly specialized in that kind of testing. Within days of launching its first suite of "certified" apps, the program collapsed after a security researcher found problems with apps it had approved.

In my opinion, a fundamental problem with both the NHS approach and the Happtique approach was that they applied a one-size-fits-all solution to the problem of finding good health apps. By some estimates, there are over 165,000 health apps on the market now. If we say that 1% are "good," whatever that means, then there are 1,650 needles to find in that haystack. No central certification or in-depth evaluation system will be able to scale to that task.

There will be a number of ways in which this enormous market gets curated to help patients and clinicians find good health apps. Depending on an app's risk profile and purpose, contributors to this process will include the FDA, public and private healthcare systems (e.g. the NHS, VA, Kaiser), peer-review resources like iMedicalApps, and consumer reviews facilitated by groups like PatientsLikeMe or HealthTap.

Certainly, many apps will warrant a very high level of scrutiny through some centralized evaluation model administered by well-resourced organizations. But it would be impractical to expect that all health apps will get that degree of scrutiny. So in many cases, it will be left to the end users, patients & clinicians, to assess an app themselves and make an informed decision.

And perhaps that's an area in which we should be putting more effort: empowering individuals to be informed consumers who know what to look for in a health or medical app. If an app asks for your name, date of birth, address, and a lot of other identifying information, that should trigger more scrutiny about (1) whether the app really needs that information, (2) whether it's protecting that information, and (3) whether you're willing to trade that information for whatever the app is offering. If an app claims to help you manage a specific health condition, look for an explanation of why you should believe that using the app will help, and consider talking to your doctor about what sorts of self-management strategies make sense.
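To make the first question concrete, here is a toy sketch of the kind of data-minimization check a skeptical user or reviewer can do mentally: compare the fields an app requests against what its stated purpose plausibly requires, and flag anything extra. The app purposes, field names, and "minimal" field sets below are illustrative assumptions, not any official standard.

```python
# Toy data-minimization check: flag requested fields that go beyond
# what an app's stated purpose plausibly needs. The minimal field
# sets here are illustrative assumptions, not an official standard.

MINIMAL_FIELDS = {
    "symptom diary": {"symptom", "date"},
    "medication reminder": {"medication name", "schedule"},
}

def excessive_fields(purpose, requested):
    """Return requested fields that exceed the assumed minimal set."""
    needed = MINIMAL_FIELDS.get(purpose, set())
    return sorted(set(requested) - needed)

# A hypothetical medication-reminder app that also wants your name,
# date of birth, and home address -- each flagged for extra scrutiny.
flagged = excessive_fields(
    "medication reminder",
    ["medication name", "schedule", "name", "date of birth", "address"],
)
print(flagged)  # -> ['address', 'date of birth', 'name']
```

A flagged field isn't automatically a problem; it's simply a prompt to ask whether the app explains why it needs the data and how it protects it.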

A central tenet of digital health is empowering individuals, but we often talk about that only in the context of new devices or apps. Empowering patients & clinicians needs to go a step further: not just providing them with new tools, but also providing them with the resources and information they need to pick the right ones.