mHealth apps have been in the news recently, but not exactly favourably. A team of researchers at University College London, headed by Kit Huckvale, found that many of the apps in the carefully curated NHS Health App Library have clear privacy and security gaps.
This is not an isolated finding. Earlier this year, Konstantin Knorr of the University of Applied Sciences, Trier, Germany, David Aspinall, University of Edinburgh, and I published a paper where we examined the privacy and security of the 154 most popular diabetes and hypertension apps for Android.
We focused on these two conditions because they require regular monitoring, can be tracked through a single parameter (diabetes: blood glucose; hypertension: blood pressure), and are very common. What is more, self-monitoring blood pressure at home can help people lower it, provided the monitoring is linked with appropriate clinical support.
Monitoring your own blood pressure works – but can apps deliver the security you need?
In order to assess privacy and security aspects, Konstantin and David developed a comprehensive methodology that examined key aspects of an mHealth app:
- How safe is the code? Questions included: Does the app include ads, which may allow ad companies to see the data being tracked? What permissions does the app require?
- What happens when the app is used? Questions included: Does the app accept invalid input? Does the app export data safely? What about log files and backups?
- What happens when the app contacts its dedicated web server? Questions included: Are data encrypted? Are strong passwords encouraged?
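One of the dynamic checks above – whether the app accepts invalid input – can be sketched as a simple plausibility test on readings before they are stored. The ranges below are illustrative assumptions, not the ones used in the paper:

```python
def valid_reading(systolic: int, diastolic: int) -> bool:
    """Reject implausible blood pressure input (ranges are illustrative)."""
    return (50 <= systolic <= 300
            and 30 <= diastolic <= 200
            and systolic > diastolic)

print(valid_reading(120, 80))  # a typical reading passes
print(valid_reading(-5, 80))   # negative input should be rejected
```

An app that happily stores a negative or absurdly high reading has skipped even this basic level of input validation – one of the things the methodology probes for.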
Needless to say, none of the findings were particularly encouraging.
You can find the full data set at http://tinyurl.com/mhealthapps.
In this initial post, I would like to highlight three key results:
74 of the 154 apps use advertising add-ons, and 27 use analytics that track how people use the app. Both types of add-on transfer data out, and these data can include identifiers such as the user’s Google Account ID, and might even contain some of the user’s health data – not necessarily encrypted. Analytics are not necessarily evil; they are an important way for app developers to get feedback on their app and find out whether people actually use it regularly. The question is what data are transmitted, and how.
All apps need a certificate before they can be installed on a mobile phone, whether iOS or Android, to indicate who is responsible for the app. While there are external certificate providers, all of the apps we tested had self-signed certificates, and only 40 provided more information than the name of the developer. 83% signed their certificate with the weak SHA-1 hash algorithm; only 17% used the safer option, SHA-256.
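The gap between the two hash functions shows up even in their digest sizes, and SHA-1 collisions are now practical, which is what makes signatures over it unsafe. A quick sketch using Python's standard hashlib (the byte string below is a stand-in, not a real certificate):

```python
import hashlib

# stand-in for real DER-encoded certificate bytes
cert_der = b"example certificate contents"

sha1_digest = hashlib.sha1(cert_der).digest()
sha256_digest = hashlib.sha256(cert_der).digest()

# SHA-1 produces a 160-bit digest, SHA-256 a 256-bit one
print(len(sha1_digest) * 8)    # 160
print(len(sha256_digest) * 8)  # 256
```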
Apps that store data on a dedicated web server are not necessarily safe, either. 20 of the apps we tested used their own web portal, and a third of these transferred medical data to it without encryption. Almost none of the apps that allowed people to send email reports to family, carers, and health professionals encrypted the content of the emails, which makes them easy targets for people who happen to be listening in via the Internet connection at your favourite coffee shop.
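A first-pass check for the plaintext-transfer problem is simply whether an endpoint observed in the app's traffic uses HTTPS at all. A minimal sketch (the URLs are hypothetical):

```python
from urllib.parse import urlparse

def uses_plaintext(url: str) -> bool:
    """Flag endpoints that do not use HTTPS for transport encryption."""
    return urlparse(url).scheme.lower() != "https"

# hypothetical endpoints of the kind seen in captured app traffic
print(uses_plaintext("http://portal.example.com/upload_readings"))   # True: unencrypted
print(uses_plaintext("https://portal.example.com/upload_readings"))  # False
```

Of course HTTPS alone is not sufficient – the app must also validate the server's certificate – but its absence is an immediate red flag.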
Does it Matter?
That depends on whether you want to share the details of your blood pressure or blood glucose readings with the world or not. Note I said “world” – not a carefully selected list of friends, family, and health care professionals. This world may use your data for purposes that you haven’t thought of, from personalized in-app advertising to selling your data to insurance providers.
Many research centres, including my colleagues at the University of Edinburgh Centre for Advanced Studies in Cybersecurity and Privacy, are currently looking at ways of changing this situation.
If you were to use an app to log an aspect of your health and wellbeing, how secure should it be? Who should be able to access your data?