I’ve written a number of times about the growing importance of data in healthcare, and especially of the need to liberate data so it’s more effectively shared, not only between patient and doctor, but also with the research community.
It hopefully goes without saying that as data becomes more liberalized, security is crucial, both to ensure that crooks don’t prosper and that patients remain confident their data is treated properly.
That was certainly one of the conclusions of a recent EU paper on the policy implications of using data in healthcare. Alas, a recent study from Michigan State University suggests the health industry has some way to go before we can be confident that our data is being treated properly.
The study found around 1,800 large data breaches in patient information over a seven-year period in the United States alone.
“Our findings underscore the critical need for increased data protection in the health care industry,” the authors say. “While the law requires health care professionals and systems to cross-share patient data, the more people who can access data, the less secure it is.”
The data for the study came from the Department of Health and Human Services. Hospitals are duty-bound by the Health Insurance Portability and Accountability Act, or HIPAA, to notify HHS of any serious data breach that affects 500 or more individuals.
Alongside the number of breaches, a few other interesting findings emerged. For instance, only two-thirds of breaches were reported by the hospitals themselves, with business associates, health plans, and healthcare clearinghouses reporting the rest.
What’s more, 33 hospitals reported more than one breach during that time, the majority of them large teaching hospitals.
It underlines the challenges the industry faces, and perhaps unsurprisingly there have been a few attempts to improve matters in recent months.
Secure health data
For instance, I wrote last year about a study from researchers at Radboud University, which suggested a better approach to data security.
The researchers developed a technique they refer to as Polymorphic Encryption and Pseudonymisation (PEP), which keeps data secure by pseudonymising and encrypting it, so that it can’t be read even by the people who store it.
What’s more, access to the data is regulated and monitored, so the team believe it’s possible to analyze data whilst ensuring that patient privacy is maintained.
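To make the pseudonymisation half of that idea concrete, here’s a minimal sketch of keyed, domain-specific pseudonyms. The real PEP scheme from Radboud additionally layers polymorphic (re-randomisable) encryption on top, which this toy doesn’t attempt; the keys and identifiers below are purely illustrative.

```python
# Hedged sketch: keyed pseudonymisation, one ingredient of a PEP-style system.
# A stable pseudonym replaces the patient identifier, so the party storing
# the data never sees the original ID. All key names are illustrative.
import hmac
import hashlib

def pseudonymise(patient_id: str, domain_key: bytes) -> str:
    """Derive a stable, domain-specific pseudonym via HMAC-SHA256."""
    return hmac.new(domain_key, patient_id.encode(), hashlib.sha256).hexdigest()

# Different research domains get different keys, so pseudonyms can't be
# linked across domains without the key holder's cooperation.
key_study_a = b"key-for-parkinsons-study"
key_study_b = b"key-for-other-study"

p1 = pseudonymise("patient-123", key_study_a)
p2 = pseudonymise("patient-123", key_study_b)
assert p1 != p2                                        # unlinkable across domains
assert p1 == pseudonymise("patient-123", key_study_a)  # but stable within one
```

The point of the per-domain key is exactly the property PEP aims for: researchers in one study can follow the same patient over time, without being able to join their records against another study’s data.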
Its first outing is a 650-person Parkinson’s study that will see patients monitored over a two-year period via wearable devices. The PEP method will allow the data, collected in the Netherlands, to then be shared with researchers around the world.
“In the context of international medical research, personal information is worth its weight in gold. So it’s important for the government to invest in an infrastructure that guarantees the protection of this information,” the researchers say. “Especially to ensure that people will remain willing to participate in future studies of this sort.”
Another approach was unveiled by Google DeepMind recently, with the launch of a ledger technology designed for health data.
The aim is to automatically record every interaction with patient data in a secure manner, much as a blockchain records every transaction made against it.
“[An] entry will record the fact that a particular piece of data has been used, and also the reason why, for example, that blood test data was checked against the NHS national algorithm to detect possible acute kidney injury,” DeepMind say.
Whilst the technology used by DeepMind has many similarities to blockchain, they believe their approach is more efficient, as participants aren’t required to expend vast quantities of energy to keep the integrity of the ledger strong.
They go on to suggest that health-related data doesn’t need to be decentralized, since most of it is currently stored by a relatively small number of healthcare providers and data processors, which makes the distributed nature of blockchain rather redundant. To provide the audit trail, the company instead use a Merkle tree, a structure that makes it efficient to verify the full history of entries.
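The Merkle-tree idea can be sketched in a few lines. This is a generic illustration of the data structure, not DeepMind’s actual implementation: each data access appends an entry recording what was touched and why, and the root hash commits to the whole history, so silently altering a past entry changes the root. The entry strings are made up for the example.

```python
# Hedged sketch of a Merkle-tree audit trail: the root hash commits to every
# logged access, so tampering with any past entry is detectable.
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest, used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes pairwise up to a single root hash."""
    if not leaves:
        return h(b"")
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Illustrative audit entries: what was accessed, and why (as in DeepMind's
# blood-test example).
entries = [
    b"read blood-test-42 reason=aki-detection",
    b"read blood-test-43 reason=aki-detection",
]
root = merkle_root([h(e) for e in entries])

# Rewriting history produces a different root, exposing the tampering.
tampered = [b"read blood-test-42 reason=marketing", entries[1]]
assert merkle_root([h(e) for e in tampered]) != root
```

This is also why no proof-of-work is needed here: with a small, known set of data processors, integrity comes from the hash structure itself rather than from a decentralized consensus mechanism.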
Technically, that’s superb, but what is less clear is who will actually own the data that’s recorded, or who will have overall control of it. DeepMind make the right noises about sound data management in the pledge they ask each member of their independent oversight committee to sign.
“Patients need to be certain that all their health data is handled with the utmost care and respect, and that their privacy and security are protected at all times. We have strived, and will always strive, to hold ourselves to the highest possible standards of patient data protection and we’re clear about what this looks like,” they say.
But the pledge omits any mention of ownership and access to the data. Indeed, the announcement pointedly says that patients may have direct oversight over their data, but falls some way short of saying they will have ownership of their data.
There is undoubtedly a huge need for data to be better managed within the healthcare industry, both in terms of better synchronizing the various forms of patient data that are being generated, but also the responsible sharing of that data with researchers and clinicians.
Having an industry-wide standard and process for data governance may help to achieve that, but having one company appear to take such a central role does ring alarm bells, especially as the company have been so quiet around the work of their AI ethics committee.