The digital transformation of medicine has been rapid and is reminiscent of Huxley's "Brave New World": the digital health card and electronic health record, data donations for cancer research, the myriad health apps, digital prostheses that communicate with nerve tracts, even medication that can interface with our smartphones. One thing is certain: the medical field is being transformed. But just how secure are these systems? What risk do hackers pose?
In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and IT expert Sebastian Welke talk about information security in medicine, its opportunities and the risks involved.
Medical data relating to a person ("health data") is a particularly sensitive category of data, subject to increased data protection and information security requirements under the European General Data Protection Regulation (GDPR). What is the nature of these requirements?
Sebastian Welke: The GDPR defines “health data” as personal data relating to the physical or mental health of a natural person, including the provision of health services, and revealing information about his or her health status.
Recital 35 of the GDPR helps with the interpretation. It states that information on the past, present and future health status of a person is to be covered; for example, information derived from the examination or testing of a body part or bodily substance, including from genetic data and biological samples, and information about, for instance, diseases, disabilities, risks of disease, pre-existing conditions, clinical treatments or the physiological or biomedical condition of the data subject, regardless of the origin of the data, whether it comes from a doctor or other health professional, a hospital, a medical device or an in vitro diagnostic device.
A few borderline issues are currently being debated among legal experts, such as the question of whether a passport photo constitutes health data because a pair of glasses visible in the photo suggests defective vision. Personally, I find that this interpretation goes too far. Nevertheless, I recommend that, in practice, the concept of "health data" be interpreted broadly as a precaution.
But now let’s be practical: My health data is any data that can somehow provide information about my health status at any point in my life. It is all the information that my family doctor enters into his digital patient file when I consult him about a flu-like infection. It’s the data that an MRI generates when I have an accident and a cervical spine injury needs to be ruled out. It’s the data I generate myself by wearing a smartwatch or fitness tracker.
The exchange of medical information is of great interest. Medical practices, hospitals, nursing homes, medical research institutions, emergency services, health and life insurers and, of course, the patients themselves want to exchange test results and other information which might be pertinent to a patient's health. Which organisational and technical measures must be observed so that the exchange, perhaps via a digital health record or a digital medical record, can be secure? Are these measures being taken into account and implemented successfully?
That is a tough question! The success of digital data exchange in the healthcare sector depends first and foremost on the consent of all those involved and, in particular, on their trust in security.
But let’s start at the beginning. As the controller of personal data, I must ensure that the data has not been manipulated or even destroyed, lost or disclosed to third parties. It makes no difference whether this happens accidentally, for example, during a failed system update of the practice software, or, unlawfully, such as through hacker attacks. To avoid such incidents, I have to set up technical and organisational measures. Technical measures include the use of encryption technologies or regular data backups. Examples of organisational measures include obliging employees to maintain data secrecy and regular training measures so that they recognise suspicious phishing mails and react accordingly.
A very important technical measure for the secure exchange of data between the “players” you mentioned in the German healthcare system is the “telematics infrastructure” (TI). On the one hand, the TI ensures that data is exchanged in encrypted form and thus cannot be intercepted and read by unauthorised persons “in transit”. On the other hand, the TI also ensures, among other things, that participants are authenticated, i.e., that a communication partner is really the participant he or she claims to be. Within the TI, participants can access certain services, such as an electronic patient file.
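The authentication idea behind the TI can be illustrated in miniature. The following is a simplified sketch only, not the actual TI protocols (the real infrastructure uses smartcard-based PKI certificates, not a shared secret): a message tag lets the receiver verify both the sender's identity and the message's integrity.

```python
import hashlib
import hmac
import secrets

# Illustration only: a shared key stands in for the credentials
# both communication partners hold in a real PKI-based system.
shared_key = secrets.token_bytes(32)

def sign_message(key: bytes, message: bytes) -> bytes:
    """Attach an HMAC tag so the receiver can check origin and integrity."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

report = b"Befund: unauffaellig"
tag = sign_message(shared_key, report)

assert verify_message(shared_key, report, tag)           # authentic and intact
assert not verify_message(shared_key, b"tampered", tag)  # manipulation detected
```

A third party without the key can neither read an additionally encrypted message "in transit" nor forge a valid tag, which is the property the TI provides at scale.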
Authentication of communication partners and encrypted communication are necessary and important, but they do not go far enough, because they only consider the data exchange itself. The infrastructures at the participants’ sites, i.e., in the clinics, homes, practices and at the insurers, must also be secured. Furthermore, the level of protection should ideally be the same for all participants in the healthcare information network, so that there is no “weakest link in the chain” that then becomes the preferred target of possible attackers.
My experience has shown that large statutory health insurers regularly use internal IT service providers that have information security management systems (ISMS) with a high level of sophistication. Similar structures are often already in place at large hospital groups. However, the smaller the organisations become, the more often incomplete information security structures are found. In smaller hospitals, there is often a lack of resources, such as qualified staff and, last but not least, money. This does not necessarily mean that the IT infrastructures there are insecure and vulnerable, but information security is less often practiced there according to an “orderly” and documented process. This may be an indication that one or the other gap in the infrastructure is being overlooked.
A lot will happen here in the next few years because, in view of increasing digitisation in healthcare, hospitals have been obligated by the German Social Code (SGB V), among other things, to take appropriate protective measures for IT security as of Jan. 1, 2022. We should also not forget that the Hospital Future Act makes the approval of subsidies for digitisation measures dependent on the existence of an ISMS, in accordance with the motto "promote and demand", and also holds out the prospect of subsidies for its introduction. I assume that many hospitals will make use of this. For hospitals, the so-called "Industry-Specific Security Standard for Healthcare in Hospitals" (abbreviated to "B3S") provides good guidance on how to introduce an ISMS in a hospital environment. In my opinion, the IT-Grundschutz Compendium of the German Federal Office for Information Security (BSI) is also very helpful. In a clearly structured, modular form, it describes which measures need to be taken to secure the IT infrastructure and keep it secure on an ongoing basis, starting at the organisational level and going down to the infrastructure. Alternatively, the ISO/IEC 2700x family of standards can be used as a guide. It works in a very similar way but provides somewhat less concrete guidance, allowing, however, more degrees of freedom.
In medical practices, the picture is similarly heterogeneous. In the past, the quality of information security there often depended on the technical abilities and corresponding risk awareness of the practice owners in question. The German Association of Statutory Health Insurance Physicians (Kassenärztliche Bundesvereinigung), on behalf of the legislature and in cooperation with the BSI, has since published an IT security guideline that is binding for all physicians in private practice and also provides good orientation for the systematic and continuous safeguarding of the IT infrastructure in practices.
In the end, the focus should still be on the patients, and here I have a somewhat differentiated view. From the standpoint of data protection, they are the “data subjects” whose data must be protected. On the other hand, they should also be the biggest beneficiaries of digitisation in the healthcare sector because ultimately it is always about the quality of their medical care. Be it concretely in the case of an acute disease or even an emergency case or also abstractly when a research institution evaluates anonymised data in order to research new drugs or therapies that will help heal people again in the long term, digitisation can be of great assistance in all of these scenarios.
In this context, I am occasionally surprised at how carelessly some people communicate on social networks, but at the same time express the greatest concerns regarding their data when visiting a doctor. A couple of security incidents that have been reported in the press have certainly helped create some uncertainty. There will probably always be such cases. They will only become less frequent and hopefully less serious as information security becomes more developed in the healthcare sector. And so, hopefully, patient confidence will also increase and acceptance of digitisation in healthcare will advance. In any case, I myself registered for the electronic patient file with my own health insurance company with great interest and was somewhat amazed at all the information I was able to read about myself there. With regard to one doctor, there was even a suspicion of billing fraud, which is currently being investigated. But that’s another story…
Many hospitals are using more and more digital examination devices, which usually provide an evaluation via algorithms and big data. The press recently reported that there had been hacker attacks in hospitals in Munich and Düsseldorf and that hospitals had even been shut down. What exactly happened there? Is the medical field prepared for hacker attacks? What is actively being done and are there still challenges which need to be faced?
I read about those attacks. In both cases, they were caused by ransomware, a form of digital blackmail. It works by infiltrating malware that spreads through a computer network and specifically exploits technical vulnerabilities on systems to encrypt the data stored there so that it can no longer be used. This is usually followed by an e-mail with a ransom demand to be paid in a cryptocurrency, such as Bitcoin. If you agree to the demand and pay, at best you will receive a key to decrypt the encrypted data again. In Düsseldorf, the hackers intended to attack Heinrich Heine University, but mistakenly attacked the university hospital attached to it instead. It is reported that when they realised their mistake, they voluntarily handed over the key to decrypt the data. I don't want to be cynical, but there obviously seems to be an ethical code even among this type of criminal. Nevertheless, it still took several hours before the hospital was fully operational. According to press reports, an emergency case even had to be turned away and the patient had to be taken to another hospital, where s/he later died. Whether s/he could have been saved if the university hospital had been operational and able to admit him or her must remain speculation. But in any case, this is a truly tragic situation and it highlights the extent of our vulnerability.
Ransomware attackers specifically exploit the human factor in addition to the technical vulnerabilities of the systems they encrypt. Often, the malware enters the network via an email attachment. The fictitious pretexts presented in the email to entice the recipient to open the attachment, and thus launch the malware, are becoming increasingly sophisticated. Only recently, at a presentation at the BSI's 18th German IT Security Congress, I heard that hackers are now pulling out all the psychological stops to get mail recipients to double-click on the file attachment. There are a couple of measures which can help here. Firstly, well-planned and well-implemented training courses for the wider workforce to raise awareness of the dangers and tricks of hackers. In addition, the systems in the hospital networks must be kept as up-to-date as possible with security updates. At this point, one should keep in mind that once a security update has been publicly released, the hackers also know about the vulnerabilities and specifically try to exploit them on systems which haven't yet been updated. Perhaps you have heard or read about the so-called "Hafnium Exploit", which affected thousands of Microsoft Exchange based mail servers not that long ago. Moreover, there need to be emergency plans for business continuity, which have to be rehearsed under realistic conditions as well as validated and refined on a regular basis. Finally, if you have implemented a good backup strategy and the amount of data that has changed between the last backup and the attack (the loss) isn't too big, you should consider restoring the data instead of paying the ransom. ("We do not negotiate with blackmailers!")
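The restore-or-pay decision at the end can be framed as a simple recovery-point check. The following is a hypothetical decision aid (the function name and threshold are illustrative assumptions, not a standard): restore from backup if the data lost since the last backup stays within what the organisation can tolerate.

```python
from datetime import datetime, timedelta

def can_restore(last_backup: datetime, attack_time: datetime,
                tolerated_loss: timedelta) -> bool:
    """Restore is viable if the gap between the last good backup and the
    attack (the effective data loss) is within the tolerated window."""
    return (attack_time - last_backup) <= tolerated_loss

last_backup = datetime(2021, 9, 10, 2, 0)    # nightly backup completed
attack_time = datetime(2021, 9, 10, 14, 30)  # ransomware detected

# 12.5 hours of data loss: acceptable against a 24-hour tolerance,
# not against a 6-hour one.
assert can_restore(last_backup, attack_time, timedelta(hours=24))
assert not can_restore(last_backup, attack_time, timedelta(hours=6))
```

In practice, the tolerated window (the "recovery point objective") is set per system in the business continuity plan, which is exactly why such plans need to exist and be rehearsed before an attack happens.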
There should be a new form of donation. We know of blood and organ donations but data donations for research projects would also be valuable. I assume that data donations can only be made with the consent of the patient. How is the revocation – i.e., the deletion – handled if a person revokes their former consent for whatever reason? Will research then come to a standstill?
The best example, which I also participate in, is the Corona data donation app from the Robert Koch Institute. There, as a user, you have to scroll through pages and pages of privacy notices before you can check a box and finally agree to the processing of your personal data. That's not particularly user friendly, but it is GDPR-compliant. I think I belong to the one per cent of users who actually read the privacy notices. An occupational hazard…
The app doesn’t even know my real name. Instead, it assigns me a 64-character, random, unique pseudonym. This allows me to assert my data subject rights. If I withdraw my consent and request that all my data be deleted, all data associated with this pseudonym (postal code, height, weight, gender, age and data about my vital signs from the smartwatch) will be deleted. The data will only flow into the research project in anonymised form. This means that the pseudonym is removed so that it is no longer possible to draw a clear conclusion about the person. As a result, my data can no longer be removed from the studies because they can no longer be traced unambiguously. Apart from that, it makes little sense if studies, once published, have to be continuously corrected because test persons have requested that their data be deleted. In addition, research institutions are generally not interested in an individual data set of a single person if their focus is on broad studies, but rather need a sample which is as large as possible. Over time, one or two participants drop out and their data must be excluded in the future, but new participants join. In this respect, I don’t think that research comes to a standstill when individuals revoke their consent under data protection law.
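The pseudonym mechanism described here can be sketched in a few lines. This is a minimal illustration, not the RKI app's actual implementation; the store and field names are assumptions. The key point is the difference between pseudonymised data (deletable via the pseudonym) and anonymised data (no link back, so no longer deletable).

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def new_pseudonym(length: int = 64) -> str:
    """Generate a random 64-character pseudonym, as described for the app."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Donated records are keyed only by the pseudonym, never by a real name.
store: dict[str, dict] = {}

pid = new_pseudonym()
store[pid] = {"postal_code": "20095", "height_cm": 183, "resting_hr": 58}

def withdraw(pseudonym: str) -> None:
    """Withdrawal of consent: delete everything linked to the pseudonym."""
    store.pop(pseudonym, None)

# Anonymisation for research drops the pseudonym key entirely; after this,
# individual records can no longer be traced and hence no longer removed.
anonymised = [dict(values) for values in store.values()]

withdraw(pid)
assert pid not in store       # pseudonymised data removed on request
assert len(anonymised) == 1   # anonymised copy retains no link back
```

This one-way step, removing the key while keeping the values, is why published studies need not be corrected when a participant later revokes consent.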
Can medical data be anonymised prior to a research project? Or, expressed differently, is a person so distinctive that they can still be identified even after the personal reference has been removed because of the unique nature of their medical history or their biometric data – especially if they suffer from a rare disease? From other areas it is known that 3 to 4 characteristics of non-personal data – such as the manufacturer of the smartphone, the storage capacity and the charging times – are sufficient to identify a person. Is anonymisation in the medical environment an illusion?
Yes, I think so. Unfortunately! Let’s take the Corona data donation app example again. There aren’t that many men my age living in my postal code area who also happen to have my height, my weight, and an elevated resting heart rate. Personally, though, I have no concerns about shenanigans with my data at the RKI. As I said, I have read the data privacy notice.
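How few attributes it takes to single someone out can be demonstrated on a toy dataset. The records below are fabricated for illustration; the check shown is the core idea of k-anonymity: a record whose combination of quasi-identifiers is unique in the dataset (k = 1) is re-identifiable.

```python
from collections import Counter

# Fabricated sample: postal code, age band, height and resting heart rate
# serve as quasi-identifiers, mirroring the data-donation example.
records = [
    {"plz": "40213", "age": "40-49", "height": 178, "hr": "normal"},
    {"plz": "40213", "age": "40-49", "height": 178, "hr": "normal"},
    {"plz": "40213", "age": "40-49", "height": 183, "hr": "elevated"},
    {"plz": "40215", "age": "30-39", "height": 183, "hr": "normal"},
]

def quasi_id(record: dict) -> tuple:
    """Combine the quasi-identifying attributes into one key."""
    return (record["plz"], record["age"], record["height"], record["hr"])

counts = Counter(quasi_id(r) for r in records)

# Records sharing their combination with others are protected (k >= 2);
# records with a unique combination (k = 1) can be singled out.
re_identifiable = [r for r in records if counts[quasi_id(r)] == 1]
assert len(re_identifiable) == 2
```

The first two records "hide" behind each other, while the other two stand alone, and with a rare disease or distinctive biometrics in the mix, almost every real record stands alone.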
In theory, however, I think that we need to think about this question more from the perspective of the result. After all, it is admirable that we are able to process large amounts of data in a way that allows us to recognise patterns, derive hypotheses, falsify them and ultimately develop revolutionary forms of diagnosis and therapy. Just take the mRNA method, which brought us vaccines for a previously unknown virus in a historically unprecedented time frame in the Corona pandemic. I think the original research idea behind mRNA is even more exciting. The mRNA is supposed to “train” the immune system of a patient suffering from cancer to find and destroy the cancer cells. I find that kind of approach fascinating.
So now, if personal data is to be processed for research projects like this, there are a few principles in the GDPR that, if followed, can build a lot of trust with the subjects. First of all, there is once again the purpose limitation principle which has already been mentioned. As a research institution, I should think carefully about what the purpose of the processing is and inform the subjects as precisely as possible about what I am planning to do with their data throughout the data lifecycle. Next, consider the principles of data minimisation and data economy. If, for example, I do not need the entire human genome for a research project, but only a sequence, then I only process this sequence and discard the rest. Furthermore, I may process this data only as long as it is absolutely necessary. As a research institution, I must be clear that I really see this data donation as such and, in return, take all possible and reasonable measures to protect this “gift”. If I cannot guarantee the security of the data, I must either take further measures or inform the data subjects quite openly and honestly about the residual risks (transparency principle). It is then up to the individuals to decide whether to take the risk and donate their data or not.
There are more and more medical devices or medicines that a patient absorbs into the body by swallowing. Be it a camera that takes pictures/videos of the swallowing, eating and digestive system or the blood pressure pill that reports if the blood pressure is not at the desired level. All of these devices/drugs communicate with computers or smartphones outside of the body. How are these protected against hacker attacks? Is it possible for a hacker to create malware that disrupts the human circulatory system? In other words, could a hacker get hold of a patient’s body?
To be honest, the increasing miniaturisation and the accompanying digitisation “into the body” still seems rather strange to me. Your examples of a camera and a blood pressure probe describe sensors, i.e., elements that supply information from their environment. There are already tablets with tiny built-in sensors that emit an electrical pulse as soon as they come into contact with stomach acid. The pulse is registered by a special patch and forwarded via NFC technology (near field communication) to the patient’s cell phone. From there, a message about the ingestion is sent to the attending physician. Then again, there are advanced ideas about nanoparticles of iron oxide (rust!) that attach themselves to tumour cells in a very targeted way. Once attached to the tumour cell, they generate heat when the human body is exposed to a magnetic field and thus specifically burn the cancer cell. So, anything is possible in theory.
Let’s leave miniaturisation aside for the moment and think of an insulin pump. In this case, it is quite conceivable that someone could manipulate the pump with the intention of killing so that insulin is overdosed and the diabetic patient dies. Whether the criminal does this via a manipulated app or “only” by deliberately changing the electronics is completely irrelevant. We just have to be aware that as soon as there is the possibility of a cell phone establishing a connection to the pump, there is another attack vector. Incidentally, this new attack vector could also enable a criminal to act covertly. The victim might not even notice the attack, and subsequent prosecution would also be considerably more difficult. So, from a risk management perspective, this new attack vector is a game-changer, which must be secured. At the current state of research, I would therefore strongly advise using the cell phone only for displaying and transferring measured values, but not for initiating the application of substances or the activation of already applied ones. For the latter, I would only (!) rely on closed environments, which are under total control of their respective manufacturers. Or would you want “Angry Birds” to take control of your blood sugar level?
If we now add progressive miniaturisation to the equation, it’s hard to imagine what future developments will bring.
My opinion is:
Sebastian Welke
“Some consultants thrive on spreading fear among the clueless.
I would rather spread clues among the fearless”
Drugs and medical devices require special approvals from authorities (e.g., the EMA) before they can be placed on the market. Do these authorities also check information security? Are special information security requirements necessary for product approval in the medical field? If so, what are they?
This topic is becoming increasingly important globally, and the EU has created the legal framework for it with the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR). The respective annexes to the regulations already contain basic requirements and principles for the cyber security of medical devices. In order to specify these, the MDCG (Medical Device Coordination Group), a group consisting of expert representatives of the member states, has published a guideline: the "Guidance on Cybersecurity for medical devices". At its core, a risk assessment is applied over the entire life cycle of the product. This assessment evaluates which threats might exploit, alone or in combination, which possible weaknesses of the product. Here, the reasonably foreseeable possibilities of misuse and also the risk of unauthorised access are to be explicitly considered in order to counter these scenarios with effective measures. The risk assessment does not end with the product release but must be carried out on an ongoing basis. If necessary, a software update must be issued to counter new threats and the associated risks. Software testing, including so-called penetration tests in which "good hackers" attempt to identify and exploit device vulnerabilities, is also of central importance here. These findings are valuable for the (further) development of products. New products that do not comply with these principles do not receive a CE mark and may not be distributed.
Mr Welke, thank you for sharing your reflections on information security in medicine.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognized experts, delving even deeper into this fascinating topic.