How secure are the digital devices used in medicine?

Sebastian Welke

The digital development in medicine has been rapid and is reminiscent of Huxley’s “Brave New World”. Be it the digital health card or file, data donations for cancer research, the various health apps, the many digital prostheses that communicate with nerve tracts or even medication that can be interfaced with our smartphone. One thing is certain: the medical field is being transformed. But just how secure are these systems? What risk do hackers pose?

In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and IT expert Sebastian Welke talk about information security in medicine, its opportunities and the risks involved.

Medical data related to a person (“health data”) is a particularly sensitive type of data which, under the European General Data Protection Regulation (GDPR), is subject to increased demands on data protection and information security. What is the nature of these requirements?

Sebastian Welke: The GDPR defines “health data” as personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status.

Recital 35 of the GDPR helps with the interpretation. It states that information on the past, present and future health status of a person is to be covered; for example, information derived from the examination or testing of a body part or bodily substance, including from genetic data and biological samples, and information about, for instance, diseases, disabilities, risks of disease, pre-existing conditions, clinical treatments or the physiological or biomedical condition of the data subject, regardless of the origin of the data, whether it comes from a doctor or other health professional, a hospital, a medical device or an in vitro diagnostic device.

A few borderline issues are currently being debated among legal experts, such as the question of whether a passport photo constitutes health data because a pair of glasses visible in the photo suggests defective vision. Personally, this interpretation goes too far for me. Nevertheless, I recommend that, in practice, the concept of “health data” be interpreted broadly as a precaution.

But now let’s be practical: my health data is any data that can somehow provide information about my health status at any point in my life. It is all the information that my family doctor enters into his digital patient file when I consult him about a flu-like infection. It’s the data that an MRI generates when I have an accident and a cervical spine injury needs to be ruled out. It’s the data I generate myself by wearing a smartwatch or fitness tracker.

The exchange of medical information is of great interest. Medical practices, hospitals, nursing homes, medical research institutions, emergency services, health and life insurers and, of course, the patients themselves want to exchange test results and other information which might be pertinent to their own or their patients’ health. Which organisational and technical measures must be observed so that the exchange, perhaps via a digital health record or a digital medical record, can be secure? Are these being taken into account and implemented successfully?

That is a tough question! The success of digital data exchange in the healthcare sector depends first and foremost on the consent of all those involved and, in particular, on their trust in security.

But let’s start at the beginning. As the controller of personal data, I must ensure that the data has not been manipulated or even destroyed, lost or disclosed to third parties. It makes no difference whether this happens accidentally, for example, during a failed system update of the practice software, or unlawfully, such as through hacker attacks. To avoid such incidents, I have to set up technical and organisational measures. Technical measures include the use of encryption technologies or regular data backups. Examples of organisational measures include obliging employees to maintain data secrecy and regular training so that they recognise suspicious phishing emails and react accordingly.
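To make one such technical measure concrete: a simple building block for protecting stored records against undetected manipulation is an integrity tag. The sketch below is a hypothetical illustration using Python’s standard library (the key handling and record format are invented); it shows how an HMAC makes tampering detectable:

```python
import hashlib
import hmac
import secrets

# Hypothetical secret key; in practice this would live in a key store or HSM,
# not next to the data it protects.
KEY = secrets.token_bytes(32)

def seal(record: bytes) -> bytes:
    """Compute an integrity tag (HMAC-SHA256) for a stored record."""
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Return True only if the record is unchanged since it was sealed."""
    return hmac.compare_digest(seal(record), tag)

record = b"patient=123;diagnosis=influenza"  # invented example record
tag = seal(record)
```

Real practice software would combine this with encryption at rest and proper key management; the point here is only that manipulation becomes detectable.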

A very important technical measure for the secure exchange of data between the “players” you mentioned in the German healthcare system is the “telematics infrastructure” (TI). On the one hand, the TI ensures that data is exchanged in encrypted form and thus cannot be intercepted and read by unauthorised persons “in transit”. On the other hand, the TI also ensures, among other things, that participants are authenticated, i.e., that a communication partner is really the participant he or she claims to be. Within the TI, participants can access certain services, such as an electronic patient file.

Authentication of communication partners and encrypted communication are necessary and important, but they do not go far enough, because they only consider the data exchange itself. The infrastructures at the participants’ sites, i.e., in the clinics, homes, practices and at the insurers, must also be secured. Furthermore, the level of protection should ideally be the same for all participants in the healthcare information network, so that there is no “weakest link in the chain” that then becomes the preferred target of possible attackers.

My experience has shown that large statutory health insurers regularly use internal IT service providers that have information security management systems (ISMS) with a high level of sophistication. Similar structures are often already in place at large hospital groups. However, the smaller the organisations become, the more often incomplete information security structures are found. In smaller hospitals, there is often a lack of resources, such as qualified staff and, last but not least, money. This does not necessarily mean that the IT infrastructures there are insecure and vulnerable, but information security is less often practised there according to an “orderly” and documented process. This may be an indication that one or the other gap in the infrastructure is being overlooked.

A lot will happen here in the next few years because, in view of increasing digitisation in healthcare, hospitals have been obligated by the German Social Code (SGB V), among other things, to take appropriate protective measures for IT security as of 1 January 2022. We should also not forget that the Hospital Future Act makes the approval of subsidies for digitisation measures dependent on the existence of an ISMS, in accordance with the motto “promote and demand”, and also holds out the prospect of subsidies for its introduction. I assume that many hospitals will make use of this. For hospitals, the so-called “Industry-Specific Security Standard for Healthcare in Hospitals” (abbreviated to “B3S”) provides good guidance on how to introduce an ISMS in a hospital environment. In my opinion, the IT-Grundschutz Compendium of the German Federal Office for Information Security (BSI) is also very helpful. Using clearly structured and modular language, it describes which measures need to be taken to secure the IT infrastructure and keep it secure on an ongoing basis, starting at the organisational level and going down to the infrastructure. Alternatively, the ISO 2700x family of standards can be used as a guide. In theory, it works in a very similar way, but provides somewhat less guidance, allowing, however, more degrees of freedom.

In medical practices, the picture is similarly heterogeneous. In the past, the quality of information security there often depended on the technical abilities and corresponding risk awareness of the practice owners in question. The German Association of Statutory Health Insurance Physicians (Kassenärztliche Bundesvereinigung), on behalf of the legislature and in cooperation with the BSI, has since published an IT security guideline that is binding for all physicians in private practice and also provides good orientation for the systematic and continuous safeguarding of the IT infrastructure in practices.

In the end, the focus should still be on the patients, and here I have a somewhat nuanced view. From the standpoint of data protection, they are the “data subjects” whose data must be protected. On the other hand, they should also be the biggest beneficiaries of digitisation in the healthcare sector because ultimately it is always about the quality of their medical care. Be it concretely in the case of an acute disease or an emergency, or abstractly when a research institution evaluates anonymised data in order to research new drugs or therapies that will help heal people in the long term, digitisation can be of great assistance in all of these scenarios.

In this context, I am occasionally surprised at how carelessly some people communicate on social networks, but at the same time express the greatest concerns regarding their data when visiting a doctor. A couple of security incidents that have been reported in the press have certainly helped create some uncertainty. There will probably always be such cases. They will only become less frequent and hopefully less serious as information security becomes more developed in the healthcare sector. And so, hopefully, patient confidence will also increase and acceptance of digitisation in healthcare will advance. In any case, I myself registered for the electronic patient file with my own health insurance company with great interest and was somewhat amazed at all the information I was able to read about myself there. With regard to one doctor, there was even a suspicion of billing fraud, which is currently being investigated. But that’s another story…

Many hospitals are using more and more digital examination devices, which usually provide an evaluation via algorithms and big data. The press recently reported that there had been hacker attacks on hospitals in Munich and Düsseldorf and that hospitals had even been shut down. What exactly happened there? Is the medical field prepared for hacker attacks? What is actively being done and are there still challenges which need to be faced?

I read about those attacks. In both cases, they were caused by ransomware, a form of digital blackmail. It works by infiltrating malware that spreads through a computer network and specifically exploits technical vulnerabilities on systems to encrypt the data stored there so that it can no longer be used. This is usually followed by an e-mail with a ransom demand to be paid in an internet currency, such as Bitcoin. If you agree to the demand and pay, at best you will receive a key to decrypt the encrypted data again. In Düsseldorf, the hackers intended to attack Heinrich Heine University, but mistakenly attacked the university hospital attached to it instead. It is reported that when they realised their mistake, they voluntarily handed over the key to decrypt the data. I don’t want to be cynical, but, obviously, there seems to be an ethical code even among this type of criminal. Nevertheless, it still took a considerable time before the hospital was fully operational. According to press reports, an emergency case even had to be turned away and the patient had to be taken to another hospital, where s/he later died. Whether s/he could have been saved if the University Hospital had been operational and able to admit him or her must remain speculation. But in any case, this is a truly tragic situation and highlights the extent of our vulnerability.

Ransomware attackers specifically exploit the human factor in addition to the technical vulnerabilities of the systems they encrypt. Often, the malware enters the network via an email with an attachment. The fictitious pretexts presented in the email to entice the recipient to open the attachment and thus launch the malware are becoming increasingly sophisticated. Only recently, at a presentation at the BSI’s 18th German IT Security Congress, I heard that hackers are now pulling out all the psychological stops to get mail recipients to double-click on the file attachment. There are a couple of measures which can help here. Firstly, well-planned and well-implemented training courses for the wider workforce to raise awareness of the dangers and tricks of hackers. In addition, the systems in the hospital networks must be kept as up to date as possible with security updates. At this point, one should keep in mind that once a security update has been publicly released, the hackers also know about the vulnerabilities and specifically try to exploit them on systems which haven’t yet been updated. Perhaps you have heard or read about the so-called “Hafnium Exploit”, which affected thousands of Microsoft Exchange based mail servers not that long ago. Moreover, there need to be emergency plans for business continuity which have to be rehearsed under close-to-reality conditions, as well as being validated and refined on a regular basis. Finally, if you have implemented a good backup strategy and the amount of data that has changed between the last backup and the attack (the loss) isn’t too big, you should consider restoring the data instead of paying the ransom. (“We do not negotiate with blackmailers!”)
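The last point, restoring from backups rather than paying, only works if the backups themselves can be trusted. A minimal Python sketch (file names and directory layout are invented for illustration) of a backup with a checksum manifest that can be validated before a restore:

```python
import hashlib
import json
import pathlib
import shutil
import tempfile

def snapshot(src: pathlib.Path, dst: pathlib.Path) -> None:
    """Copy every file from src to dst and record SHA-256 checksums in a manifest."""
    dst.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in src.iterdir():
        if f.is_file():
            shutil.copy2(f, dst / f.name)
            manifest[f.name] = hashlib.sha256(f.read_bytes()).hexdigest()
    (dst / "manifest.json").write_text(json.dumps(manifest))

def backup_intact(backup: pathlib.Path) -> bool:
    """Before restoring instead of paying, check the backup still matches its checksums."""
    manifest = json.loads((backup / "manifest.json").read_text())
    return all(
        hashlib.sha256((backup / name).read_bytes()).hexdigest() == digest
        for name, digest in manifest.items()
    )

# Demo with invented file names in a temporary directory.
root = pathlib.Path(tempfile.mkdtemp())
live, backup = root / "live", root / "backup"
live.mkdir()
(live / "patients.db").write_bytes(b"patient records ...")
snapshot(live, backup)
```

A real strategy would also keep such snapshots offline or immutable, precisely so that the ransomware cannot encrypt the backups along with the live systems.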

There should be a new form of donation. We know of blood and organ donations, but data donations for research projects would also be valuable. I assume that data donations can only be made with the consent of the patient. How is the revocation – i.e., the deletion – handled if a person revokes their former consent for whatever reason? Will research then come to a standstill?

The best example, which I also participate in, is the Corona data donation app from the Robert Koch Institute (RKI). There, as a user, you have to scroll through pages and pages of privacy notices before you can check a box and finally agree to the processing of your personal data. That’s not particularly user-friendly, but it is GDPR-compliant. I think I belong to the one per cent of users who actually read the privacy notices. An occupational hazard…

The app doesn’t even know my real name. Instead, it assigns me a 64-character, random, unique pseudonym. This allows me to assert my data subject rights. If I withdraw my consent and request that all my data be deleted, all data associated with this pseudonym (postal code, height, weight, gender, age and data about my vital signs from the smartwatch) will be deleted. The data will only flow into the research project in anonymised form. This means that the pseudonym is removed so that it is no longer possible to draw a clear conclusion about the person. As a result, my data can no longer be removed from the studies because it can no longer be traced unambiguously. Apart from that, it makes little sense if studies, once published, have to be continuously corrected because test persons have requested that their data be deleted. In addition, research institutions are generally not interested in an individual data set of a single person if their focus is on broad studies, but rather need a sample which is as large as possible. Over time, one or two participants drop out and their data must be excluded in the future, but new participants join. In this respect, I don’t think that research comes to a standstill when individuals revoke their consent under data protection law.
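The mechanics described here can be sketched in a few lines. The following toy model is not the actual app’s implementation; it only illustrates how a random 64-character pseudonym supports deletion requests while anonymised research exports remain untouched:

```python
import secrets

class DonationStore:
    """Toy model of pseudonymised data donation; not the real app's design."""

    def __init__(self):
        self.records = {}      # pseudonym -> list of donated readings
        self.anonymised = []   # research export with pseudonyms stripped

    def register(self) -> str:
        # 32 random bytes rendered as 64 hexadecimal characters,
        # matching the 64-character pseudonym described above.
        pseudonym = secrets.token_hex(32)
        self.records[pseudonym] = []
        return pseudonym

    def donate(self, pseudonym: str, reading: dict) -> None:
        self.records[pseudonym].append(reading)

    def export_for_research(self) -> None:
        # Anonymisation: the pseudonym is dropped, so a later deletion
        # request can no longer reach data that has entered a study.
        for readings in self.records.values():
            self.anonymised.extend(readings)

    def withdraw(self, pseudonym: str) -> None:
        # Right to erasure: everything still linked to the pseudonym goes.
        self.records.pop(pseudonym, None)

store = DonationStore()
me = store.register()
store.donate(me, {"resting_heart_rate": 64})
store.export_for_research()
store.withdraw(me)
```

After the withdrawal, the pseudonymised record is gone, while the already-anonymised export survives, which is exactly the behaviour described above.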

Can medical data be anonymised prior to a research project? Or, expressed differently, is a person so distinctive that they can still be identified even after the personal reference has been removed because of the unique nature of their medical history or their biometric data – especially if they suffer from a rare disease? From other areas it is known that 3 to 4 characteristics of non-personal data – such as the manufacturer of the smartphone, the storage capacity and the charging times – are sufficient to identify a person. Is anonymisation in the medical environment an illusion?

Yes, I think so. Unfortunately! Let’s take the Corona data donation app example again. There aren’t that many men my age living in my postal code area who also happen to have my height, my weight, and an elevated resting heart rate. Personally, though, I have no concerns about shenanigans with my data at the RKI. As I said, I have read the data privacy notice.

In theory, however, I think that we need to think about this question more from the perspective of the result. After all, it is admirable that we are able to process large amounts of data in a way that allows us to recognise patterns, derive hypotheses, falsify them and ultimately develop revolutionary forms of diagnosis and therapy. Just take the mRNA method, which brought us vaccines for a previously unknown virus in a historically unprecedented time frame during the Corona pandemic. I think the original research idea behind mRNA is even more exciting: the mRNA is supposed to “train” the immune system of a patient suffering from cancer to find and destroy the cancer cells. I find that kind of approach fascinating.

So now, if personal data is to be processed for research projects like this, there are a few principles in the GDPR that, if followed, can build a lot of trust with the subjects. First of all, there is once again the purpose limitation principle which has already been mentioned. As a research institution, I should think carefully about what the purpose of the processing is and inform the subjects as precisely as possible about what I am planning to do with their data throughout the data lifecycle. Next, consider the principles of data minimisation and data economy. If, for example, I do not need the entire human genome for a research project, but only a sequence, then I only process this sequence and discard the rest. Furthermore, I may process this data only as long as it is absolutely necessary. As a research institution, I must be clear that I really see this data donation as such and, in return, take all possible and reasonable measures to protect this “gift”. If I cannot guarantee the security of the data, I must either take further measures or inform the data subjects quite openly and honestly about the residual risks (transparency principle). It is then up to the individuals to decide whether to take the risk and donate their data or not.
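The data minimisation principle from the genome example can be stated almost literally in code. A deliberately simple sketch, with an invented stand-in sequence:

```python
def minimise(genome: str, start: int, end: int) -> str:
    """Keep only the sequence the study purpose actually covers."""
    return genome[start:end]

# Invented stand-in for raw sequencing output.
full_genome = "ACGTTTGACCGTAACGT"
needed = minimise(full_genome, 4, 10)
del full_genome  # data economy: the rest is never stored
```

Trivial as it looks, this is the whole principle: the pipeline is designed so that data outside the stated purpose never reaches permanent storage in the first place.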

There are more and more medical devices or medicines that a patient takes into the body by swallowing. Be it a camera that takes pictures or videos of the swallowing process and the digestive tract, or the blood pressure pill that reports if the blood pressure is not at the desired level. All of these devices and drugs communicate with computers or smartphones outside of the body. How are they protected against hacker attacks? Is it possible for a hacker to create malware that disrupts the human circulatory system? In other words, could a hacker get hold of a patient’s body?

To be honest, the increasing miniaturisation and the accompanying digitisation “into the body” still seems rather strange to me. Your examples of a camera and a blood pressure probe describe sensors, i.e., elements that supply information from their environment. There are already tablets with tiny built-in sensors that emit an electrical pulse as soon as they come into contact with stomach acid. The pulse is registered by a special patch and forwarded via NFC technology (near field communication) to the patient’s cell phone. From there, a message about the ingestion is sent to the attending physician. Then again, there are advanced ideas about nanoparticles of iron oxide (rust!) that attach themselves to tumour cells in a very targeted way. Once attached to the tumour cell, they generate heat when the human body is exposed to a magnetic field and thus specifically burn the cancer cell. So, anything is possible in theory.

Let’s leave miniaturisation aside for the moment and think of an insulin pump. In this case, it is quite conceivable that someone could manipulate the pump with the intention to kill, so that insulin is overdosed and the diabetic patient dies. Whether the criminal does this via a manipulated app or “only” by deliberately changing the electronics is completely irrelevant. We just have to be aware that as soon as there is the possibility of a cell phone establishing a connection to the pump, there is another attack vector. Incidentally, this new attack vector could also enable a criminal to act covertly. The victim might not even notice the attack, and subsequent prosecution would also be considerably more difficult. So, from a risk management perspective, this new attack vector is a game-changer, which must be secured. At the current state of research, I would therefore strongly advise using the cell phone only for displaying and transferring measured values, but not for initiating the application of substances or the activation of already applied ones. For the latter, I would only (!) rely on closed environments, which are under the total control of their respective manufacturers. Or would you want “Angry Birds” to take control of your blood sugar level?
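The design advice in the last paragraph, exposing only read access to the phone, can be sketched as an interface boundary. The class names and values below are invented for illustration; real pump firmware is of course far more involved:

```python
class InsulinPump:
    """Hypothetical pump model: dosing logic stays inside the closed device."""

    def __init__(self):
        self._glucose_mg_dl = 110
        self._units_delivered = 0.0

    def read_glucose(self) -> int:
        return self._glucose_mg_dl

    def _deliver(self, units: float) -> None:
        # Deliberately not exposed through the phone-facing interface below.
        self._units_delivered += units

class PhoneInterface:
    """The cell phone may display measured values, nothing more."""

    def __init__(self, pump: InsulinPump):
        self._pump = pump

    def current_glucose(self) -> int:
        return self._pump.read_glucose()

phone = PhoneInterface(InsulinPump())
reading = phone.current_glucose()
```

The phone-facing surface simply has no dosing method, so a compromised app on the phone cannot initiate the application of substances, which is precisely the attack vector the answer warns about.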

If we now add progressive miniaturisation to the equation, it’s hard to imagine what future developments will bring.

My opinion is:

“Some consultants thrive on spreading fear among the clueless.
I would rather spread clues among the fearless.”

Sebastian Welke

Drugs and medical devices require special approvals from authorities (e.g., the EMA) before they can be placed on the market. Do these authorities also check information security? Are special requirements for information security necessary for product approval in the medical field? If so, what are they?

This topic is becoming increasingly important globally, and the EU has also created the legal framework for this with the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR). The respective annexes to the regulations already contain basic requirements and principles for the cyber security of medical devices. In order to specify these, the MDCG (Medical Device Coordination Group), a group consisting of expert representatives of the member states, has published a guideline: the “Guidance on Cybersecurity for medical devices”. In essence, a risk assessment is applied over the entire life cycle of the product. This assessment evaluates which threats might exploit, alone or in combination, which possible weaknesses of the product. Here, the reasonably foreseeable possibilities of misuse and also the risk of unauthorised access are to be explicitly considered in order to counter these scenarios with effective measures. The risk assessment shall not end with the product release, but shall be carried out on an ongoing basis. If necessary, a software update must be issued to counter new threats and the associated risks. Software testing, including so-called penetration tests in which “good hackers” attempt to identify and exploit device vulnerabilities, is also of central importance here. These findings are valuable for the (further) development of products. New products that do not comply with these principles do not receive a CE mark and may not be distributed.

Mr Welke, thank you for sharing your reflections on information security in medicine.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognized experts, delving even deeper into this fascinating topic.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Sebastian Welke

Sebastian Welke studied business law and considers himself fortunate to have been blessed with a talent for technology. Throughout his professional life, Mr Welke has been involved in the management of large IT infrastructures, complex IT-supported business services as well as issues involving outsourcing, risk management and IT compliance. At handz.on, he is responsible for a team of experts specialising in information security, data protection and IT risk management.
