Big Data or the curse of efficiency

Dr Hajo Schumacher – Photo: Markus Hurek

Standards are a way to reduce risk, improve efficiency and optimise key operations. Companies trying to create stand-alone solutions and monopolies tend to rely increasingly on standards, unity, streamlining and compliance. Does Big Data help when it comes to assimilation, standardisation, harmonisation and egalitarianism – and thus to increasing efficiency?

Dr Caldarola chats with journalist Dr Hajo Schumacher, author of the book “Kein Netz – wie wir unser wirkliches Leben zurückgewinnen” (“No Network – How We Win Back Our Real Lives”), on the risks and opportunities of excessive standardisation in the digital age.

As a result of the Industrial Revolution, many processes were simplified and partially automated, while traditional crafts and individual, tailor-made work became much less significant. Products could thus be sold more cheaply yet still be of higher quality. We can now observe a similar development taking place with digitalisation. A variety of services are being standardised through interaction with an app, a questionnaire or a form – showing us that here, too, automation is progressing. Machines are doing the work of humans in these areas as well, while individual claims and personal consulting are becoming less important. Examples of this trend are LegalTech, FinTech, online orders, online money transfers and so on. What does standardisation mean for us as individuals? Security, certainty in planning, predictability? Simply stated, is there still a place for individuality, creativity, human interaction and custom-made products – and, finally, what about people themselves, or employees?

Dr Hajo Schumacher: Here we can see various types of logic coming together. On the one hand, we have the customer’s desire to be treated as an individual, together with a desire for custom-made products. That wish still exists – but it costs a lot of money, whether we are talking about furniture, travel, clothes or cars.

A completely different type of logic rules over low prices. In this regard, tech providers and customers as a group see fairly eye to eye: it has to be cheap and convenient. Standardisation makes for bargains, even when products have to be delivered, as we have seen with Amazon and Lieferando, to name two examples.

Let us consider the financial services provider who has been after me for months because s/he really wants to invest my money for me. The theory is a simple one: one piece of software invests the money in funds which have been doing well; another figures out what sort of investor I am. A personal meeting is only considered necessary if I as the customer am still hesitant. And then the human touch comes into play: the financial consultant’s ability to charm, manipulate and possibly persuade me. These qualities are costly, which means an algorithm tells the company how many automated attempts are worthwhile before resorting to actual human intervention.

In this way, even the human factor has become electronically quantified. Both sides can thus enjoy the advantages of digitalisation: the financial institution in question has a higher margin because personnel costs have been optimised, while I as the customer pay lower fees owing to robotic investments – and possibly face lower risks than if a bank employee had an eye on the commission.

But, unfortunately, there are some very different stories. We know, for example, that in the US waiting times on customer hotlines are being optimised by AI. This means that when I call a telecommunications company to complain about an incorrect bill, I will get a message that approximately 34 customers are ahead of me and I will have to wait at least half an hour.

Is it really like that? I have no idea, but the point is really figuring out at precisely what moment I will get tired of waiting and give up. At the same time, I will be told to have a look at the FAQ to see whether the solution to my problem can be found there. Of course, it can’t. Companies are really interested in hiring as few customer service personnel as possible because they cost a lot of money.

AI is in a position to calculate when I will give up and, at the same time, knows where my personal breaking point is: when do I get so fed up that I swear on the life of my grandmother that I will never ever turn to that company again? Every person has their own individual breaking point, but an algorithm can calculate when it becomes relevant for the firm in question. It is never about offering the best customer service, but about providing the worst service that customers will still accept. Really, the main issue is maximising profit. If you want to understand digitalisation, you actually only need to understand capitalism.

As part of many job application processes, application portals and pre-configured forms are used to submit one’s application. But now personality profiles of applicants are to be created with the help of AI, using short videos. If an applicant changes their outfit from a blouse and blazer to a t‑shirt, changes their hairstyle or takes their glasses off, to name but a few examples, then the evaluation of the candidate’s personality changes as well. Will individualists or especially intelligent people be overlooked? This would mean the very people needed for completely automated processes, digitalisation and so on might not be hired. Are these sorts of processes even compatible with our notion of human dignity?

Here is a counter-question: are job interviews ever compatible with human dignity? Yuval Noah Harari says our digitalised life is a life-long job interview. The fact that application parameters change with the clothes you wear merely demonstrates how badly software of this type is programmed and how pathetic the criteria of the potential employer are. All the biases you can think of concerning men wearing suits and women wearing pearls seem to have been incorporated into these standards. Have fun with that! Well, we can’t blame the technology for that, but rather the programmers, who all seem to suffer from the same bias problems, regardless of where they come from. This means the software merely reflects everything that has been going wrong at the HR management level. If we believe in a higher justice, then all the companies using this type of software will get exactly the sort of employees they deserve – which will hopefully become noticeable in the success of their business.

Why are employers relying on evaluations generated by machines, or rather software, and not on their own judgement? Is it a question of responsibility and, if so, are we sacrificing our freedom by giving up or avoiding our responsibility?

First of all, I would contradict the thesis that people have a big problem giving up their freedom. As long as we are talking about showy liberties – having two SUVs, going on three cruises, looking down on minorities without restraint – people fight for their supposed rights. But by the time we look at freedom of speech, at the latest, we see the limits of freedom: first, let’s cancel culture. As soon as responsibility comes into play, or we need to fight for our convictions, most people fall by the wayside. Which brings us back to the original subject. The supposedly objective nature of software is really practical because the personnel manager can simply give up all sense of responsibility and hand it over to the technology. Ideally, we even mystify the programme a little to make it ultra-objective and superior to people in every respect. Once this is done, I don’t have to answer for any of my decisions anymore.

In this regard, we are again experiencing the familiar polar opposites: cheap versus custom-made. As soon as we are dealing with costly expert staff, companies are willing to hire expensive personnel consultants. If, however, we are talking about mass jobs, which are easily interchangeable, standardised software will take over the whole selection process, including all the red tape, meal tickets and invitations to the Christmas party. And why? Because it’s cheap.

Do we need an average person for digitalisation to work, or for digital systems to work? Or will a selection process leading to a certain standard take place owing to the greater reliance on digital systems?

It probably works the other way round: digitalisation is creating the average person. AI might calculate that an airplane passenger is allowed to carry 2.43 kg of hand luggage, so that as much luggage as possible has to be transported at additional cost. People might then start weighing their bags with a jeweller’s scale.

Sometimes you can also observe reciprocal effects, for example with Instagram photos: people post motifs to receive as many likes as possible. The software in question then tells me that food, sunsets and cars go down especially well. And what do you find on Instagram? Plates full of food, beaches at sunset and people in front of cars. Is there anything less original than that? Or take activity apps: ever since some random person decided 10 000 steps was the goal we should all be aspiring to, people all over the world have been looking at their displays and feeling bad if they only managed 9876 steps in one day.

Technology creates norms which are based on human behaviour and which in turn determine human behaviour. It’s an interaction with only one goal in the end: controlling behaviour so that it can be monetised. You haven’t managed your 10 000 steps? Well then, try these amazing shoes which were recently worn for some world record. Or this particular protein powder. Or a more generous app.

The goal of digitalisation is to turn people into consuming machines who can order every bit of rubbish they want, 24/7, without leaving the comfort of their homes – with the appropriate loan to match. It can hardly be a coincidence that all the digital concerns are forcing their way into banking. Those are the closest treasures waiting to be appropriated.

If so much is being simplified or standardised, what will happen to social behaviour in this context?

Social behaviour is not a primary goal of the algorithms in question – at least not in Western society. The big tech companies are configured to make as much money as possible with as much automation as possible: Google and Facebook have appropriated the advertising markets for themselves, Amazon has taken over trade and, of course, we have Netflix, Spotify, Airbnb and other companies in other domains. If we continue with this commercial logic, social behaviour is thus reduced to sharing – which really means recommending – or to obtaining products for free. Look, Dr Caldarola has been watching this TV series; wouldn’t this show work for you as well? And you’ll get a 20% discount.

In China, however, social behaviour is being steered in a more insidious way by means of the Social Credit System. If you brake at the crosswalk, you’ll get some points added to your score, but if you speak against the political party, that will definitely mean points taken away. If you happen to be a Uighur, you’ll be in a point deficit your whole life, simply by virtue of being a Uighur. It’s particularly treacherous that if you meet someone who doesn’t have many points, you’ll have points taken away just for that. Merely talking to a Uighur could cost you your vacation. In this way, an entire society is being segregated – not in a geographic or social sense, but by being placed in digital ghettos. In fact, this system is highly effective, at least for those in power. For who decides what social behaviour is desirable if not the Communist Party?

The TV series “Black Mirror” illustrated these developments in a particularly clever fashion. But let’s go back to the topic of freedom: if a Social Credit System adapted to Western characteristics were offered in Europe, and the system promised to lower the crime rate by 50%, how many people would choose freedom and how many would opt for the Chinese version of security? In the end, the surveillance capitalism of Silicon Valley and Chinese surveillance socialism resemble one another to a disturbing degree.

Robots, algorithms and AI are supposed to replace people and to make production more effective and more efficient within the framework of Industry 4.0. They all use Big Data to find correlations, to develop algorithms, to train staff and to enable AI. Big Data can determine quantitative aspects – the fastest, the greatest, the most frequent and so on. But what is happening to qualitative aspects, such as Western values, human dignity, a person’s ambition, ethics, solidarity, privacy, defining your own goals, emotions, strategies? Or does the digital age no longer need such things because everyone is, or will be, the same?

The values of the Occident are not of much concern to software developers and their advocates. I recently read an interview with a game programmer, who wished to remain anonymous, and he was quite honest about what is important as far as digitalisation goes: revenues, of course.

Anyone who works in the games sector is well aware that computer games are addictive because the players are constantly being deluged with dopamine. Dopamine is pretty much as addictive as nicotine, but you can’t forbid it because your own body produces it.

It’s an amazing business model: dealing is legal, and the junkies are more or less social individuals – if you ignore the temper tantrums exhibited by children and adults as soon as you try to stop them from playing; in other words, a classic case of withdrawal symptoms.

Every second a customer spends on his or her computer translates into a veritable mine of information: which obstacles annoy players, where and with which figures do they like to spend time, how much money are they prepared to spend on special features, at which point do they stop playing? The goal of every digital product, whether it be Fortnite, TikTok, Netflix or Facebook, is to keep the customer hooked for as long as possible.

And you’re talking about dignity, morality, solidarity, privacy. Doesn’t that make you feel nostalgic for those old-fashioned values?

Innovation thrives on mistakes and on learning from them. Children fall down, get up again and keep on practising until they have figured it out. However, from the moment children start school, they are taught to make as few mistakes as possible. Failing is judged negatively, even though it is possible to learn something new, something better, from it. Innovation is the most significant, if not the only, tool to increase our GDP. Why do we forego this process? Or is standardisation the new innovation?

Standardisation combines market and planned economies. It does away with competition and only favours innovation which serves one’s own advancement.

Amazon has succeeded in doing what communism could never manage: namely, taking control of a market and all its processes while, at the same time, doing away with the mechanisms of a free market.

All the parameters in question have been factored in – from the needs of customers to the weather during transportation, from storage to the mood of the delivery person. A small-scale supplier has to be part of Amazon and be squeezed for a 30% commission. In return, Amazon knows all my trade secrets, costs and returns. If my product is in high demand, Amazon can underbid me at any time and curtail my online presence. The future is being determined by platforms – so-called proprietary markets – which have cancelled competition in the classic sense of the word. There is a lot of innovation taking place, but only if it serves the Amazon cartel.

Robert Musil wrote in his unfinished novel, The Man Without Qualities, “Completely fine is as if it were the ruin of all progress and pleasure.” Digitalisation is bringing us order, uniformity, standards, consistency… Will this be the ruin of humans, the loss of our human diversity – a process which, at the same time, is taking place quite differently if we look at the current decline of species in the plant and animal world?

At this point, I would like to say something in favour of standardisation. When the industrial world power Great Britain was still having “Made in Germany” printed on German products in order to discredit things coming from the potato state, it was precisely the standardised nature of German production that brought an end to British hegemony and favoured Germany’s rise to an industrial nation. For, all of a sudden, nuts, bolts and screws all fitted together because they had been standardised. If we are talking about mass production and precision, then having a certain norm is a blessing. How standardising human behaviour works is something we are trying out as we speak.

In your opinion, will “digital progress” continue to gain ground without any consideration for the concomitant losses, adhering to the motto: whatever can be done, will be done?

I think the real question here is: do countries still have the power to stand up to global players like Google?

Let’s assume Google wants to stop a politician who is lobbying hard for heavy taxation of digital companies. Can we exclude the possibility that this politician will find his name listed in connection with problematic search terms, that negative stories, in particular, appear at the top of the list, and that one day certain details from his life will come to light?

What would happen if someone put together a list of all of your search queries from the last twenty years, without any sort of comment, and published it? What if they included your movement profile, credit card bills, telephone conversations and chats – and let’s not leave out our press, which is very hungry for sensationalist news.

Who is so naive as to think that a commercial business would not manipulate its search algorithm if its image or stock market price were at risk?

Google is in a position to destroy every single person in the world, and every public figure knows it. Does a country have the power to fight that? Does the chancellor have the courage to go to war against digital war machines? Have fun with that thought. The battle will already be lost if we do not soon apply the Rockefeller solution: crush it. And do it thoroughly. Economic monopolies are bad enough, but information monopolies are much more dangerous.

Do algorithms, AI and robots ask critical questions, and are they taking over our capacity for thought and innovation? Can we depend on them? Are we saying: “Don’t worry, this is just a normal evolutionary process at work”? In other words, people on this planet have always been on the lookout for extraterrestrial beings. Are we creating this fantasy world for ourselves through digitalisation, avatars, digital twins, robots… and is that going to represent our new innovation and creativity?

If we have learned one thing in the past years, it is this: the harmless nutcases who were fittingly called evangelists, and who created random fantasies at the behest of data companies – eternal life, outer-space stations and all the other science fiction stuff we have always been happy to hear about ever since Jules Verne – all of it served one purpose. These stories were meant to provide ruthless and exceedingly unremarkable money machines with a meta-plane, a higher meaning, something pseudo-religious.

The fact is that the so-called Alphabet company, which supposedly promotes medicine, gene sequencing, modern transport and a variety of other trendy topics, is actually just an empty shell being fuelled from a single pipeline: the huge amount of cash earned by the search engine known as Google with its global monopoly.

It’s a similar situation with Facebook. These companies are not exactly gifts from the gods. Rather, they work at a very trivial level, just like Big Tobacco, Big Alcohol, Big Petrol, Big Sugar or Big Opioid: they make you addicted to their product, defend their monopoly by all available means, bribe research so it comes out in their favour, make money as long as politics allows it to happen and leave the ensuing damage to society – but, by all means, let’s donate a few museums or libraries so as to make the company sound nicer for the generations to come.

I do have one final thought concerning, as you put it, “our normal evolutionary process”. The internet is like climate change: we know what awaits us, and it won’t go away if we simply ignore it. We know all the mechanisms of addiction, the empty promises, the tricks of influencers, the helplessness of politics. When people prefer to stare at their displays instead of looking at other people’s faces, we have to face the fact that this development is quickly becoming reality. But is anyone really going to call this disturbing trend “progress” – and, more to the point, what are we progressing towards?

My opinion is:

A good life is still possible in digital times

Dr Hajo Schumacher

Dr Schumacher, thank you so much for sharing your opinion, your thoughts and your views on standardisation versus individuality.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Dr Hajo Schumacher

Journalist, father, author, podcaster, husband, inhabitant of Berlin, cyclist, reveller and moderator.
