Government plans to use your phone and online data to police your lifestyle and predict ‘threats’ to your health

“Artificial Intelligence in healthcare is currently geared towards improving patient outcomes, aligning the interests of various stakeholders, and reducing healthcare costs.” CB Insights.

[Image: market map of healthcare AI companies, 2016]

The Care.data scandal

Back in 2014, public concern grew when it emerged that drug and insurance companies were able to buy information about patients – including mental health conditions and diseases such as cancer, as well as smoking and drinking habits – from a newly created single English database of medical records.

Harvested from GP and hospital records, medical data covering the entire population was uploaded to a repository controlled by an arms-length NHS information centre. Never before had the entire medical history of the nation been digitised and stored in one place. Advocates said that sharing data would make medical advances easier and ultimately save lives, because it would allow researchers to investigate drug side effects or the performance of hospital surgical units by tracking the impact of interventions on patients.

However, data protection campaigners warned at the time that individuals were at risk of being identified and claimed notes were often inaccurate. The Department of Health was also criticised for failing to inform people they would be automatically opted into the scheme and would need to fill in a form if they wanted their medical records removed. 

All 26 million households in England were notified about the Care.data scheme, so that individuals could choose to opt out, but many didn’t know they had that choice. Those behind the £50 million data-sharing plan said it would “improve healthcare and help medical research.”

Many doctors were so incensed about the failure to protect patients’ data that they opted their entire surgeries out of the database, and the roll-out was paused in 2014. Privacy experts warned there would be no way for the public to work out who had their medical records or to what use their data would be put. The extracted information contained NHS numbers, dates of birth, postcodes, ethnicity and gender.

The controversial £7.5 million NHS database (Care.data) was eventually scrapped, very quietly, on the same day in July 2016 that the Chilcot Report was released.

Phil Booth of medConfidential – campaigning for medical data privacy – said: “The toxic brand may have ended, but government policy continues to be the widest sharing of every patient’s most private data.” 

“Personal data is now used not only to deliver but to deny services, so it’s more important than ever to check what’s on your records.”

He goes on to say: “Quite apart from the appalling mistreatment of generations of people, the Windrush scandal highlighted two deep problems about government’s handling of personal data. It confirms the government’s default position is one of disbelief – “guilty until proven innocent”, for some groups at least.

“And it also confirms that – despite years of experience of the consequences – the government remains utterly cavalier in its stewardship of your data.

“From the Home Office hunting people down through their NHS data and their children’s school records, to Google DeepMind’s secret deal intending to feed 1.6 million Royal Free Hospital patient records to its Artificial Intelligence project, to Job Centre bosses interfering in medical records, and the Department for Education packaging up students’ personal data for private exploitation – as many have learned, “the power of data” is not always benign. Whether destroying the Windrush generation’s vital records or losing 25 million people’s records in the post, the consequences of poor information handling practices by Departments of the database state are always damaging to citizens.”

He’s right, of course. The principle is one of private profit while the public carries the risk, every time.

There is widespread concern over insurance and marketing companies getting access to our personal health data. The bottom line is that patients must have clear information about what happens to their data, how it may be used, and must be given a clearly stated opportunity to opt out.

The ‘business friendly’ government: déjà vu and AI

Theresa May has again pledged millions of pounds to use Artificial Intelligence (AI) to “improve early diagnosis of cancer and chronic disease.” In a speech delivered earlier this year, May also called on industry and charities to “join the NHS in creating algorithms that can predict a patient’s care requirements based on their medical records and lifestyle information.”

The government believes that early intervention would provide “less invasive, more affordable and more successful care than late intervention,” which they claim “often fails.”  

While the government has assumed that the unmatched size of the NHS’s collection of data makes it ideal for implementing AI, many are concerned about data privacy.

Importantly, May’s proposal would once again allow commercial firms to access NHS data for profit.

In April 2018, a £1 billion AI sector deal between the UK government and industry was announced, including £300 million towards AI research. AI is lauded as having the potential to help address important health challenges, such as meeting the care needs of an ageing population.

Major technology companies – including Google, Microsoft, and IBM – are investing in the development of AI for healthcare and research. The number of AI start-up companies has also been steadily increasing. There are several UK based companies, some of which have been set up in collaboration with UK universities and hospitals.

Partnerships have been formed between NHS providers and AI developers such as IBM, DeepMind, Babylon Health, and Ultromics. Such partnerships have attracted controversy, and wider concerns about AI have been the focus of several inquiries and initiatives within industry and the medical and policy communities.

Last year, Sir John Bell, a professor of medicine at Oxford University, led a government-commissioned review. He said that NHS patient records are uniquely suited to driving the development of powerful algorithms that could “transform healthcare” and seed an “entirely new industry” in profitable AI-based diagnostics.

Bell describes the recent controversy surrounding the Royal Free hospital in London granting Google DeepMind access to 1.6m patient records as the “canary in the coalmine”. “I heard that story and thought ‘Hang on a minute, who’s going to profit from that?’” he said.

Bell gave the hypothetical example of using anonymised chest radiograph data to develop an algorithm that eliminated the need for chest x-rays from the ‘analytical pathway’.

“That’s worth a fortune,” he said. “All the value is in the data and the data is owned by the UK taxpayer. There has to be really serious thought about protecting those interests as we go forward.”
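To make Bell’s hypothetical a little more concrete, here is a minimal sketch of the train-and-validate pattern such a diagnostic algorithm would follow. It is purely illustrative: the data is randomly generated stand-in data, not NHS data, and a real diagnostic system would use far more sophisticated models.

```python
# Purely illustrative: a toy classifier of the general kind Bell describes,
# trained on synthetic stand-in arrays rather than real radiographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for anonymised radiographs: 500 fake 32x32 greyscale images,
# flattened to feature vectors, with an invented binary "abnormality" label.
images = rng.random((500, 32 * 32))
labels = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0)

# A simple linear classifier; real diagnostic work typically uses deep
# convolutional networks, but the train-and-validate pattern is the same.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The commercial point is visible even in the toy version: the code is trivial, and all of the value lies in the training data.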

However, Bell highlighted a “very urgent” need to review how private companies are given access to NHS data and the ownership of algorithms developed using these records.

Matt Hancock, the recently appointed health secretary, is now planning a “radical” and highly invasive system of “predictive prevention”, in which algorithms will use detailed data on citizens to send targeted “healthy living messages” to those flagged as having “propensities to health problems”, such as taking up smoking or becoming obese. 

Despite promises to safeguard data, the plans have once again attracted privacy concerns among doctors and campaigners, who say that the project risks backfiring by scaring people or damaging public trust in the NHS’s handling of sensitive information. Under the radical proposals, people’s medical records will be combined with social and smartphone data to predict who will pick up bad habits, so that they can be stopped from getting ill. Of course, this betrays a fundamental assumption of the government: that illness arises from bad “lifestyle choices.”

Hancock said: “So far through history public health has essentially dealt with populations as a whole.

“The anti-smoking campaign on TV is targeted at everybody. But using data, both medical data — appropriately safeguarded, of course, for privacy reasons — and using other demographic data, you can work out that somebody might have a higher propensity to smoke and then you can target interventions much more closely.”
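For illustration, this is roughly what such “propensity” scoring looks like in outline. It is a sketch only, using invented synthetic data, arbitrary features and an arbitrary threshold, not any actual government system:

```python
# Illustrative sketch only: "propensity" targeting in outline, with invented
# synthetic data, arbitrary features and an arbitrary flagging threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented demographic features: age, deprivation index, GP visits last year.
X = np.column_stack([
    rng.integers(18, 90, size=1000),   # age
    rng.random(1000),                  # deprivation index, 0-1
    rng.integers(0, 20, size=1000),    # GP visits in the last year
])
y = rng.integers(0, 2, size=1000)      # synthetic label: 1 = took up smoking

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score the cohort and flag anyone above the chosen propensity threshold
# for a targeted "healthy living message".
scores = model.predict_proba(X)[:, 1]
flagged = np.flatnonzero(scores > 0.6)
print(f"{flagged.size} of {len(X)} people flagged for targeted messaging")
```

Everything turns on the threshold and the training data: change either and a different set of people gets flagged, which is precisely why the opacity of such systems worries campaigners.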

However, the historical evidence of the government “safeguarding” our data effectively isn’t particularly confidence-inspiring.

Public Health England is already looking at using demographic and smartphone health data to personalise messages on healthy living and plans to launch pilot projects next year.

Initially the data will be limited to broad categories, such as age or postcode. However, ultimately including detailed information on individuals’ housing, employment and income, or their internet use, has not been ruled out.

Hancock said: “We are now exploring digital services that will use information people choose to share, based on consent with only the highest standards on data privacy, to offer them precise and targeted health advice.”

Advice from whom? Unum and other private insurance companies? Businesses selling lifestyle products? The pharma industry? Many campaigners are very concerned that the use of their data may lead to them being discriminated against by insurers or in the workplace.

Concerns

Another concern is how intrusive surveillance and data analytics are, and how they may dehumanise patients. For example, one NHS suicide prevention app currently in development will monitor emails, texts and social media for signs that “people might be about to kill themselves.”

“Technology now allows us to offer people predictive prevention; tailored, intelligent advice on how to live longer, healthier lives,” Hancock said. 

“This used to happen within the brains of the GPs in the partnership when they really knew the community and had personal relationships with everyone in the community. As GPs’ practices have come under more pressure, that’s become harder and we can use data really effectively to target people who have propensities to health problems.”

Another danger is the ongoing demedicalisation of illness and the deprofessionalisation of trained doctors. Healthcare professionals may feel that their autonomy and authority is threatened if their expertise is challenged by AI. The ethical obligations of healthcare professionals towards individual patients might be affected by the use of AI decision support systems, given these might be guided by other priorities or interests, such as political ideology regarding cost efficiency or wider public health concerns.

AI systems could also have a negative impact on individual autonomy, for example if they restrict choices based on calculations about risk, or about what is in the best interests of the user.

If AI systems are used to make a diagnosis or devise a treatment plan, but the healthcare professional is unable to explain how these were arrived at, this could also be seen as restricting the patient’s right to make free, informed decisions about their health. Applications that aim to imitate a human professional raise the possibility that the user will be unable to judge whether they are communicating with a real person or with technology.  

Although AI applications have the broad potential to reduce human bias and error, they can also reflect and reinforce biases in the data used to train them. Concerns have been raised about the potential of AI to lead to discrimination in ways that may be hidden or which may not align with legally protected characteristics, such as gender, ethnicity, disability, and age. 

The House of Lords Select Committee on AI has cautioned that datasets used to train AI systems are often poorly representative of the wider population and, as a result, could make unfair decisions that reflect wider prejudices in society. The Committee also found that biases can be embedded in the algorithms themselves, reflecting the beliefs and prejudices of AI developers.
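The Committee’s point about unrepresentative training data can be demonstrated in a few lines. The following sketch uses entirely synthetic data and an invented scenario: one group makes up only 5% of the training set, and the model’s accuracy for that group suffers accordingly.

```python
# Illustrative sketch of training-data bias: a model fitted to data that
# under-represents one group performs worse for that group. All data is
# synthetic and the scenario is invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, flip):
    """One feature predicts the label, with the relationship inverted for group B."""
    x = rng.normal(size=(n, 1))
    y = ((x[:, 0] > 0) != flip).astype(int)
    return x, y

# Training set: 950 people from group A, only 50 from group B.
xa, ya = make_group(950, flip=False)
xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate separately on fresh samples from each group.
for name, flip in [("group A", False), ("group B", True)]:
    x, y = make_group(1000, flip)
    print(f"{name} accuracy: {model.score(x, y):.2f}")
# The under-represented group gets systematically worse predictions.
```

The model is not malicious; it simply learns the majority pattern, and the minority group pays for it.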

Sam Smith, of the privacy group medConfidential, warned that a “ham-fisted” plan might backfire, given the government’s poor record on data-handling. He said: “Predictive intervention has to be done carefully and in the right context and with great empathy and care, as it’s easy to just look creepy and end up with a ‘Mark Zuckerberg problem’ [where a focus on the power of data leads to a neglect of the human problems it is trying to solve].”

Humans have attributes that AI systems might not be able to authentically possess, such as compassion and empathy. Clinical practice often involves complex judgments and abilities that AI currently cannot replicate, such as contextual knowledge (for example, of existing comorbidities) and the ability to read social cues. There is also debate about whether some human knowledge is tacit and cannot be taught.

AI could also be used for malicious purposes. For example, there are fears that AI could be used for covert surveillance or screening by private companies and others. AI technologies that analyse motor behaviour (such as the way someone types on a keyboard) and mobility patterns detected by tracking smartphones could reveal information about a person’s health without their knowledge.
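As a rough illustration of how little raw material such inference needs, the sketch below extracts the kind of keystroke-timing features that research systems analyse. The timestamps are invented, and the actual health-inference model is deliberately left out:

```python
# Illustrative sketch only: keystroke-timing feature extraction of the kind
# used in motor-behaviour research. Timestamps are invented; no real
# inference model is included.
import statistics

# Hypothetical key-press timestamps (milliseconds) for one typing session.
key_press_times_ms = [0, 110, 245, 330, 520, 610, 790, 905, 1100]

# Inter-key intervals: the raw signal studied in keystroke-dynamics work.
intervals = [b - a for a, b in zip(key_press_times_ms, key_press_times_ms[1:])]

features = {
    "mean_interval_ms": statistics.mean(intervals),
    "interval_variability_ms": statistics.stdev(intervals),
}
print(features)
# A real system would feed such features to a trained model. The point is
# that they can be harvested from an ordinary device without the user ever
# knowing that health-relevant data is being collected.
```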

Today it’s reported that NHS Digital is set to ignore the IT security recommendations of its own chief information officer, Will Smart, citing an estimated cost of between £800 million and £1 billion. It claims that the investment would not be “value for money”.

The recommendations were the result of a review, published in February, that was commissioned by government in response to the WannaCry ransomware attack, which affected one-fifth of all NHS trusts in the UK. The NHS was especially hard hit, not least due to a lack of up-to-date patching on Windows 7 workstations across the monolithic organisation, one of the biggest employers in the world.

The recommendations in Smart’s review had been endorsed by the National Cyber Security Centre (NCSC).

However, documents acquired under the Freedom of Information Act by the Health Service Journal (HSJ) indicate that NHS Digital has opposed adoption of the recommendations on the grounds that they would not “be value for money”.

NHS Digital’s response comes despite the organisation facing sustained and continual cyber attacks, including an attack campaign known as Orangeworm that specifically targets sensitive healthcare data. HSJ adds that malicious phishing websites mimicking NHS trusts have also been found, while one NHS organisation was found to have exposed a sensitive database online.

A scan by NHS Digital, it adds, found 227 medical devices connected to the internet with a known vulnerability. And four out of five NHS trusts failed to even respond to a ‘high severity’ cyber alert issued in April.

The review of NHS IT security by CIO Will Smart came four months after a damning report into the state of NHS IT security produced by the National Audit Office, which indicated that the NHS and the Department of Health didn’t know how to respond to the outbreak.

With such a cavalier approach to basic IT security, it’s difficult to imagine how we can possibly trust the ‘business-friendly’ government and NHS with the stewardship of our personal health data. Personal data has become the currency by which society does business, but advances in technology should not mean organisations racing ahead of people’s basic rights. Individuals should be the ones in control and government and private organisations alike must do better to demonstrate their accountability to the public.

The use of AI in surgery

[Image: the Da Vinci surgical robot]

An inquest recently heard how a patient who underwent ‘pioneering’ robotic heart valve surgery at the Freeman Hospital in Newcastle died days later, after the procedure went horribly wrong.

Stephen Pettitt died of multiple organ failure, despite having an estimated 98-99% chance of surviving the relatively low-risk surgical procedure.

Heart surgeon Sukumaran Nair had been offered training on the use of the robot with the hospital’s gynaecology department – but he refused.

He later told a colleague that he could have done more “dry-run” training beforehand, the hearing was told.

The operation was planned to repair a mitral valve, but the robot caused damage to the interatrial septum. The procedure had to be converted to an open heart operation, in which the chest was opened up to repair the tear.

Pathologist Nigel Cooper said: “By that time the operation had been going on for a considerable period of time. By the end of the surgery the heart was functioning very poorly.”

Medicines and a machine to help the heart function were brought in but Pettitt’s organs began to shut down and he didn’t recover.

The robot was so loud in use that the surgical team had to shout at one another, and the machine also knocked a theatre nurse and destroyed the patient’s stitches, Newcastle Coroner’s Court heard.

A leading heart surgeon, Professor David Anderson, told Newcastle Coroner’s Court that the operation conducted by the under-trained Sukumaran Nair, using the Da Vinci surgical robot, would likely not have ended that way had the robot not been used.

Anderson, a consultant cardiac surgeon at Guy’s and St Thomas’s Hospital, London, told the Newcastle hearing that Pettitt’s euroSCORE – the risk factor applied to heart surgery patients – was just 1-2% in normal circumstances.
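For context, EuroSCORE (in its logistic version) is essentially a logistic regression: weighted risk factors are summed and passed through a logistic function to give a predicted operative mortality. The sketch below shows the shape of that calculation with placeholder coefficients, not the published values:

```python
# Illustrative sketch of a EuroSCORE-style risk calculation. The intercept
# and weights below are placeholders chosen for illustration, NOT the
# published EuroSCORE coefficients.
import math

def predicted_mortality(intercept, weighted_factors):
    """Logistic model: risk = e^y / (1 + e^y), where y = intercept + sum of weights."""
    y = intercept + sum(weighted_factors)
    return math.exp(y) / (1 + math.exp(y))

# A low-risk patient with few weighted risk factors (invented numbers).
risk = predicted_mortality(intercept=-4.8, weighted_factors=[0.3, 0.2])
print(f"predicted operative mortality: {risk:.1%}")  # roughly 1-2%, as quoted
```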

Such was the concern at the completely botched six-hour procedure performed on Stephen Pettitt that Northumbria Police have launched a criminal inquiry.

Nair was fired from his job at Newcastle’s Freeman Hospital, and its robotic heart programme was ended. Newcastle coroner Karen Dilks recorded a narrative verdict on the retired music teacher’s death.

According to the US company behind the botched procedure, Intuitive Surgical Inc, “The surgeon is 100% in control of the da Vinci System at all times.”

However, there can be “serious complications and death” in any surgery, according to the business.

Risks during surgery include inadvertent cuts, tears, punctures, burns or injury to organs.

Don’t those stated risks negate the justification for using robotics to perform surgery – increased, not decreased, precision?

The company adds: “It is advised the surgeon switch from minimally invasive surgery to open surgery (through a large incision) or hand-assisted surgery if problems occur.”

I don’t make any money from my work. I’m disabled through illness and on a very low income. But you can make a donation to help me continue to research and write free, informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.

