[this piece covers the state of play as of Sunday 8th May. It may be updated or replaced as new facts emerge]
If you are unwell: seek medical attention. These issues should not prevent you getting the care you need. The discussion below relates only to one Trust, the Royal Free in London, and covers all patient hospital events since sometime in 2010.
Last summer, following medConfidential’s work on care.data, Dame Fiona Caldicott was asked to review consent in the NHS. That report, which provides recommendations, has still not been published. Patients should be able to know every way data about them has been used, as a condition of using that data – contracts shouldn’t allow secrets from patients.
Following a New Scientist article, there has been a lot of press discussion about Google DeepMind receiving over 5 years of detailed medical data from the Royal Free NHS Trust in London. This project is steeped in secrecy, hiding details from patients and the public.
The concerns have not been about the patients whose information would be displayed in this app. They are solely about the data of patients whose data could never be displayed in the app, as they have never had any of the blood tests (etc.) it displays. That is 5 in every 6 patients. For the other 1 in 6, there is a potential benefit.
When we were first approached, our initial question was “what are they doing with this?” – details were hidden and emerged only through press investigations.
It looked like what DeepMind were doing should have been a research project – but it had not followed any ethics or research processes. It was using a dataset from the “Secondary Uses Service” – which strongly suggested this was a secondary use.
Data can be used for direct care – the care given to you by a doctor or other clinician. It is also used for other purposes, called “secondary uses”. These include purposes such as research, and the design of models for calling people in for screening (including for detection of kidney problems).
The New Scientist article was published last Friday, and the question remained unanswered until Wednesday. In an appearance on Radio 4, it emerged that the reason they had followed none of the research processes was simple: it wasn’t research. It was claimed to be for direct care. The Professor speaking went on to detail the limits that clinical rules and ethics put on who can access data for direct care.
As a result, on Wednesday afternoon, the question changed: who is the direct care (i.e. clinical) relationship between?
DeepMind have made a case that they will look after the data – we have no reason to question that separate point. This is not about losing data; it’s about whether they should have had most of it in the first place. What data should they have, and how should they have got it?
To answer that question, it has to be clear what they are doing. It is not.
More generally, to have confidence, patients should know how data about them has been used. What is DeepMind hiding in this case, and why? Will they give a full accounting of how they’ve used patient data, what for, and what happened in direct care as a result?
Every data flow in the NHS should be consensual, safe, and transparent.
Why does Google think what it does with the medical history of patients can be secretive, invasive, and possibly harmful?
Throughout most of medConfidential’s work, we are able to say “opting out will not affect the care you receive”, because a large amount of work has been done by all sides to make sure it does not. If you opt out of “secondary uses” of your data released by HSCIC, your care is not affected compared to someone who did not opt out. Due to the lack of process, and the corners cut by Google DeepMind in avoiding all the relevant processes, that may not necessarily be true here. We hope the Trust will clarify what their opt out does. If you didn’t want your data handed to Google for speculative purposes, what happens if you get injured and show up at the Royal Free’s A&E? How is your care affected? Did they cut that corner too?
Patients should not be punished for DeepMind’s cut corners.
Scalpels Save Lives
Our friends in the research world promote the message that #datasaveslives, and it does – just like scalpels do.
To be completely clear, DeepMind have said that their project is “not research”. That’s why they didn’t follow any research processes. There are 1,500 projects which followed the proper processes and appear on the “approved data releases” register – the DeepMind project is not one of them.
Data, and good data hygiene, are as much a requirement of a modern hospital as sterile scalpels. Following the right processes to provide sterile instruments is not seen as an “unnecessary burden”, even if accountants may wish to cut costs due to the expense. Scalpels have to be sterile for a very good reason.
Similarly, the processes put in place to protect data are of roughly the same importance as adequate cleaning. They may seem an unnecessary burden to some – until too little cleaning causes problems that clearly demonstrate the necessity of what was previously decried as too much. Those who cut corners are rarely the ones who suffer from the decision. There is a fundamental difference between causation and correlation…
DeepMind seems to be a powerful new tool.
Were it an instrument to be used in surgery, it would not be enough for it to be powerful and new; it must also be safe. Otherwise the harm can be significant.
Rather than clean and safe, it seems DeepMind is covered in toxic waste.
It’s not that DeepMind couldn’t go through the processes to ensure safety. We don’t know why they didn’t.
DeepMind might be a better instrument, or it might be the new nightmare drug. Technology tools aren’t a panacea. Have lessons been learnt from the “epic failure” of Google Flu Trends?
Research, testing, and regulatory oversight are designed to prove that changes are safe, and to correct any unintended harms to patients as the process proceeds.
How much of that happened in this case?
If Google DeepMind publish attributable and citable comments in response to these questions, we’ll link to them.