Extraordinarily, the New Scientist has quoted Google as having used an unregulated algorithm in the direct care of patients[1].
This follows previous news that Google Deepmind had acquired millions of detailed patient histories for unclear purposes[2]. Google Deepmind’s response was to stress that they were keeping the data securely[3], while ignoring questions over what they were doing with it, and whether they should have had it in the first place[4].
MedConfidential has long argued that every patient should be able to know how data about them has been used. If there had been a Ministerial commitment to do so, this mess of unanswered questions would not have happened[5].
As announced yesterday, it is Government policy to “encourage and support data-driven techniques in policy and service delivery”. Innovation is welcome and vital, but it should be grounded in medical ethics and a clinical relationship, not ride roughshod over the processes in place to protect all involved[6].
Responding to the latest information, MedConfidential coordinator Phil Booth said:
“Deepmind has spent a fortnight hiding behind the NHS. It’s now clear that this was an unregulated ‘development’ project for Deepmind, but a patient care project for the NHS.
“These algorithms evolve: errors get fixed, improvements get made. What approvals did Deepmind have from the medical regulators at the early stages? As the provider of a tool used in direct care, they are responsible for ensuring it meets all safety standards.
“Training doctors to make safe decisions takes years, and requires many exams to be passed. Have Google shown that each version used in direct care met all relevant grades, standards, and regulations?”
-ends-
For immediate or future interview, please email coordinator@medconfidential.org
Notes to editors:
- See https://www.newscientist.com/article/2088056-exclusive-googles-nhs-deal-does-not-have-regulatory-approval/ “We [Deepmind] and our partners at the Royal Free are in touch with MHRA regarding our development work.”
- See https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data/ and http://techcrunch.com/2016/05/04/concerns-raised-over-broad-scope-of-deepmind-nhs-health-data-sharing-deal/
- Google’s self-defence https://www.theguardian.com/technology/2016/may/06/deepmind-best-privacy-infrastructure-handling-nhs-data-says-co-founder refers to their self-reported scores in the IG Toolkit https://www.igt.hscic.gov.uk/AssessmentReportCriteria.aspx?tk=424999242358961&lnv=3&cb=e8c1aaf1-c40d-45af-9bb9-adc46c712924&sViewOrgId=49979&sDesc=8JE14 . Those scores have not yet been audited by the HSCIC.
- The question of why Google Deepmind had the histories of people who never had a blood test at the relevant hospital, and who may never return to the hospital, remains unanswered.
- Much like a bank statement, every patient should be able to see a data usage report, which tells them where data about them has been used, and why, and what the benefits of that usage were. A commitment to investigate implementation was made in late 2014, but remains delayed by the Caldicott Review of Consent. For more, see https://medconfidential.org/2014/what-is-a-data-usage-report/
- MHRA rules require medical devices to have appropriate pre-approved procedures in place to confirm they are working as expected, and to ensure that mitigations for any conceivable failures have been considered in advance. The New Scientist article confirms that Deepmind does not hold those approvals as the algorithms in its software develop.