In late 2015, Google DeepMind and the Royal Free Hospital in London signed a deal to secretly copy 1.6 million medical records and allow them to be fed to a DeepMind AI.
In April 2016, New Scientist revealed this deal and – two years later, at the time of writing – it is still in the death throes of collapse, as the lawyers cannot agree on who gets the blame. What Google DeepMind told the Research Ethics Committee that approved the project at the outset is now demonstrably untrue.
The following is medConfidential’s public repository of links and documents, sequenced chronologically and around the issues that most make sense in retrospect. Our website’s DeepMindRFH tag has all the contemporaneous posts we made.
N.B. Documents are included in the order in which they first became public, so any media coverage before that date will likely not have included any reference to them.
In order of substantive events:
Initial news in April/May 2016:
- New Scientist – piece 1 by Hal Hodson reveals Google’s AI division has access to a huge haul of NHS patients’ data;
- DeepMind puts up an academic on Radio 4 to defend it;
- New Scientist – piece 2 asks whether DeepMind had the ethical approvals it required… (No, it didn’t);
- medConfidential writes a complaint to the ICO and NDG – including a timeline of events up to that point. (We subsequently update the timeline, when more information becomes available.)
- The original UK ethics approval request by DeepMind
- November 2016: Google DeepMind / Royal Free rewrite and re-announce their deal; the data being processed does not change.
- In March 2017, Google DeepMind actively tries to undermine academic inquiry;
- In May 2017, Sky News publishes a letter from the NDG telling Google DeepMind / RFH the deal was on an ‘inappropriate legal basis’. DeepMind / RFH make questionable public statements in response;
- On 3 July 2017, the ICO confirms the project was unlawful and breached the Data Protection Act – several formal Undertakings are required;
- On 5 July 2017, Google DeepMind’s ‘Independent Review Board’ publishes its first report, and DeepMind publishes its response – neither of which acknowledges DeepMind’s ongoing unlawful activities;
- DeepMind’s original plans are published by TechCrunch in August, which confirm its intention to feed patients’ data to an AI for ‘research’;
- medConfidential asks the international Partnership on AI (of which DeepMind is a member) some questions – there was no reply;
- medConfidential submits evidence to the House of Lords AI Select Committee in September, with a December supplementary (note paragraph 68).
- After reflection, the entire process was unnecessary (it was).
- June 2018: DeepMind excluded relevant questions from their legal review, but their lawyers did say: “12.1.4 Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly ground-breaking.”
- July 2018: There are other innovations at the Royal Free (final example).
- November 2018: Streams moves into Google, breaking previous promises. 🤦♀️
- December 2018: More details emerge that DeepMind’s public statements may not have been entirely true.
- January 2019: How did the incentives go badly wrong?
- July 2019: DeepMind publishes details of its jurisdiction hopping to evade scrutiny. For example, across two sets of papers published by DeepMind on AKI/Streams: in an ‘NHS work’ paper espousing Streams, the contact author (listed with an @nhs.net email address) discloses they are paid by DeepMind; in the other paper on the AKI work (which they told the HRA they could do on the RFH data), the same person is a co-author with no NHS affiliation stated.
- August 2019: NDG final comment: the clinicians who justified this were thinking of themselves, not their patients. The ICO and NDG view of the Linklaters fig leaf (dated 2018, published 2019).
medConfidential’s specific interest in “AI” began with a data scandal involving Google DeepMind, but has since picked up many related issues. For companies seeking to make money from data about people – and public bodies that either have a statutory duty, or simply want to provide services to people – to a first approximation, the only data that matters is health data.
medConfidential’s primary concern is not ‘AI safety’ or ‘ethics’, though these are obviously both relevant; we look at process – which, for public bodies more generally, involves the principles underpinning the Rule of Law.
Current medConfidential pieces on AI:
- To support clinical decisions, 3 different models must be used, trained on different datasets – August 2017. (DeepMind accepts the principle but wants a monopoly.)
- Government will use AI however it wants, unless it is prevented – February 2018.
- AI and the Rule of Law – presented to the APPG on the Rule of Law, April 2018.
- Video: many of these issues were covered in our presentation at the FIPR 20th anniversary conference on 29th May.