Health data, AI, and Google DeepMind

In late 2015, Google DeepMind and the Royal Free Hospital in London signed a deal to secretly copy 1.6 million medical records and allow them to be fed to a DeepMind AI.

In April 2016, New Scientist revealed this deal and – two years later, at the time of writing – it is still in the death throes of collapse, as the lawyers cannot agree on who should take the blame. What Google DeepMind told the Research Ethics Committee that approved the project at the outset is now demonstrably untrue:

The following is medConfidential’s public repository of links and documents, sequenced chronologically and grouped around the issues that make most sense in retrospect. Our website’s DeepMindRFH tag has all the contemporaneous posts we made.

N.B. Documents are listed in the order in which they first became public, so any media coverage published before that date will likely not have referenced them.

In order of substantive events:

Initial news in April/May 2016:


Wider interests

medConfidential’s specific interest in “AI” began with a data scandal involving Google DeepMind, but has since come to cover many related issues. For companies seeking to make money from data about people – and for public bodies that either have a statutory duty or simply wish to provide services to people – to a first approximation, the only data that matters is health data.

medConfidential’s primary concern is not ‘AI safety’ or ‘ethics’, though both are obviously relevant; we look at process – which, for public bodies more generally, involves the principles underpinning the Rule of Law.

Current medConfidential pieces on AI: