We read many academic papers about data projects. It is rare that they result in anything at all, let alone in anonymous briefings against academic inquiry.
We were therefore intrigued by two points in this Wired article, written with access to Google DeepMind executives:
- It reuses a quote from medConfidential that is nine months old, as if nothing has changed in the intervening nine months. If that were true, why did Wired write about it again?
- The quote from the Google DeepMind executive suggests that the academic paper to which the article refers contains errors.
If, as DeepMind says, “It makes a series of significant factual and analytical errors”, we look forward to DeepMind publishing evidence of any such errors, as a scientifically rigorous organisation would, rather than hiding behind anonymous briefings from its press office and a hospital. Google claims “we’re completely at the mercy and direction” of the Royal Free, but from the last two paragraphs of the same article, that is obviously not completely true…
medConfidential has confidence in the process of scientific inquiry – and we are aware that DeepMind do too, given their own authorship of academic articles about their work.
While it is highly unusual, it is not a factual or analytical error to write an academic paper that is readable by all.
We expect that DeepMind was aware of the substance of the paper prior to publication, yet said nothing about any of these alleged problems at the time. This behaviour is entirely consistent with DeepMind’s duplicity regarding our timeline of public facts about their original deal – they claim errors in public, but will say nothing about them when asked.
Colleagues at the Wellcome Trust are right – mistakes were made.
This is how AI will go wrong: good people with good intentions making a mistake, and being institutionally incapable of admitting to that most human of characteristics, imperfection.