Everyone’s experience in AI decision-making

Institutions that include everyone understand that great benefit comes from seeing complex issues in many different ways.

The most life-changing, rapid, and one-off decisions people must make are those to do with their health, and the health of their loved ones. Here too, the benefits of diversity are well understood. In medicine, there is a culture of “second opinions” – you can always ask another doctor for their opinion on a choice. This is acknowledged as a great strength of the medical community; indeed, the seeking of diverse (even possibly contradictory) opinions is actively supported by professionals realistic and humble enough to accept that there may not be one single right answer.

So why, as technology progresses, should we choose a lower standard for AIs offering diagnostic assistance to doctors?

Necessary variation in clinical Artificial Intelligence ‘opinion’ will arise only from open competition amongst providers, all respecting the consensual, safe, and transparent use of patients’ data, underpinned by medical ethics.

When you are ill and have a care team today, the clinicians deciding your treatment draw not on a single view, but on a comprehensive assessment that considers diverse perspectives.

The same should apply when AIs join a care team: one AI’s analysis may spot something another has assessed as less significant, and it should take only one finding to prompt a fresh consideration. And if the urgent demand for more doctors goes unmet, appropriately diverse, ‘always on’ clinical AI assistance tools could help recast the mix of experience required. (Or perhaps, in a future AI world, patients will be sick of experts…)

Diversity in the medical AI ecosystem will result from the choice of different modelling approaches and the use of different training data; the variation in outcomes (i.e. advice) will come about for much the same reasons as today: differing opinions arising from different choices made by different ‘cultures’. No training dataset that systematically excludes any community should be acceptable, but different datasets in different models will result in different suggestions – reflecting the humanity of everyone.

In every hospital that meets modern standards for interoperability (FHIR, or the NHS goal of being paperless by 2020), consulting multiple clinical support systems should be as straightforward as consulting any single one. When requesting an assessment from an AI clinical support system, it should therefore be just as easy to ask three – unless a monopolistic supplier limits your care to that provided by their models.
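To make that concrete, here is a minimal sketch of what ‘asking three’ could look like once the interface is standard. It is illustrative only: the endpoint URLs and the ‘flaggedFindings’ response field are our assumptions, not any real supplier’s API, and a real deployment would exchange proper FHIR resources such as a DiagnosticReport.

```python
# Hypothetical sketch: consulting three independent AI clinical support
# services. The endpoint URLs and the "flaggedFindings" response field
# are illustrative assumptions, not any real supplier's API.
import requests

CDS_ENDPOINTS = [
    "https://ai-one.example.nhs.uk/fhir/assess",
    "https://ai-two.example.nhs.uk/fhir/assess",
    "https://ai-three.example.nhs.uk/fhir/assess",
]


def request_opinions(patient_bundle: dict) -> list[dict]:
    """POST the same FHIR Bundle to every configured service.

    Asking three systems is the same loop as asking one; a fourth
    opinion is a one-line change to CDS_ENDPOINTS.
    """
    opinions = []
    for url in CDS_ENDPOINTS:
        resp = requests.post(url, json=patient_bundle, timeout=30)
        resp.raise_for_status()
        opinions.append(resp.json())  # assumed to include "flaggedFindings"
    return opinions


def findings_to_review(opinions: list[dict]) -> set[str]:
    """Union of all flagged findings: one system raising a finding is
    enough to prompt a new consideration by the care team."""
    return {f for op in opinions for f in op.get("flaggedFindings", [])}
```

The structural point is that, once interoperability is real, the number of opinions consulted becomes a configuration choice rather than an integration project – and a finding flagged by any one system reaches the care team.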

Diversity has sound economic reasons too: a mandate for multiple opinions would ensure a healthy, competitive market in AIs for clinical support. Such a mandate need not raise costs: the marginal cost of an additional software opinion is close to zero, while the mandate itself would triple the size of the market – and it would ensure a continual process of innovation. Over time, as AI improves, there would be minimal risk in moving to newer systems; during a testing phase, four opinions are as easy to consolidate as three.

Also, where patients consent to research, their health outcomes can, over time, become a measure of the different approaches. If AIs’ outputs are measured on their clinical benefit, “best” can become a clinical outcome – not a marketing claim. This also delivers on the Government’s commitment that patients should know how their records have been used, and what was learnt from those projects.

In short, a mandate for progress through safe innovation is deliverable today, in line with professional practice and medical ethics, if that is what we want.


A National Health Service

Markets around the NHS must themselves be sustainable, and the NHS is in a position – as a research and development institution, and as the data controller in multiple clinical environments – to manage the rapid development and testing of AI lawfully, where a recent flagship project did so unlawfully.

It is clear, however, that some institutions within the NHS feel they are required to give up their patients’ data to avoid “falling behind”. All they are demonstrating is their own lack of awareness.

Every AI company is dependent upon masses of data; some may try to ‘free ride’ off the NHS infrastructure, hoping to copy, for profit, some of the patients’ data that flows through it – without even paying the taxes that fund the NHS. Whatever the case, in every ‘deal’ that is made, the original data controller remains the data controller – and there is no result that cannot later be replicated (more cheaply) by another hospital with a similar dataset, building on shared experience and published results.

Simply believing ‘the smartest guys in the room’ is neither wise nor the only choice. Novelty can indeed be part of the legitimate research and care process, but the innovations we need cannot involve the secret testing of AIs on humans without their knowledge or consent.

Great risk to the NHS comes only from the perverse incentives of commercial monopoly, grounded in the belief that there should be just a few data silos. (Guess whose?)

Google DeepMind’s Health division might be entirely dependent upon a continued supply of NHS data, but the NHS is not dependent upon Google unless it chooses to be; other AI developers – and search engines – are available. The NHS is not in a position to ensure an effective market in search engines, but, just as it already does for health information, it has the authority to ensure one for clinical assistance – assuming there is the political desire for a functional and sustainable system.


This will form the basis of Part II of medConfidential’s submission to the House of Lords Inquiry on AI. We’d welcome your thoughts at sam@medconfidential.org / @smithsam
