AI and demonstrations of political power

Last September, a company which helps institutions understand data started a new project. What their client wanted was to tell whether one category of videos could be distinguished from others. The project was successful on a test dataset, and they produced a demo. The very happy client forwarded this to their boss, who sent it to their boss, and so on, until the Home Secretary went on TV to say that the Home Office had better technology than Google for blocking ISIS videos. At no point was there any need to test or explain whether the demo worked beyond the test data. That seems to be the standard for AI – is data processing a place where the rule of law doesn’t matter?

The Department of Health also launched guidance on “decision making” AIs for use in the NHS. Innovations have always come to healthcare, and spread, based on understanding – there is no need to throw out all past lessons because a PR budget gets spent. Separately, the “Malicious AI” report is worth reading – but it already feels dated, as the risks are both real and timely, and political imperatives are rarely spun as malicious.

Given the option of a quick headline and a cheap political point, politicians will choose to score it. With digital systems of any kind, there is a temptation to take a shortcut and claim victory. The Home Office claimed to have an AI which did what it wanted – by ignoring any real-world requirements that made things harder. This is not the greatest of precedents for public bodies using AI tools to make decisions, especially about groups who do not command easy political support.

“Explainability” is just putting in the extra time to test models and understand how they work – rather than selling the first thing that seems to meet the goal. The faster approach may have short-term financial benefits, but it is more widely toxic – an outcome generally best avoided. The Home Office can make this claim for this AI because it is the first Department to do so; the next such claim will be treated with the greater scepticism it deserves. We’ve put the Home Office statements into the Government’s ethics framework – which again shows the failures of that framework.

‘Trustworthy technology’ needs to address systemic harms. The first-mover advantage on AI will go to those with the least moral concern about real-world effectiveness, until there is a clear reputational cost for continuing to work on systems which are known to be damaging. This is why the ‘most successful’ public sector AI project in the UK comes from the Home Office – harms to others are something it has never bothered to avoid.

What started out as a legitimate technology project demonstrating potential – can AI help identify ISIS videos? – was spun as something else. Had explainability been a prerequisite for the project being considered the success that was claimed (rather than simply a stage of a trial), that claim could not have been made. Where an entity refuses to follow such processes, as in the drug discovery arena, reputable actors should simply refuse to deal with them – such refusal should be one of the requirements of being seen as a reputable actor. The Partnership on AI was supposed to consider how to address such issues – but companies outside the partnership aren’t bound by its rules… Yet many of the staff of those outside would not wish to be barred from working there due to other associations (there must, of course, be a way to demonstrate that lessons have been learnt)…

The AI guidance from the NHS contains a checklist, written by “industry” and the Wellcome Trust, which is so vague it barely addresses previous problems, let alone handles future questions. There is no consideration of the principles of ‘trustworthy technology’ by developers, nor any reference to protocols for decision-making AIs equivalent to those we have for ensuring doctors are trained or new medicines are safe. Claiming a phase 0 success is one thing (whether for a drug or an AI); claiming a phase 3 success is quite another – and so it should be with machine learning tools that need to be explained.

Many of the greatest failures of the Home Office are due to technical ineptitude. While its policy cannot correctly distinguish an arse from an elbow, technology has moved on sufficiently to do it for them, letting Marsham Street ignore the details while delivering the opposite of human flourishing.

Does HMG wish to be permanently excluded from buying from, or working with, partnership members because it chased a cheap headline? Does the partnership have the willingness to ensure members deliver “responsible” AI? It is the public headlines and narrative that matter, and the biggest headline about AI in Government is that of the Home Office wanting to choose content purely on the basis of a single spurious claim. Government acts as a single customer to its suppliers; on AI and ethics, the reverse must also be true.


Data Protection and Datasets of National Significance

Second reading of the Data Protection Bill is in a week – and Government has still not explained the effects of its proposals to centralise data policy in the periphery of Whitehall. As DCMS struggle with a politically led centre for AI, data and ethics, announcements like the one from the Home Office will grow. Not because they have solved any problems, but because they have done something which redefines the problem just enough for the box to be ticked, political claims to be made, and someone else left to pick up the pieces. The Home Office does not care about DCMS politics or policy – but which way will Google DeepMind be lobbying DCMS on this?

Lord Mitchell amended the Data Protection Bill to require public bodies to estimate the worth of their “datasets of national significance”. Lord Darzi is thinking along similar lines about a new deal with patients. While both are good and worthy initiatives deserving of time, there is a risk that other policies will make them irrelevant.

Lord Mitchell’s amendment mandates an assessment that should be written down, but under current rules, what HM Treasury will force NHS England or any public body to write is that giving data to private companies that employ UK staff will create new tax revenues only from those staff (since company profits go offshore). One NHS trust working with Google might create a nice deal for itself from some data – but the rest of the NHS will still have to pay a much higher rate.

What will happen when the public understand that this is how their data gets used, and where the money goes?

Even if the Government take the clause out of the Data Protection Bill, whether UK data should be flowing to tax havens is likely to become an increasingly important question for public debate. This question is not going away – NHS Digital already do some checks that they are not dealing with an empty shell company (PHE’s only meaningful step is to check that the fees have arrived in its bank account). Does Government wish to ignore an issue that will resonate with the public on data, or leave in place the small and sensible steps Lord Mitchell added to the Bill?

Enclosed: