The selfish fiction of “Safe Return” is reckless and unsafe

Late last year, Our Future Health (OFH) decided very, very quietly that its testing and feedback to patients was so full of mistakes that it would skip feedback unless something was truly and clearly an emergency. Even with millions in resources, OFH's shortcuts had caused unnecessary stress for its members: the worried well were misled by OFH failures and encouraged to seek medical support they did not need.

OFH, Biobank, and HDRUK share a culture, one which shows similar recklessness by breaking the promises of modern and safe ways of working, and by undermining governance processes that were put there to keep analysts honest and patients safe.

The “five safes” Trusted Research Environments model has been tested over decades: “safe people” doing “safe projects” on “safe data” in “safe settings” to produce “safe outputs”. While the precise meaning of the five safes evolves with the context and the datasets, half-baked additions of “new safes” weaken the whole model for short-term gain. Anyone who tries to weaken the model should be assumed to be untrustworthy across their entire approach, as HDRUK’s recklessness demonstrates.

Of course, the Department of Health in England prefers the acronym “SDE” because Sometimes Data Escapes.

No model is infallible; cheats and crooks will always try to game the system, and an organisation claiming to follow the “five safes” can still catastrophically screw up. But understanding the model is a necessary starting point, and there is one way it is most often undermined for gain.


The most common way of undermining the “five safes”

Some researchers dream that they’ll write the Stata code and find a cure for cancer, they’ll knock out a preprint, and the Nobel committee will wake them up the next morning instead of their alarm.

The fiction of a “sixth safe” – safe return – doesn’t even make it into that dream. It is the fantasy that a researcher will come up with something so novel and so groundbreaking that they must contact doctors immediately and directly to tell them to change how they treat patients. It is the sort of ego-driven idea you get from analysts who never deal with patients (it’s common at the Department of Health in England).

Even in the dream above, there is dissemination of a paper undergoing peer review and scope for replication. Doctors can read the new idea, see debate, and decide what is best for their patients given all the evidence available at the time, reflecting that different patients have different needs. A good preprint would contain enough detail to show how another organisation could, independently, repeat the analysis on its own patients. Researchers interested in patient care use open ways of working to share analytical code so that colleagues can check and reuse the analysis rapidly; they can test edge cases and the diversity of conditions that may only be visible from further away. It’s how good science advances.

“Safe return” abandons good practice in favour of secrets and bluster. There is no scrutiny, no scope for other input, no reassessment. The original researchers play God, believing they must second-guess the treatment decisions of doctors with clinical responsibility. In the context of the Government’s proposal for a politically controlled single patient record, DH/E suggests it would be politicians making the decisions themselves, for their own political reasons. (RFK Jr style?)

Even covid discoveries didn’t need the fiction

Contrast this with the RECOVERY trial for covid. Finding dexamethasone was the start, but the approach was not to micromanage and second-guess doctors around the country or the world; it was to write a paper, give it attention, and let clinicians make informed decisions about their patients. That system worked because everyone understood what was being done and why.

Your doctor already has your medical record to use in your diagnosis, and doesn’t have to take anyone’s word for anything, especially untested treatments.

In the very rare, truly exceptional (fictional) case (which never happens, but egos argue there may be gold at the end of their research paper rainbow), “safe outputs” can justify an exceptional release of personal data back to the original data controller, adding a new variable: the risk assessment. If it really matters, the researchers can do the work to explain what is there and why it matters, rather than tossing their incomplete research back to clinicians to deal with without sufficient information. If it’s not published, it’s not yet research.

But the shared toxic cultures of HDRUK and Biobank lead to decisions that benefit themselves before patients.