Combine two immiscible and complex topics and the outcome is a devil’s amalgam of issues to resolve, as a House of Lords select committee exploring uses of artificial intelligence in healthcare discovered this week, writes John Egan.
Healthcare fundamentally is a matter of risk and benefit.
All treatments carry some risk, as does the judgement not to treat. The very personal decision of whether the risk is justified by the potential benefit is at the heart of the patient-clinician relationship.
New methods of analysing healthcare data, machine learning in general and artificial intelligence (AI) in particular, offer great potential in assessing risk and balancing it against benefit, and should therefore be valuable when advising on treatment decisions. AI may be able to spot small cancerous shadows on x-rays that might otherwise go unnoticed, enabling earlier treatment.
Healthcare is about people. AI is, by definition, a non-human technology. Like a meeting of two tectonic plates, the combining of the two will not be smooth. A committee hearing on Tuesday of the House of Lords Select Committee on Artificial Intelligence sought to reconcile the issues.
“AI is complementary to conventional clinical practice and does not replace it”, was a repeated chorus of those who gave evidence, aiming to allay fears on the human side. For the moment – possibly.
The trouble with AI is that, just as with its human equivalent, it is only as good as it has been trained to be. And, like the human brain, its reasoning is difficult to decipher. It is very much a “black box”.
Important “training sets” of clinical data, which pair symptoms with diagnoses, enable computerised AI systems to learn to recognise patterns in the symptoms and to predict diagnoses. It matters that a patient is well represented in the training set, socially, genetically and demographically speaking. Otherwise the AI's advice could well be off the mark.
So there is social value in hospitals sharing the clinical data of their patients: data that properly represents the local population will support future healthcare decisions.
But there is also commercial value in the clinical data, as this can only be sourced from patients through their hospitals and clinicians. Committee member Baroness Grender MBE was keen to explore whether the UK National Health Service (NHS), in its current perilous financial state, would be properly reimbursed for its valuable contribution.
But how much is the raw data worth? It is not widely traded, so there is no established market price. AI companies such as Alphabet, owner of Google, have such market dominance that they can go elsewhere to find a better deal, and the resulting training sets may then be less representative.
The AI committee posed the question: “Is there such a thing as NHS data?”
The question sought to reveal how clinical data within the many NHS organisations is held in a multitude of different forms and media, and with varying levels of completeness and validation, requiring huge consolidation and refinement before it is of any use for AI training.
“The digital revolution has largely bypassed the NHS, which, in 2017, still retains the dubious title of being the world’s largest purchaser of fax machines. Many records are insecure paper-based systems which are unwieldy and difficult to use”, the report says.
DeepMind was represented at the AI committee meeting by former Liberal Democrat MP Dr Julian Huppert, who is now chair of its Independent Review Panel. Dr Huppert described how the panel's independence is maintained by it not being contractually bound to DeepMind, and how it provides a means of demonstrating the transparency of the NHS's collaboration with the company.
Using patient data is a sensitive subject. Those gathered were unanimous in agreement – this data cannot be fully anonymised. Patients risk being “re-identified” from the medical information held in clinical databases. This practice is about to be made illegal in the UK, but the sanction will only reach as far as UK jurisdiction – with limited effect as a consequence.
Dr Hugh Harvey, a consultant radiologist from Guy’s and St Thomas’ NHS Foundation Trust, observed that where an AI-inferred diagnosis may be important for the individual concerned, it could be vital that they be traced and, if necessary, treated.
Dr Harvey also noted that some individuals seem relaxed about sharing their personal data, as evidenced by their willingness to share this with social network companies in order to use their services. With the much greater potential benefit arising from the analysis of their clinical data, sharing this with AI companies may be even more welcome.
Perhaps the broader sharing of clinical data with internet companies is in any case inevitable. Amazon and other platforms may soon be distributing pharmaceuticals. As well as knowing individuals' purchasing preferences, these companies will also be aware of their ailments, inferred using artificial intelligence.
The power of the internet platforms needs to be balanced with effective regulation and multinational legislation in order to protect patients, according to Dr Huppert. Added to this is the need for transparency to reassure patients that the essential collaboration between healthcare providers and private technology companies is working in their interests.
In a world where some media find it more profitable to frighten people than to inform them, it will be important to encourage more balanced reporting, so that the public can be informed about the complex world of risk and benefit that AI brings to healthcare.
Headline Photo Credit: Elnur/Shutterstock.com