There is considerable excitement about the potential of AI to deliver more accessible, efficient, and high-quality healthcare, alongside concern about data privacy, bias, and how these new tools will be used in clinical practice.
The key to realising the benefits and addressing the concerns is the adoption of standards for the development and implementation of AI by manufacturers and their customers. And the good news is that standards are both available and developing rapidly, says Dean Mawson, clinical director and founder of DPM Digital Health Consultancy.
There’s considerable interest in the potential uses of AI in healthcare at the moment, but there is also concern about the risks it could pose.
Challenges include questions about data privacy and algorithmic bias, how to ensure that AI tools are subject to robust validation and testing, and how to make sure they are used safely in clinical settings.
To address these issues, manufacturers will need to be transparent about their data models and the way their algorithms are trained and validated.
There will also need to be more education and training for the people who procure and use these tools.
Building trust
However, that will only take us so far.
Manufacturers are, understandably, keen to protect their intellectual property – and some AI operates as a ‘black box’, where we can see only the inputs and outputs.
At the same time, busy healthcare organisations, clinicians and patients need to understand the fundamentals, but are never going to be experts in such a complex area.
So, how do we secure the adoption of AI in this environment, and make sure its risks are properly managed?
The key is going to be ‘trust’, which the Oxford English Dictionary defines as: ‘a firm belief in the reliability, truth, or ability of someone or something’.
And one way in which other sectors, from airlines to engineering and med tech, build trust is through regulation.