Creating A People-Powered Future For AI In Health

The potential for technology to radically transform healthcare is a topic I've touched upon numerous times on this blog.  Absorbing technologies such as artificial intelligence has been an ongoing challenge for healthcare providers, however, with recent studies suggesting that change will amount to little more than tinkering at the edges rather than the systemic transformation these new technologies make possible.

The innovation charity Nesta is usually among the more enlightened thinkers on such topics, so it was interesting to read its latest report on the future of the AI-powered health system in the UK.

The report highlights the importance of ensuring that any deployment of AI technologies works towards making healthcare more accessible and patient-focused, rather than acting as a barrier to access.

AI in healthcare

Firstly, however, it's pleasing to see a more realistic portrayal of AI's capabilities and likely use cases in the coming years.  Whilst there have been many predictions of AI replacing doctors, the most likely application is in supporting them to triage patients and provide advice.

“Current-generation AI seems likely to be adopted in health where there is not much of a competing solution, rather than replacing humans at things they are not good at,” the authors say.

They propose a number of principles by which AI should be deployed in healthcare:

  • AI should give citizens a much better understanding of their health and how it can be improved.
  • AI should be designed to make it faster and easier for patients to resolve their health problems.
  • The relationship between doctor and patient should remain central, with AI used to ensure that those conversations are with the right people, at the right time, and with the right information.
  • AI should not exacerbate health inequalities.
  • AI should be understandable, questionable and held to account, both by citizens and health professionals.

To ensure this happens, the report makes a number of policy recommendations:

  • Public and clinical scrutiny – both citizens and healthcare professionals should be involved in the design, development and implementation of AI technologies, with public panels created to provide oversight and to ensure that the voice of the public is heard.
  • Tests in real-world conditions – to ensure the reliability of the technology, testing should be done in real-world conditions as much as possible.
  • Decision-makers equipped to be informed users – education of public leaders and decision-makers so that they have the technical skills and authority to scrutinize and manage AI in a responsible way.

“There is currently a window of opportunity to shape the future of AI in health,” the authors say. “Policymakers are already working to set rules for AI and ownership of public data that ensure the public gets not only value for any data it decides to share, but also privacy elsewhere.”

Given the speed with which the technology is developing, however, and the rather slower pace at which decision-makers in both government and healthcare tend to move, there is a risk of this window becoming quite small, quite quickly.  The last few years have seen a growing volume of reports and whitepapers outlining steps that could be taken to ensure AI evolves in the right way, but it remains unclear whether any of those recommendations have actually been acted upon.
