Artificial intelligence (AI) has many potential applications in health care. Divya Srivastava writes that as policymakers grapple with the question of how to regulate AI health solutions, there is a clear need for international collaboration founded on robust research.
Digital health technologies have become an important part of health care systems across the world. This rise in popularity reflects rapid advances in wireless technology and computing power as well as increasing interest in the application of artificial intelligence and machine learning in health care.
Digital health technologies can be used by patients, health care professionals and health system managers, as well as for data services. They include the use of apps, programmes, software and AI in public health interventions and for specific procedures or therapeutic purposes. They can be used in isolation or combined with other products such as medical devices and diagnostic tests. Other important digital health technologies are used in patient administrative and operational support systems.
A more proactive approach
As recently as a few years ago, many countries did not have a strategy for digital health or explicit regulation around market access, safety and quality. Health policies did not address how to support the implementation and adoption of digital health technologies at scale. Furthermore, decision-making around digital health technologies tended to operate in silos, with privacy authorities thinking about data concerns independently of health authorities focusing on safety, quality and efficacy.
Fragmented decision-making remains a challenge, but growing interest in the intersection between AI and health care has shifted the decision-making landscape and could in part address this. First, many countries are taking a proactive, government-wide approach to AI that brings key institutions and stakeholders together. Second, there is growing interest in how to regulate AI solutions in health care. Third, at the international level, the World Health Organization and the Organisation for Economic Co-operation and Development are actively informing this policy debate, creating a space for decision-makers to come together.
What needs to happen?
With continued growth in the supply of and demand for digital health technologies, accelerated by the arrival of COVID-19, research and policy attention in this area has increased both nationally and internationally. For all the potential opportunities of AI in health care, the fundamental question of whether it is a force for good remains. Any attempt to answer this question should be grounded in the following principles, which support the knowledge and evidence base for responsible AI.
First, policymakers should understand the risk and functionality of AI solutions. For example, a solution that performs simple monitoring carries relatively low risk, while one that supports diagnosis and clinical decisions carries higher risk.
Second, countries should have evidence standards for AI solutions in place. Health economic evaluation provides methods to assess the costs and benefits of medicines and medical technologies. In individual countries, health technology assessment bodies apply these methods to inform decisions around their use and adoption. For example, in the UK, the health technology assessment body – the National Institute for Health and Care Excellence – updated its Evidence Standards Framework to include the evidence requirements for AI solutions.
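To illustrate what these methods involve, the workhorse calculation of health economic evaluation is the incremental cost-effectiveness ratio (ICER), which compares a new technology against standard care; the notation below is illustrative and is not drawn from the NICE framework:

\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{standard}}}{E_{\text{new}} - E_{\text{standard}}}
\]

Here \(C\) denotes costs and \(E\) denotes health effects, commonly measured in quality-adjusted life years. A technology is typically judged cost-effective when its ICER falls below a willingness-to-pay threshold, and in principle the same logic can be applied to an AI solution compared with usual care.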
Third, robust studies of AI are needed. Our recent work on establishing standards for economic evaluations of AI is a step in this direction, improving the calibre of, and providing a benchmark for, AI-related research outputs. Fourth, AI in health care is an active area of ongoing learning, requiring ways to test solutions and their applications. Collaboration could bring many benefits in this respect. Potential actions include listening to and engaging with the public about their concerns, publicly reporting and monitoring AI performance, setting out rules on data control, incentivising and overseeing adherence to responsible AI principles, and monitoring solutions and applications once they are available on the market.
Finally, international collaboration will complement national efforts. This might include a focus on operationalising policies and codes of conduct that remove unnecessary barriers to responsible AI while ensuring that appropriate risk classification frameworks, mitigation measures and oversight are in place. As market activity continues to grow, tracking the development of AI policy will improve our collective knowledge.
International forums offer a space for collective learning: identifying policy responses, solving problems jointly and coordinating to mitigate barriers. AI has become a key use case for ongoing collaboration and learning in global health. Indeed, it brings to the fore a notion the National Academy of Medicine articulated almost two decades ago, the learning health system, a model for continuous learning that resonates strongly for AI in health and is more pressing now than ever before.
Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics.