Some of the world’s most influential medical regulators have released guidelines for machine learning practices used in the development of medical Artificial Intelligence (AI). The regulatory landscape for medical AI is still evolving to catch up with the technological advances in this area. However, the new guidelines, published by UK, US and Canadian government bodies, provide a significant leap forward for building trustworthy AI for medical use.
AI will revolutionise healthcare, but it is paramount that AI applications are developed to the highest standards to ensure effectiveness and safety for clinical professionals and their patients.
Last week, the UK Government’s Medicines and Healthcare products Regulatory Agency (MHRA) published a joint document with the U.S. Food and Drug Administration (FDA) and Health Canada, titled “Good Machine Learning Practice for Medical Device Development: Guiding Principles.” The partnership between these governmental organisations is indicative of the international efforts being undertaken to establish consensus in global standards for medical AI.
The document outlines 10 principles for Good Machine Learning Practice that will help promote safe, effective and high-quality medical devices that use AI and machine learning. The development of medical AI is iterative, data-driven and uniquely complex. It is therefore no surprise that the principles span everything from the collection of patient data to the design of machine learning models that fit the data and its intended use cases. Furthermore, the first principle emphasises the need for expertise from different disciplines to be incorporated throughout the product life cycle. This collaboration between different stakeholders (e.g. clinical experts and data scientists) to create medical AI is essential and is often underestimated – an issue addressed in this article by gliff.ai from April 2021 about the “missing people in machine learning.”
gliff.ai, an innovative start-up spun out from Durham University, has strongly welcomed the release of these guidelines. gliff.ai’s software is specifically designed to assist the development of trustworthy AI by addressing the gap for much-needed Machine Learning Operations (MLOps) products.
“We’re really excited to see that these guidelines emphasise the role of high-quality data in AI development, as well as the need to involve experts from different disciplines,” says Chas Nelson, CTO of gliff.ai. “At gliff.ai, we’ve already been focusing on addressing the issues presented in these guidelines – for example, our platform facilitates collaboration between experts in clinical professions and data science, so that they can develop high-quality datasets together.”
Lucille Valentine, gliff.ai’s Head of Regulation and Compliance, closely monitors global developments in standards and regulation pertaining to AI and machine learning. “These international guidelines represent a huge milestone for medical AI development. What’s more, they underline the case that AI developers must use a first-rate MLOps system to develop their datasets, especially if they wish to see their products put into use in the real world,” says Lucille.
Whilst these guidelines demonstrate a leap forward for trustworthy medical AI, they form only part of the foundations required for a global set of standards. The intergovernmental partnership expects that its 10 principles will help inform further international engagement with public health bodies.