It is clear that AI will revolutionize healthcare. Effective AI applications can save medical practitioners and healthcare institutions time and large sums of money and, even more importantly, improve outcomes for patients by making the expertise of the world's leading specialists available wherever there is a computer.
Numerous studies demonstrate high-performing AI applications. In one, researchers at Heidelberg University tested an AI designed to distinguish dangerous skin lesions from benign ones and found that it accurately detected 95% of skin cancers. This particular AI was developed using a dataset comprising over 100,000 dermoscopic images. A solution like this could ideally support the diagnostic and decision-making processes of physicians screening patients for skin cancer.
Unfortunately, despite the many journal articles featuring new Machine Learning (ML) models, very few AI-based solutions in healthcare have so far been brought to market. Earlier this year, Nature published a systematic review that assessed the usefulness of ML models developed to support the diagnosis and treatment of Covid-19, examining over 2,000 articles from 2020. Only 62 were of sufficient quality to be included in the review and, of these, none of the ML models was of potential clinical use, owing to a high or unclear risk of bias in the datasets or methodological flaws in the processing of the data. Datasets were small, unbalanced or poorly recorded, and few studies validated their results.
Successful projects that embody experts' knowledge in an ML model are a very promising sign of what is to come. But it is one thing to design a promising ML model, and quite another to build from it an AI that will be used in clinical practice. Medical AI applications must not only meet the emerging standards for AI but also satisfy the existing regulatory standards governing software used for medical purposes, proving that they are effective and safe for clinical use. Just as trained medical professionals must certify their ability to diagnose and treat patients safely, so should AI.
Bridging the gulf between developing an interesting ML model and developing an effective AI that can make it into a real-world clinical setting will not be easy. But what good are ML models that aim to replicate the world's greatest medical expertise if their benefits cannot be passed on to patients? Those of us working in this field must focus on how we reach the end goal of improving healthcare, and a significant part of that means addressing the processes adopted within ML model development.
Capturing specialist knowledge through the manual annotation of data and then using these datasets to develop ML models, known as supervised learning, is an easy-to-comprehend process that offers great traceability. However, developing the training dataset can involve domain experts (e.g. surgeons, physiotherapists, pediatric nurses) spending countless hours trawling through medical data and marking up thousands of images. It is therefore not surprising that, within ML model development, up to 80% of project costs are associated with the data and the preparation of training datasets. Those already working in machine learning will know that this time-consuming task is necessary to produce the highest-quality datasets, a prerequisite for trustworthy AI.
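To make the supervised-learning workflow concrete, here is a minimal illustrative sketch in Python using scikit-learn. It is not gliff.ai's pipeline: the random feature vectors stand in for features extracted from expert-annotated images, and the labels stand in for expert annotations (benign vs. malignant). The held-out validation split illustrates the validation step that the Nature review found so many studies lacked.

```python
# Toy supervised-learning sketch: expert-provided labels drive model training.
# The synthetic vectors below are stand-ins for real image features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two synthetic classes: 0 ("benign") and 1 ("malignant"), as if labelled
# by a domain expert during annotation.
X = np.vstack([rng.normal(0.0, 1.0, (200, 16)),
               rng.normal(1.5, 1.0, (200, 16))])
y = np.array([0] * 200 + [1] * 200)

# Hold out a validation set so performance is measured on unseen data.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_val, y_val)
print(f"validation accuracy: {accuracy:.2f}")
```

In a real medical AI project the features would come from curated, versioned dermoscopic images, and a single hold-out split would be supplemented by external validation on independent cohorts.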
Using in-house expertise in medical imaging and data science, gliff.ai has developed an innovative software platform specifically designed to support the development of trustworthy AI, filling the gap for much-needed Machine Learning Operations (MLOps) products. We are acutely aware of the need to develop high-quality datasets and to comply with the emerging standards and frameworks for developing AI.
To make the process of developing trusted AI as simple as possible, our user-friendly tools for curating, annotating and managing datasets are brought together in one platform. Using these datasets, data scientists can then develop new AI models. All your data is end-to-end encrypted, so only you and your team can ever see it. Furthermore, obscured code is unacceptable in regulated AI development, which is why our code is Open Source and available for inspection by users and regulators alike.
gliff.ai’s platform, the first stage of its MLOps software suite, gives users the functionality to:
- CURATE datasets, combining thousands of images and annotations, including image labels, all with complete dataset versioning
- ANNOTATE images with intuitive tools to capture domain expert knowledge through image-level, region-level and pixel-level annotation
- MANAGE teams on projects, including assigning images to individuals for annotation, comparing annotations from different individuals and seeing progress with project insights
This year, gliff.ai will also be releasing two further products which will enable users to:
- REGISTER individual AI models and a complete training history in an auditable fashion with AI model versioning
- AUDIT projects and access complete histories of datasets, annotations and AI models that can be downloaded to complete regulatory documentation
gliff.ai is about bringing the world’s experts together in one environment. Only by doing this can we create amazing AI that really changes the world.