This ANSI standard, developed by the US Consumer Technology Association (CTA) Artificial Intelligence (AI) working group, was released in the last week of February 2021. To demonstrate the trustworthiness of an AI in healthcare, the standard usefully considers Human, Technical and Regulatory Trust separately, but it may end up trying to do too much.
Human Trust. Or, how do we make sure that users feel they can trust the AI in their healthcare software?
The specifications in this section will be familiar to healthcare software developers because good User Interfaces (UI) are so important for safe clinical use. A good User Experience (UX) is an art form, often attained through iterative design methods, but usability regulations already point to structured ways of planning for all expected outcomes, including mistakes that users might make. Socio-technical analysis and human factors engineering are accepted frameworks for identifying the complete range of normal use, for showing that the software works as expected when used correctly, and for demonstrating that use errors are mitigated. This analysis increases trust.
Which specifications are specific to AI? The ANSI standard wants it to be clear that when an AI model powers the decisions that software makes, it is the developers’ responsibility to explain how the model works. Human Trust is one part of UX, but it would have been more useful if the standard had made clear which specifications differ for software that contains an AI model.
Technical Trust. Or, is any of the data poor, and were shortcuts taken in acquiring or storing it?
This comprehensive section of the ANSI standard sets out requirements for the data that is used to create the AI model and the features that derive from it. Uncontroversially, it stresses that the AI model developer should retain the ability to assess the source data for bias and for how representative it is of the target situation. Furthermore, the datasets used for training, testing and updating the AI model must be fully curated.
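As an illustration of what "retaining the ability to assess the source data" might look like in practice, here is a minimal sketch (the field names and figures are hypothetical, not taken from the standard) that compares subgroup shares in a dataset against the shares expected in the target clinical population:

```python
from collections import Counter

def representativeness_gap(samples, target_shares, field="age_band"):
    """Compare the share of each subgroup in the dataset against the
    share expected in the target clinical population.

    Returns {subgroup: dataset_share - target_share}; large absolute
    gaps flag under- or over-represented subgroups worth investigating.
    """
    counts = Counter(sample[field] for sample in samples)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - share
        for group, share in target_shares.items()
    }

# Hypothetical example: the deployment population is 30% under-50 and
# 70% over-50, but the training data skews younger.
data = [{"age_band": "under_50"}] * 60 + [{"age_band": "over_50"}] * 40
gaps = representativeness_gap(data, {"under_50": 0.30, "over_50": 0.70})
# under_50 is over-represented by about 0.30; over_50 under-represented
```

A real assessment would of course cover many more dimensions (site, scanner, comorbidity, and so on), but the point is that the raw data must remain accessible so that checks like this can be run at all.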
But the specifications also say that the AI model developer must keep sufficiently detailed contextual information about the data and datasets, even where it is not directly related to the AI model. This includes the conditions under which the data were collected, acquired and stored: the data construct, dates acquired, ethics, consent and privacy. There should also be a narrative of data security during the AI model development, training and deployment, including plans for software updates.
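One way to picture this contextual record is as a structured provenance document kept alongside each dataset. A minimal sketch follows; the field names and example values are illustrative assumptions, not terms defined by the standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenance:
    """Contextual record kept alongside a dataset, covering the points
    the standard raises: collection conditions, dates, ethics, consent
    and privacy. Field names are illustrative, not from the standard."""
    name: str
    data_construct: str            # what the data represents clinically
    collection_conditions: str     # how and where the data were collected
    acquired_from: date
    acquired_to: date
    ethics_approval: str           # e.g. ethics committee / IRB reference
    consent_basis: str             # e.g. explicit consent, waiver
    privacy_measures: list[str] = field(default_factory=list)

# Hypothetical dataset record
record = DatasetProvenance(
    name="chest-xray-train-v2",
    data_construct="Adult chest radiographs with pneumonia labels",
    collection_conditions="Two teaching hospitals, PACS export",
    acquired_from=date(2018, 1, 1),
    acquired_to=date(2019, 12, 31),
    ethics_approval="IRB-2018-042 (hypothetical reference)",
    consent_basis="Retrospective use under ethics waiver",
    privacy_measures=["DICOM de-identification", "date shifting"],
)
```

Keeping this record versioned next to the dataset makes the later audit questions (who consented, when, under what conditions) answerable without archaeology.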
Regulatory Trust. What regulatory oversight matters for AI in healthcare?
This is a relatively short section that says that the relevant regulatory frameworks must be satisfied for the environment where the software will be used. There is special mention of international standards for: software lifecycle processes (IEC 62304), risk management (ISO 14971), and usability (IEC 62366).
The ANSI standard might more usefully be read in reverse:
- Regulatory Trust covers what we as software developers already know about developing safe and effective software for healthcare, including software with AI models.
- Since an AI model truly is only as good as its input data, Technical Trust starts at the data source. Good data means ethically sourced and curated datasets that remain available to be tested for bias and appropriateness, with auditable annotations. It also means data security.
- And finally, to complete Human Trust in the AI, add the usability specifications that users need and that are unique to AI models.