UK to Pilot Algorithmic Impact Assessment (AIA) for Medical AI

Last week, the UK Government announced the pilot of an impact assessment tool to support the ethical development and adoption of artificial intelligence (AI) within healthcare.
‘Coded Bias’ brings bias in AI into the mainstream

Coded Bias demonstrates how unregulated, ill-considered and opaque approaches to developing AI can easily produce flawed models that not only discriminate against certain groups in society but also fail drastically to meet their original objectives.