With the new ‘Enterprise’ product, large companies can opt for an on-premises deployment…
Last week, the UK Government announced the pilot of an impact assessment tool to support the ethical development and adoption of artificial intelligence (AI) within healthcare.
“It is great to be leading the world in software development for AI…”
Some of the world’s most influential medical regulators have released guidelines on good machine learning practice for the development of medical artificial intelligence (AI).
In Autumn 2021, gliff.ai completed a feasibility study for an anti-counterfeiting imaging artificial intelligence (AI) platform, working in partnership with Durham University and a multinational manufacturer.
gliff.ai has launched its innovative software platform, specifically designed to support the development of trustworthy artificial intelligence (AI) by filling the gap for much-needed Machine Learning Operations (MLOps) products.
Coded Bias demonstrates how unregulated, ill-thought-out and opaque approaches to developing AI can easily lead to flawed models that not only discriminate against certain groups in society but also fall drastically short of the applications’ original objectives.