Insights

Interview with Dr Phil Jackson, Software Engineer

Have you ever wondered what it’s like working at gliff.ai? We asked our Software Engineer, Dr Phil Jackson, about his experience.

Using gliff.ai to Realise the Potential of OCT Data

The aim of the OCT data project is to create a ground-truth annotated dataset that will be used in the future development of a fully automated AI tool capable of segmenting and quantifying different features in retinal images and scans...

gliff.ai Launches Innovative Platform for Developing Trustworthy AI

gliff.ai has launched its innovative software platform, specifically designed to support the development of trustworthy Artificial Intelligence (AI) by filling the gap for much-needed Machine Learning Operations (MLOps) products.

Keeping your data secure and private with end-to-end encryption

We at gliff.ai believe that data privacy and security are paramount to the future of FATEful AI.

AI in MedTech — What are we actually doing?

The use of artificial intelligence in medicine and healthcare is thought to be capable of freeing up large quantities of an expert’s time by undertaking exacting and laborious work, saving significant sums of money, improving diagnostic outcomes and democratising medicine by making the world’s experts available globally inside a computer.

The missing people in MLOps

MLOps brings together the scientists who develop early-stage AI systems with the engineers and operations experts who can convert those models into reliable, scalable products, such as apps on your phone or smart microscopes in the hospital.

‘Coded Bias’ brings bias in AI into the mainstream

Coded Bias successfully demonstrates why unregulated, ill-thought-out and opaque approaches to developing AI can easily lead to flawed models that may not only discriminate against certain groups in society but also fail drastically to meet the application’s original objectives.

Why we’re choosing GNU AGPLv3

At gliff.ai we believe we should practise what we preach: we want our users to trust that the products helping them create FATEful AI systems are FATEful themselves. One obvious way to do this is to make our code open source, enabling people to scrutinise it and feel confident that it does only what we say it does.

A response to “The Use of Artificial Intelligence in Health Care: Trustworthiness” (ANSI/CTA-2090)

This ANSI standard, developed by the US Consumer Technology Association (CTA) Artificial Intelligence (AI) working group, was released in the last week of February 2021. To prove the Trustworthiness of an AI in healthcare, the standard usefully considers Human, Technical and Regulatory Trust separately, but it may end up trying to do too much.