We are entering the era of Web3 – the latest evolution of the Internet – characterised by decentralised information and shared metadata. Web3 is about increasing cooperation and efficiency, and a critical ingredient in Web3 is the use of advanced technologies such as Artificial Intelligence (AI).
At gliff.ai, we are particularly focused on how AI is developed for medical purposes. gliff.ai works to make it easier for doctors to develop their own AI for use in their own practice. We take the pain out of developing AI in a regulated space such as medicine.
Why? Because AI can improve outcomes for patients and support medical practitioners around the world. It can also save health institutions money and decrease inequality in healthcare. But it is not enough simply to develop a medical AI. Before it can be used in real-life settings, an AI must satisfy the regulatory standards for medical software, proving that it is effective and safe for clinical use.
So how do we develop AI for real-life healthcare and medical uses? This is where other key aspects of Web3 become central components of gliff.ai's software.
Gathering all the people you need into one place is an outstanding challenge in AI development: they may work in different organisations, fields or countries. The ability to share data and work among team members and collaborators – anywhere in the world – is a major benefit that we offer our users.
AI is created from patient data, so the preservation of patient confidentiality has to be central to any platform used for developing medical AI. Going further than other MLOps software platforms, gliff.ai offers end-to-end encryption to ensure that highly sensitive and commercially valuable data – such as patient scans – are always secured.
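The core of the end-to-end principle is that data is encrypted on the user's device and only ciphertext ever reaches the server, which never holds the key. gliff.ai's actual implementation is not shown here; the following is a toy Python sketch of that principle, using a one-time pad purely as a stand-in for a real authenticated cipher such as AES-GCM.

```python
import secrets


def client_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt on the client. The key stays on the user's device;
    only the ciphertext would ever be uploaded.

    A random one-time pad (XOR) is a toy stand-in here for a real
    authenticated cipher such as AES-GCM.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key


def client_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt on the client with the locally held key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Because the server only ever sees `ciphertext`, the platform operator cannot read the underlying scan even in principle – which is exactly the "we can't even see users' data" property described below.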
At gliff.ai, our aim is to enable those with data to use it for AI development. We don't want to aggregate users' data. We don't want to monetise users' data. Being end-to-end encrypted, we can't even see users' data. Mankind stands to benefit greatly from data, but creating data swamps where people lose control of their data is unacceptable. That's why data belonging to gliff.ai users will always belong to those users.
Transparency is not just a word we talk about; we put it into practice. Transparency is particularly important when creating AI for use in a highly regulated environment such as medical devices. Firstly, our code for manipulating the data is open source, so that nothing is obscured. Secondly, gliff.ai provides users with a pixel-level audit trail for the datasets generated, so that it is clear who manipulated what, when, where and how.
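An audit trail of this kind is typically made tamper-evident by chaining records together: each entry stores the hash of the entry before it, so altering any record breaks every later link. The sketch below is a minimal, hypothetical illustration of that pattern in Python – the field names (`user`, `action`, `pixels`) are assumptions for illustration, not gliff.ai's actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone


def append_entry(log: list, user: str, action: str, pixels: list) -> list:
    """Append a tamper-evident record: each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "user": user,        # who
        "action": action,    # what / how
        "pixels": pixels,    # where (e.g. edited pixel coordinates)
        "time": datetime.now(timezone.utc).isoformat(),  # when
        "prev": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log


def verify(log: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True
```

With this structure, a regulator or collaborator can re-verify the entire history of a dataset without trusting the platform operator – which is what makes a pixel-level audit trail useful evidence in a regulated setting.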