[Image: close-up of a person's eye]

Using gliff.ai to Realise the Potential of OCT Data

The aim of the OCT data project is to create a ground-truth annotated dataset that will be used in the future development of a fully automated AI tool capable of segmenting and quantifying different features in retinal images and scans…

Interview with Dr Phil Jackson, Software Engineer

Have you ever wondered what it’s like working at gliff.ai? We asked our Software Engineer, Dr Phil Jackson, about his experience.

Phil is our lead developer for a series of contracts and has a PhD in Machine Learning from Durham University.

[Image: Phil Jackson with a cat]
Dr Phil Jackson, Software Engineer
Can you describe what you do on a typical day within gliff.ai’s tech team?

These days I mostly write TypeScript, occasionally Python. We’re building out the platform right now so I get to implement lots of new features, with a large side of code review.

What do you like best about being a Software Engineer?

I get paid to make things. Every day I spend working on the product, it improves a little bit.

And based upon your personal experience, how would you describe the culture within gliff.ai?

Relaxed, informal, occasionally frantic but good work/life balance overall; we’re not expected to regularly work evenings/weekends.

So why did you decide to study Machine Learning (ML)?

Machine Learning followed naturally from the more traditional computer vision work I did in my Masters – it was clear at the time that vision would be completely dominated by ML techniques going forward, and it was all very new and exciting. I’ve also had some interest in AI for pretty much all my life, having experimented a bit with artificial neural nets shortly after I first learned to write code.

What can you tell me about ‘Style Augmentation’?

Style Augmentation (SA) is a data augmentation technique for Machine Learning vision tasks, which works by randomising the colours and textures in training images while preserving shape.

Other data augmentation techniques mostly perform geometric distortions (e.g. rotation, flipping, cropping/zooming), but there is evidence that convolutional neural networks (CNNs) often rely too much on texture (there is a good review of the literature in https://arxiv.org/abs/1811.12231, which does something similar to SA and was produced concurrently). So if we randomise texture, the ML model can no longer rely on it and has to rely on shape instead, which is more reliably conserved between different domains.

You can think of it as making a harder training set for the model to train on: texture randomisation increases the diversity of the input, so when training is done, the model performs better on images it hasn't seen before.
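The full SA method randomises style with a neural style transfer network, but the core idea — perturb colour/texture statistics while leaving spatial structure untouched — can be illustrated with a much simpler sketch. This is my own minimal stand-in, not the actual SA implementation; the function name and parameters are invented for illustration:

```python
import numpy as np

def colour_randomise(image, strength=0.5, rng=None):
    """Crude colour randomisation in the spirit of style augmentation.

    Each output channel becomes a random blend of the input channels,
    plus a random shift. The pixel grid is never touched, so shapes
    and edges are preserved while colours change.

    image: float array of shape (H, W, 3) with values in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random 3x3 channel-mixing matrix near the identity.
    mix = np.eye(3) + strength * rng.standard_normal((3, 3))
    # Small random per-channel offset.
    shift = 0.1 * strength * rng.standard_normal(3)
    out = image @ mix.T + shift
    # Clip back into the valid intensity range.
    return np.clip(out, 0.0, 1.0)
```

Applied with a fresh random draw per training image, this gives the model colour statistics it cannot depend on — a pale shadow of full texture randomisation, but the same principle.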

And when you’re not involved in software engineering, what do you get up to?

I like to play badminton and Kerbal Space Program… I enjoy sci-fi, and I also like to discuss issues relating to basic income and global development. In particular, I really enjoyed reading Factfulness by Hans Rosling, which challenged my perceptions of the developing world and provided some very compelling data visualisations. These days, if I need a data fix I go to ourworldindata.org.