Explainable AI for Anomaly Detection

Machine learning, specifically deep artificial neural networks, has been used extensively over the last decade to tackle problems in computer vision. Medical imaging is one field of application whose large potential is only slowly starting to be understood and exploited. Within medical imaging, Optical Coherence Tomography (OCT) is an imaging technique based on low-coherence interferometry. It is employed in ophthalmology, where it yields large quantities of detailed images from within the eye, in particular of the retina.
Ophthalmology investigates various eye anomalies, e.g. eye diseases, which are distinguishable on OCT images. Consequently, numerous recent supervised machine learning approaches have trained artificial neural networks for classification and semantic image segmentation tasks. Building on this, the explainability of such models is investigated, e.g. with Layer-wise Relevance Propagation (LRP).
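
To give an idea of how LRP works: relevance assigned to a layer's output is redistributed to its inputs in proportion to their contributions to the pre-activations. Below is a minimal sketch of the LRP epsilon rule for dense layers; the function lrp_epsilon and the toy two-layer network are illustrative assumptions, not part of this project description.

    import numpy as np

    def lrp_epsilon(a, W, b, R_out, eps=1e-6):
        """Redistribute relevance R_out from a dense layer's output to its input."""
        z = a @ W + b                              # pre-activations, shape (d_out,)
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser (avoids /0)
        s = R_out / z                              # relevance per unit of pre-activation
        c = W @ s                                  # redistribute along the weights
        return a * c                               # relevance per input, shape (d_in,)

    # Toy usage: propagate an output score back through a two-layer ReLU network.
    rng = np.random.default_rng(0)
    a0 = rng.random(8)                             # input features (e.g. pixels)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    a1 = np.maximum(a0 @ W1 + b1, 0.0)             # hidden activations
    score = a1 @ W2 + b2                           # network output

    R1 = lrp_epsilon(a1, W2, b2, score)            # output -> hidden relevance
    R0 = lrp_epsilon(a0, W1, b1, R1)               # hidden -> input relevance
    print(R0)                                      # per-input relevance "heatmap"
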
On the other hand, many eye anomalies are rare and training data is scarce, for example for subretinal/intraretinal fluid, hyperreflective foci, and subretinal hyperreflective material. In such cases, anomalies can be detected in an unsupervised learning setting through anomaly detection, e.g. by employing autoencoders (possibly in combination with Generative Adversarial Networks, GANs). However, the study of explainability in these settings has been limited so far, even though questions like "which areas of the input image led to the classification of an image as an outlier/anomaly?" are very important for medical professionals.
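
To make this concrete, below is a minimal sketch of autoencoder-based anomaly detection in PyTorch, assuming 1-channel 128x128 B-scans; the ConvAutoencoder architecture and all sizes are illustrative assumptions. The idea is to train the compression-and-reconstruction function on healthy scans only, so that pathological structures reconstruct poorly.

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        """Compression-and-reconstruction function for 1x128x128 scans."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                                   output_padding=1), nn.ReLU(),       # 32 -> 64
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                                   output_padding=1), nn.Sigmoid(),    # 64 -> 128
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = ConvAutoencoder()
    # ... train on scans of healthy eyes only, e.g. with an MSE loss ...

    x = torch.rand(1, 1, 128, 128)            # one (synthetic) B-scan
    with torch.no_grad():
        x_hat = model(x)

    error_map = (x - x_hat).pow(2).squeeze()  # pixel-wise reconstruction error
    anomaly_score = error_map.mean().item()   # scalar score; threshold to flag scans

Note that error_map itself is already a crude explanation: regions the model cannot reconstruct are exactly the candidate areas a medical professional would want highlighted.
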

  • Goal

    Developing and investigating algorithms in the domain of Explainable AI (XAI) for anomaly detection in OCT image data.

  • Approach

    1. Conduct a literature review of modern XAI approaches with a focus on unsupervised learning (in particular anomaly detection)
    2. Identify suitable XAI approaches applicable to anomaly detection on OCT image data
    3. If necessary, define novel XAI approaches applicable to anomaly detection on OCT image data (e.g. by adapting LRP to the unsupervised learning setting)
    4. Perform anomaly detection on OCT image data (e.g. by learning a compression-and-reconstruction function through an autoencoder, possibly in combination with a GAN)
    5. Apply the previously identified XAI approaches to the anomaly detection algorithms on OCT image data (a minimal sketch follows this list)
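
    As a minimal illustration of step 5, the sketch below attributes the reconstruction-error anomaly score to input pixels via a plain gradient-based saliency map. This is one possible stand-in attribution method, not the LRP adaptation envisaged in step 3; the tiny model and all shapes are hypothetical.

        import torch
        import torch.nn as nn

        model = nn.Sequential(                  # tiny stand-in autoencoder
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

        x = torch.rand(1, 1, 128, 128, requires_grad=True)  # synthetic B-scan
        anomaly_score = (x - model(x)).pow(2).mean()  # reconstruction-error score
        anomaly_score.backward()                      # gradients w.r.t. input pixels
        saliency = x.grad.abs().squeeze()             # which pixels drive the score?
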
  • Further Information

    • 20% theory, 80% implementation
    • Term paper or master’s thesis, 1 student
    • Recommended prior knowledge:
      • Basics of Machine Learning and Deep Learning
      • Python
      • (optional) familiarity with Deep Learning frameworks such as TensorFlow and/or PyTorch

Have we sparked your interest?

I would like to apply for the position "Explainable AI for Anomaly Detection".