Explainable AI in Optical Coherence Tomography

Understanding the reasoning behind predictions and decisions made by AI algorithms becomes increasingly important as more and more decisions are based on or influenced by AI.
However, in machine vision, the automated classification, detection, and segmentation of objects usually does not provide explainability out of the box, and even less so in unsupervised learning settings. The medical field in particular would benefit from Explainable AI (XAI), as medical decisions usually require explainability.

  • Goal

    Develop new XAI algorithms for supervised and unsupervised learning with Optical Coherence Tomography (OCT) image data sets.


  • Approach

    1. Literature review of current XAI approaches (e.g. backpropagation-based, perturbation-based, distillation-based); a minimal backpropagation-based example is sketched after this list
    2. Identify suitable XAI approaches for OCT image data in
      1. Supervised learning settings (e.g. trainable attention)
      2. Unsupervised learning settings (e.g. autoencoder plus GAN); a reconstruction-error sketch also follows this list
    3. Implement the selected XAI approaches
    4. Benchmark the implemented XAI approaches qualitatively and quantitatively
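
As a first illustration of a backpropagation-based XAI method (step 1 above), the following sketch computes a vanilla gradient saliency map for a single OCT B-scan with PyTorch. The small CNN, input size, and number of classes are placeholder assumptions for illustration only; in the project, any trained OCT classifier would take its place.

```python
# Minimal sketch: vanilla gradient saliency for a supervised OCT classifier.
# The network below is a placeholder; the method only needs a differentiable
# model that maps an image to class scores.
import torch
import torch.nn as nn

class TinyOCTClassifier(nn.Module):
    """Hypothetical CNN standing in for a trained OCT classifier."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def gradient_saliency(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Backpropagate the top class score to the input and return |gradient|."""
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)                     # shape (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()           # gradient of top score w.r.t. input
    return image.grad.abs().squeeze(0)        # shape (1, H, W)

# Usage with a dummy single-channel B-scan (496 x 512 pixels)
model = TinyOCTClassifier()
bscan = torch.randn(1, 1, 496, 512)
saliency = gradient_saliency(model, bscan)
print(saliency.shape)  # torch.Size([1, 496, 512])
```

The saliency map highlights which pixels most strongly influence the predicted class score; perturbation-based and distillation-based methods answer the same question by occluding parts of the input or by training an interpretable surrogate model instead.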
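
For the unsupervised setting (autoencoder plus GAN, step 2 above), a common explanation strategy is the pixel-wise reconstruction error of an autoencoder trained on normal scans; AnoGAN-style methods additionally use a GAN discriminator during training. The sketch below shows only the reconstruction-error map; the architecture and sizes are again placeholder assumptions.

```python
# Minimal sketch: reconstruction-error anomaly map from a convolutional
# autoencoder. An adversarial (GAN) component would enter as an extra training
# loss; it does not change how the explanation map is computed at test time.
import torch
import torch.nn as nn

class OCTAutoencoder(nn.Module):
    """Hypothetical convolutional autoencoder for OCT B-scans."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def anomaly_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Pixel-wise |input - reconstruction| as a simple explanation map."""
    model.eval()
    return (image - model(image)).abs().squeeze(0)

# Usage with a dummy, intensity-normalised B-scan in [0, 1]
model = OCTAutoencoder()
bscan = torch.rand(1, 1, 496, 512)
heatmap = anomaly_map(model, bscan)
print(heatmap.shape)  # torch.Size([1, 496, 512])
```

Regions the autoencoder cannot reconstruct well light up in the map, which can serve as a starting point for both the qualitative and the quantitative benchmarking in step 4.
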
  • Further Information

    • 20% theory, 80% implementation
    • Term paper or master’s thesis, 1 student
    • Recommended prior knowledge:
      • Basics in Machine Learning and Deep Learning
      • Python
      • (optional) Familiarity with deep learning frameworks such as TensorFlow and/or PyTorch

Have we sparked your interest?

I am interested in the student research project Explainable AI in Optical Coherence Tomography and would like to learn more.