Fovea Localization for Age-related Macular Degeneration (AMD) and Non-AMD Patients

2021, Apr 9    

This blog presents a solution to the ADAM challenge hosted on https://grand-challenge.org. The ADAM challenge focuses on the investigation and development of algorithms for diagnosing Age-related Macular Degeneration (AMD) and segmenting lesions in fundus photos from AMD patients. In this blog, we focus on one subtask of the problem: localization of the fovea. The fovea centralis is a small central pit composed of closely packed cones in the eye, located in the center of the macula lutea of the retina.[1] The fovea is responsible for sharp central vision (also called foveal vision), which is necessary in humans for activities where visual detail is of primary importance, such as reading and driving. The fovea is surrounded by the parafovea belt and the perifovea outer region.[2]

Location of Fovea for Non-AMD
Figure 1: Location of Fovea for Non-AMD

Code can be found here: https://bit.ly/2OZ8uer
More about the competition: https://amd.grand-challenge.org/Home/

We will explore the following:

  • Exploratory data analysis
  • Data transformation for object detection
  • Creating custom datasets
  • Creating the model
  • Defining the loss, optimizer, and IOU metric
  • Training and evaluation of the model
  • Deploying the model

Exploratory data analysis

The dataset contains a CSV file with three columns: the first gives the image file name, and the second and third give the x and y coordinates of the fovea's center. There are 311 Non-AMD patients and 89 AMD patients.
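
A quick way to reproduce this class count is to parse the file-name prefix from the CSV. The snippet below is a sketch: the column names (`imgName`, `Fovea_X`, `Fovea_Y`) and the A/N file-name convention follow the challenge data as described above, and the inline CSV stands in for the real file.

```python
import io
import pandas as pd

# Stand-in for the challenge CSV: image name, then the x and y
# coordinates of the fovea center (layout assumed from the description).
csv_text = """imgName,Fovea_X,Fovea_Y
A0001.jpg,1182.3,1022.9
N0001.jpg,1098.0,1180.1
N0002.jpg,0.0,0.0
"""

df = pd.read_csv(io.StringIO(csv_text))

# Infer the class from the file-name prefix: 'A' = AMD, 'N' = Non-AMD.
df["label"] = df["imgName"].str[0].map({"A": "AMD", "N": "Non-AMD"})
print(df["label"].value_counts())
```

On the full dataset the same `value_counts()` call yields the 311 / 89 split quoted above.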

X, Y coordinates of the fovea's center for the two classes
Figure 2: X, Y coordinates of the fovea's center for the two classes.

The original image height varies from 1400 to 2200 pixels, and the width ranges from 1400 to 2400 pixels.

A look into the dataset with image sizes
Figure 3: A look into the dataset with image sizes
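
The size ranges above can be collected by reading each image's dimensions. The sketch below uses Pillow; a single in-memory image stands in for looping over the dataset folder.

```python
from PIL import Image

# Collect (height, width) for each fundus photo so we can see the range
# of native resolutions before choosing a resize target.
sizes = []
for img in [Image.new("RGB", (2124, 2056))]:  # PIL size is (width, height)
    w, h = img.size
    sizes.append((h, w))

print("height range:", min(s[0] for s in sizes), "-", max(s[0] for s in sizes))
print("width  range:", min(s[1] for s in sizes), "-", max(s[1] for s in sizes))
```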

Data Transformation and Augmentation [Results only]

Data augmentation and transformation are critical steps in training deep learning algorithms, especially on small datasets. The iChallenge-AMD dataset used here has only 400 images, which is considered small. As a reminder, we will later set aside 20 percent of this dataset for evaluation. Since the images have different sizes, we first resize all of them to a pre-determined size. Then we can apply a variety of augmentation techniques, such as horizontal flipping, vertical flipping, and translation, to expand the effective dataset during training.
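
The 80/20 split mentioned above can be sketched on indices, so that image files and their fovea labels stay aligned. The seed and variable names here are illustrative, not taken from the original code.

```python
import random

n_images = 400
indices = list(range(n_images))
random.Random(42).shuffle(indices)  # fixed seed for a reproducible split

n_val = int(0.2 * n_images)         # 20 percent held out for evaluation
val_idx, train_idx = indices[:n_val], indices[n_val:]
print(len(train_idx), len(val_idx))  # 320 80
```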

  • Image Resizing:

    • We will resize the images to (256, 256). We also have to scale the labels by the same ratio. Refer to the code link.
    • Image Resizing
      Figure 4: Image Resizing
  • Random Vertical Flipping

    Vertical Flip
    Figure 5: Vertical Flip
  • Random Horizontal Flipping

    Horizontal Flip
    Figure 6: Horizontal Flip
  • Random Shifting

    Random Shifting
    Figure 7: Random Shifting
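
The transforms above share one subtlety: whenever the image changes, the fovea label must change with it. A minimal sketch of resize plus flips, with the coordinate math written out (function names are mine, not from the linked code):

```python
import random
from PIL import Image

def resize_with_label(img, label, target=(256, 256)):
    """Resize the image and rescale the (x, y) fovea center by the same ratio."""
    w, h = img.size
    x, y = label
    img = img.resize(target)
    return img, (x * target[0] / w, y * target[1] / h)

def random_hflip(img, label, p=0.5):
    """Mirror left-right: the x coordinate reflects, y is unchanged."""
    if random.random() < p:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
        label = (img.size[0] - label[0], label[1])
    return img, label

def random_vflip(img, label, p=0.5):
    """Mirror top-bottom: the y coordinate reflects, x is unchanged."""
    if random.random() < p:
        img = img.transpose(Image.FLIP_TOP_BOTTOM)
        label = (label[0], img.size[1] - label[1])
    return img, label
```

Random shifting works the same way: translate the image, then add the same pixel offsets to the label.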

Creating the Model

Model Architecture
Figure 8: Model Architecture

The following code snippet shows the implementation:

Model implementation
Figure 9: Model implementation

The Loss, Optimizer, and IOU Metric

  • Loss Curve
    Loss Curve
    Figure 10: Loss Curve
  • Accuracy Curve
    Accuracy Curve
    Figure 11: Accuracy Curve
  • IOU Metric (shown as the label in the figure)
    IOU Metric
    Figure 12: IOU Metric
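
Because the model predicts a center point rather than a box, one common way to get an IOU metric is to draw a fixed-size box around each predicted and ground-truth center and compute the overlap of those boxes. The sketch below assumes that approach; the 50-pixel box size and the SmoothL1 loss choice are my assumptions, not confirmed by the figures above.

```python
import torch
import torch.nn as nn

def centers_to_boxes(centers, size=50):
    """(N, 2) centers -> (N, 4) [x1, y1, x2, y2] boxes of a fixed size."""
    half = size / 2
    return torch.cat([centers - half, centers + half], dim=1)

def iou(pred_centers, true_centers, size=50):
    """IOU between fixed-size boxes centered on predicted/true points."""
    a = centers_to_boxes(pred_centers, size)
    b = centers_to_boxes(true_centers, size)
    x1 = torch.max(a[:, 0], b[:, 0])
    y1 = torch.max(a[:, 1], b[:, 1])
    x2 = torch.min(a[:, 2], b[:, 2])
    y2 = torch.min(a[:, 3], b[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    union = 2 * size * size - inter  # both boxes have area size*size
    return inter / union

# Assumed training pieces: SmoothL1 regression loss and an Adam optimizer.
loss_fn = nn.SmoothL1Loss()
# optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

pred = torch.tensor([[100.0, 100.0]])
true = torch.tensor([[100.0, 100.0]])
print(iou(pred, true))  # perfect overlap -> IOU of 1
```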

References: