An Approach for Automated Segmentation of Retinal Layers In Peripapillary Spectralis SD-OCT Images Using Curve Regularisation

Robert Kromer1*, Shafin Rahman2, Filip Filev1 and Maren Klemm1

1University Ophthalmology Center Hamburg-Eppendorf, Hamburg, Germany

2University of Manitoba, Department of Computer Science, Winnipeg, Canada

Corresponding Author:
Robert Kromer
University Ophthalmology Center Hamburg- Eppendorf
Hamburg, Germany
Tel: +49(0)15222824076
E-mail: rkromer@me.com

Received date: March 06, 2017; Accepted date: April 06, 2017; Published date: April 08, 2017

Citation: Kromer R, Rahman S, Filev F, Klemm M (2017) An Approach for Automated Segmentation of Retinal Layers In Peripapillary Spectralis SD-OCT Images Using Curve Regularisation. Ins Ophthal. Vol. 1 No. 2:10


Abstract

Spectral domain optical coherence tomography (SD-OCT) is used for cross-sectional imaging; in ophthalmology, it is applied to retinal tissue. Extracting structural information about the retinal layers is becoming increasingly important, as this has the potential to expand retinal disease research and improve diagnosis. The purpose of our study was to provide a robust and efficient algorithm for the segmentation of peripapillary retinal Spectralis SD-OCT images. The approach utilised median filtering for pre-processing and curve fitting-based regularisation for layer segmentation. For evaluating the methodology, 40 SD-OCT images were used. To quantify the algorithmic performance, the resulting segmentation was compared against manual segmentation. Comparing the error of the automatic segmentation with the inter-rater variability showed no significant difference. This shows that segmentation of retinal images can achieve high precision. However, existing algorithms do not work well on all pathologies, and automatic segmentation will likely never replace the ophthalmologist.

Keywords

Retinal imaging; Retinal nerve fiber layer; Image processing; Optical coherence tomography (OCT); Nerve fiber layer imaging and analysis

Introduction

Spectral domain optical coherence tomography (SD-OCT) is a useful imaging modality that yields non-invasive, high-resolution cross-sectional images of biological tissue [1]. In ophthalmology, this technology is used for diagnostic purposes, as it allows the collection of in vivo information, permitting the display of tiny details of the retinal structure [2]. Extracting structural information about the retinal layers is becoming increasingly important, as this has the potential to expand retinal disease research and improve diagnosis. At present, quantitative thickness measurements and topographic thickness maps derived from those data are widely used for diagnostic and scientific purposes [3]. Manual segmentation is limited in its clinical value, as it is time-consuming and difficult. As a result, a number of automatic segmentation methods have been proposed to segment retinal layers [4-20].

The most common segmentation methods are based on intensity information [4-13]. However, these methods have difficulties with intensity discontinuities and inconsistencies in the retinal layers. Several improved segmentation approaches have been proposed [14-18]. Fabritius et al. utilised 3D information to overcome this limitation; however, this group only segmented a single layer (with two boundaries). Yang et al. used local and global gradient information to segment nine boundaries [18].

A common problem in image segmentation, especially with cross-sectional retinal images, is speckle noise. New versions of SD-OCT such as Spectralis (SPECTRALIS; Heidelberg Engineering, Heidelberg, Germany) try to minimise noise by averaging a specified number of frames and simultaneously using eye tracking. However, speckle noise still needs to be removed for precise identification of the boundaries of cellular layers of the retina [19-21]. Typically in the literature, this de-noising step is referred to as image pre-processing. The most popular preprocessing methods are median filtering and non-linear filtering [6,11-13,19-24]. While median filtering reduces noise effectively, it decreases image resolution, as well. Non-linear filters can be justified through their ability to preserve edge information.

In our study, we used a median filter for pre-processing due to its simplicity and its property of preserving the important macrostructure of the image, adopted from Herzog et al. [19]. For layer segmentation, a curve fitting-based regularisation modified from Yang et al. [17] is utilised. Our focus is on developing a robust and efficient approach for peripapillary SD-OCT images. As we use Spectralis SD-OCT technology with active frame averaging and eye tracking, which yields images with reduced noise, less pre-processing is required.

The purpose of our study is to present a robust and efficient curve fitting-based regularisation algorithm for segmentation of peripapillary retinal Spectralis SD-OCT images.

Material and Methods

For evaluating the layer segmentation methodology, 40 SD-OCT images of 40 healthy participants were used (one eye was chosen randomly for each subject). The Ethics Committee of the Medical Association of Hamburg ruled that approval was not required for this study, as all data were acquired anonymously. The study followed the recommendations of the Declaration of Helsinki (seventh revision, 64th Meeting, Fortaleza, Brazil) and Good Clinical Practice. Written informed consent was obtained from each patient before any examination procedures were performed. Patients were excluded from the study if they were unable to give informed consent.

Each patient’s history and medical records were carefully reviewed for diseases that could possibly affect the retinal nerve fibre layer (RNFL). Only patients satisfying the inclusion and exclusion criteria were included. The ophthalmic inclusion criteria were (i) best-corrected visual acuity of 0.3 LogMAR or better, (ii) spherical refraction within ± 5.0 dioptres (D), (iii) cylindrical correction within ± 2.0 D, and (iv) normal results for visual field testing (Humphrey Visual Field Analyzer 30-2 [76 points over the central 30° of the visual field]; Humphrey, San Leandro, CA, USA). The exclusion criteria were (i) intensive alcohol abuse, (ii) body mass index > 30 kg/m², (iii) intraocular pressure ≥ 21 mm Hg, (iv) anterior ischaemic optic neuropathy, and (v) congenital abnormalities of the optic nerve.

Patients underwent a series of ophthalmic examinations, including (i) assessment of best-corrected visual acuity by autorefractometry (OCULUS/NIDEK auto-refractometer; OCULUS Optikgeräte GmbH, Wetzlar, Germany) followed by subjective refractometry, (ii) slit lamp-assisted biomicroscopy of the anterior segment, (iii) ophthalmoscopy after medical dilation of the pupil, (iv) visual field testing (Humphrey Visual Field Analyzer 30-2 [76 points over the central 30° of the visual field]; Humphrey, San Leandro, CA, USA), (v) Goldmann applanation tonometry, and (vi) SD-OCT image acquisition (SPECTRALIS; Heidelberg Engineering, Heidelberg, Germany).

The RNFL scans were acquired using SD-OCT (SPECTRALIS software version 6.0a; Heidelberg Engineering). This methodology obtains non-contact, high-resolution frames of the RNFL. The device is a combination of conventional OCT technology and confocal scanning laser ophthalmoscopy. A superluminescent diode was used to emit a light beam with a wavelength of 870 nm. The SD-OCT can acquire up to 40,000 A-scans per second with a depth resolution of 7 μm and a transverse resolution of 14 μm. The confocal scanning laser ophthalmoscopy (cSLO) technology uses a laser to illuminate the retina and scan it point-by-point, delivering a real-time capture of the retina. This reference image was linked and saved to the SD-OCT scan with an eye tracking system (TrueTrack™, Heidelberg Engineering, Heidelberg, Germany). An additional feature, the automatic real-time averaging mode (ART), resulted in even higher image quality. First, the area of interest was identified with cSLO and then locked. Each time the eye was tracked at the same position, scans were taken. Measured data were automatically averaged and artefacts were minimised. In this study, only high-quality data with a total of at least 18 frames were used to provide images with low noise. Due to the high resolution of the scans, the individual layers of the retina were discriminable even in the absence of pupil dilatation. We first positioned the scan centred exactly on the optic disc and enabled the ART mode. For each patient, three high-resolution scans and three high-speed scans were acquired by a single examiner to minimise variability. All images not meeting the following quality criteria were dismissed: (i) the fundus had to be clear before and during image acquisition, (ii) absence of scan and algorithm failures was necessary, (iii) the grey scale saturation of each RNFL needed to be consistent, with the retinal pigment epithelium showing maximal shading, and (iv) no discontinuity of the scanned layer.

All SD-OCT images were manually segmented independently by two experienced observers. These segmented images were established as gold standards (approximate ground truth) and were compared with the automatically segmented layers. Statistical analysis was performed using a commercially available software package (Prism 6 for Mac OSX; GraphPad Software, Version 6.0e). The means and standard deviations were presented and P-values were corrected according to the Bonferroni method to correct for the performance of multiple statistical analyses. All P-values were two-tailed, and a P-value<0.05 was considered to indicate statistical significance. Correlation was performed using Pearson correlation calculations, as the values sampled from the populations followed an approximate Gaussian distribution. The correlation coefficient is indicated by r.

Calculation

The proposed method consists of image pre-processing and layer segmentation. The input images are monochromatic. Image pre-processing involves 1) median filtering (adopted from Herzog et al. [19]), 2) grey-level homogenisation, 3) feature extraction using a thresholding operation, and 4) removal of falsely detected isolated vessel pixels. Layer segmentation uses curve regularisation, as SD-OCT images contain a significant amount of noise and the raw boundary curve therefore features many local minima and maxima. In a typical image, a few layers are more prominent than others. Consequently, these more prominent boundaries are segmented first, followed by the less prominent ones. As the layer boundaries run horizontally, each column of the image matrix is analysed separately. The boundary of an individual layer can then be described by a curve, given by the following polynomial function of degree n:

f(x) = a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_2 x^2 + a_1 x + a_0

where a_i are the coefficients of the polynomial and x represents the horizontal axis of the curve, i.e., the column position across the image width. x can be any integer value from 1 to the image width w. For each x, an individual f(x) is calculated, ultimately leading to a curve that represents the boundary of the layer. This curve fits the boundary exactly when n = w − 1. Using curve fitting-based regularisation with n << w, the curve is smoothed and less distorted by noise.
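As an illustration of this regularisation step, the following Python sketch fits a low-degree polynomial to a noisy boundary curve by least squares. The degree of 4, the width of 768 columns and the use of NumPy's polyfit are assumptions for demonstration only, not the exact implementation used in the study.

```python
import numpy as np

def regularise_boundary(y, degree=4):
    """Fit a low-degree polynomial to a noisy boundary curve.

    y      : 1-D array of boundary positions, one value per image column
    degree : polynomial degree n, chosen such that n << image width w
    """
    x = np.arange(len(y))
    # Least-squares fit; with n << w the fitted curve smooths out the
    # local minima and maxima caused by speckle noise.
    coeffs = np.polyfit(x, y, deg=degree)
    return np.polyval(coeffs, x)

# Example: a noisy, roughly parabolic boundary across a 768-column B-scan
rng = np.random.default_rng(0)
noisy = 100 + 0.0004 * (np.arange(768) - 384) ** 2 + rng.normal(0, 3, 768)
smooth = regularise_boundary(noisy, degree=4)
```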

We used the consensus nomenclature for the classification of retinal layers and bands visible on SD-OCT images of a normal eye proposed by the International Nomenclature for Optical Coherence Tomography (IN-OCT) Panel [25]. Eight boundaries were detected, including: Boundary 1, corresponding to the inner limiting membrane (ILM); Boundary 2, between the nerve fiber layer (NFL) and ganglion cell layer (GCL); Boundary 3, between the Myoid Zone (MZ) and Ellipsoid Zone (EZ); and Boundary 4, between the Interdigitation Zone (IZ) and the RPE/Bruch’s Complex (RPE) (Figure 1).


Figure 1: Illustration of eight intra-retinal boundaries from top to bottom: boundary 1: ILM, boundary 2: NFL/GCL, boundary 3: GCL/IPL, boundary 4: IPL/INL, boundary 5: INL/OPL, boundary 6: IS/OS, boundary 7: OS/RPE and boundary 8: BM/Choroid.

Image pre-processing

The first step in our approach is noise suppression: a median filter was chosen due to its simplicity and its property of preserving the important macrostructure of the image. We applied a 6 × 6 median filter to each image. This suppresses most of the speckle and homogenises the retina and choroid, at the cost of destroying the underlying microstructure.
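A minimal sketch of this filtering step is given below; it assumes the B-scan is available as a 2-D NumPy array and uses SciPy's median_filter as one possible implementation of the 6 × 6 median filter.

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_speckle(bscan):
    """Apply a 6 x 6 median filter to a monochromatic B-scan.

    bscan : 2-D array (rows = depth positions, columns = A-scans)
    """
    # The 6 x 6 window removes most speckle and homogenises the retina
    # and choroid while keeping the layer macrostructure intact.
    return median_filter(bscan.astype(np.float64), size=(6, 6))
```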

The pixel values of the resulting gradient image are not normalised. Therefore, all pixel values were linearly re-mapped to the range 0 to 1, leading to a shade-corrected image with reduced background intensity variations and enhanced contrast (Figure 2a).
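The linear re-mapping could look as follows; this is only a sketch of the normalisation step (the computation of the gradient image itself is not shown here), and the function name and the zero-division guard are our own additions.

```python
import numpy as np

def remap_to_unit_range(img):
    """Linearly re-map pixel values to the range [0, 1]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)   # guard against a constant image
    return (img - lo) / (hi - lo)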


Figure 2: Steps of the proposed method, demonstrated exemplarily for the ILM, ELM and BM/Choroid boundaries.

Further image pre-processing is required to distinguish between layer boundaries and the background. The threshold is dynamically calculated using the following equation:

threshold = mean(gradImg) + 3 × std(gradImg)

Figure 2b shows an image with feature extraction using this thresholding operation.
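A sketch of this dynamic thresholding step is shown below, assuming the normalised gradient image is available as a NumPy array; the function name is hypothetical.

```python
import numpy as np

def extract_boundary_pixels(grad_img):
    """Binarise the normalised gradient image with a dynamic threshold."""
    # threshold = mean(gradImg) + 3 * std(gradImg), as given above
    threshold = grad_img.mean() + 3.0 * grad_img.std()
    return grad_img > threshold     # candidate layer-boundary pixels
```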

A significant number of segments that were not connected to the layers were detected; these were considered noise. To remove these unnecessary segments, we applied a connected-component analysis. All of the unnecessary segments are small, meaning that the total number of pixels inside them is relatively low. Connected component regions were built in the image, and all pixels in a component region were assigned the same label. To remove artefacts, the pixel area of each connected region was measured; during artefact removal, each connected region with an area below p was reclassified as non-layer. An image resulting from the removal of all pixels classified as non-layer is shown in Figure 2c.
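One way to realise this connected-component filtering is sketched below using SciPy's labelling routine; the parameter min_area stands for the threshold p from the text, whose value is not specified and would need to be tuned to the scan resolution.

```python
import numpy as np
from scipy.ndimage import label

def remove_small_components(binary_img, min_area):
    """Discard connected regions whose pixel area is below min_area (p)."""
    labels, _ = label(binary_img)          # assign one label per connected region
    areas = np.bincount(labels.ravel())    # pixel count per label (label 0 = background)
    keep = areas >= min_area
    keep[0] = False                        # never keep the background
    return keep[labels]                    # boolean mask of retained layer pixels
```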

Layer segmentation

The first three visible boundaries are those of the ILM, MZ/EZ and IZ/RPE. For each column of the image matrix, suppose that r is a vector containing the positions of the white pixels in ascending order. We then apply the following difference operation on r to calculate r’, where i = 1, 2, …, length(r) − 1:

r’(i) = r(i+1) − r(i)

At some index i, we get a relatively high value in the r’ array because of the dark band between the two white layers. In this setting, the position at index i is considered the position of the ILM layer boundary. The value r(length(r)) is the position of the MZ/EZ boundary. The IZ/RPE boundary is taken as the median point of the values ranging from r(i) to r(length(r)). The described calculation was repeated for each column of the image matrix, resulting in curves for the ILM, MZ/EZ and IZ/RPE layer boundaries, as shown in Figure 2d. This was followed by the previously described curve fitting-based regularisation, illustrated in Figure 2e.
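The column-wise rule described above might be implemented roughly as follows; this is our reading of the description, and details such as the handling of empty columns are assumptions.

```python
import numpy as np

def detect_prominent_boundaries(column):
    """Locate ILM, MZ/EZ and IZ/RPE positions in one binary image column.

    column : 1-D boolean array of one column of the pre-processed image
             (True = candidate boundary pixel).
    """
    r = np.flatnonzero(column)      # white-pixel positions, ascending order
    if r.size < 2:
        return None                 # empty or near-empty column (assumption)
    r_diff = np.diff(r)             # r'(i) = r(i+1) - r(i)
    i = int(np.argmax(r_diff))      # largest gap = dark band between the white layers
    ilm = int(r[i])                 # position at index i (ILM, per the text)
    mz_ez = int(r[-1])              # last white pixel, r(length(r)) (MZ/EZ)
    iz_rpe = int(np.median(r[i:]))  # median of positions from r(i) to r(length(r))
    return ilm, mz_ez, iz_rpe
```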

Between the ILM and MZ/EZ boundaries, two further layers are distinguishable, separated by the NFL/GCL boundary. Assuming that ILM and MZ/EZ are vectors representing the curves of the ILM and MZ/EZ boundaries, the column positions from ILM(i) to MZ/EZ(i) need to be analysed for i = 1, 2, …, n in order to find NFL/GCL(i). Within that range of each column, the second-order derivative of the original image pixel values was calculated.

The vector position exceeding a predefined threshold was assigned to NFL/GCL(i): with d being the vector of the gradient of the column of the gradient image, the threshold is mean(d) + 0.7 × std(d), resulting in a curve for NFL/GCL. After curve fitting-based regularisation, the curve is less dependent on noise. All segmented and calculated boundaries are presented in Figure 3.
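A sketch of this step for a single column is shown below; the use of np.gradient for the second-order derivative and the choice of the first position above the threshold are assumptions, since the text only states that a position exceeding the threshold is assigned to NFL/GCL(i).

```python
import numpy as np

def detect_nfl_gcl(column_values, ilm_row, mz_ez_row):
    """Find the NFL/GCL position in one image column between ILM and MZ/EZ.

    column_values : 1-D array of pixel intensities of the original column
    """
    segment = column_values[ilm_row:mz_ez_row].astype(np.float64)
    if segment.size < 3:
        return None
    d = np.gradient(np.gradient(segment))   # second-order derivative of the intensities
    threshold = d.mean() + 0.7 * d.std()
    above = np.flatnonzero(d > threshold)
    if above.size == 0:
        return None
    return ilm_row + int(above[0])          # first position above the threshold (assumption)
```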


Figure 3: Exemplary demonstration of all segmented and calculated boundaries using the proposed method.

Results

In order to quantify the algorithmic performance of the proposed method, the resulting segmentation was compared against its corresponding ground truth image. This image was obtained by the manual creation of a boundary mask in which all boundary pixels are set to one and all non-boundary pixels are set to zero.

Thus, the performance of automated layer segmentation could be assessed. Each layer was represented as a vector of pixel positions (the y coordinate of the layer in each column). The vector of each boundary calculated by the algorithm was compared with the boundary vector of the manually segmented image in terms of the Euclidean distance between the vectors. A lower distance value indicates better performance. The performance results are presented in Table 1 (first observer vs. second observer, and algorithm performance vs. first observer). The average of all distances for the first observer vs. the second observer was 3.443 ± 0.9058; for the algorithm vs. the first observer it was 3.661 ± 0.9043. Comparing these average distances showed no significant difference (p > 0.05). The best and worst algorithm segmentations are presented in Figures 4 and 5.

Layer boundary | Algorithm performance vs. first observer | First observer vs. second observer
ILM | 2.370 ± 0.8334 | 2.344 ± 0.8280
NFL/GCL | 4.152 ± 1.154 | 3.244 ± 0.9125
GCL/IPL | 3.451 ± 0.9078 | 3.266 ± 0.9671
IPL/INL | 3.482 ± 0.9515 | 3.562 ± 0.9668
INL/OPL | 3.724 ± 1.128 | 3.661 ± 1.208
IS/OS | 5.379 ± 1.151 | 4.054 ± 1.296
OS/RPE | 4.397 ± 1.248 | 4.523 ± 1.473
BM/Choroid | 5.649 ± 1.452 | 5.195 ± 1.977

Table 1: Average performance measures (values are Euclidean distances between boundary vectors, given as mean ± standard deviation).
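For illustration, the distance measure used in Table 1 could be computed as in the following sketch; whether the reported values are raw Euclidean norms or additionally normalised (e.g., per column) is not specified in the text, so this shows the plain Euclidean distance.

```python
import numpy as np

def boundary_distance(algorithm_boundary, manual_boundary):
    """Euclidean distance between two boundary vectors (one y-coordinate per column)."""
    a = np.asarray(algorithm_boundary, dtype=np.float64)
    m = np.asarray(manual_boundary, dtype=np.float64)
    return float(np.linalg.norm(a - m))     # lower distance = better agreement
```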


Figure 4: Euclidean distances of individual comparisons between algorithm performance and the first observer for all segmented boundaries.


Figure 5: Segmentation results: Best and worst case. Pre-segmentation image quality is distinctly lower in the worst case compared to the best case.

Discussion and Conclusion

Although OCT technology has been actively researched since 1991, the segmentation of retinal layers has only been fully explored since the beginning of this millennium. Segmentation, however, remains one of the most difficult, and at the same time most frequently required, steps in SD-OCT image analysis. We have introduced a robust boundary segmentation method employing median filtering and curve fitting-based regularisation on noise-reduced, averaged SD-OCT peripapillary scans. The comparison of the average Euclidean distances between boundary vectors of manually segmented images and those produced by the algorithm showed no significant difference. This indicates that the error of the automatic segmentation is comparable to the inter-rater variability.

Direct comparison with the literature is limited, as different settings, such as different areas of the retina or different SD-OCT devices, are utilised in each method. Generally, no single segmentation method exists that can be expected to work equally well for all tasks [26]. Furthermore, no standardised approach exists for measuring differences between boundaries: in the literature, units of μm, pixels and voxels have been used [21].

The high inter-rater variability, and hence the potential variability of automatic boundary detection, might be explained by the fact that boundary detection is complicated even when performed manually. It is important to note that several factors might have contributed to the small difference values, including machine time (SD-OCT), the noise-reduced averaged acquisition technique (ART mode) and the high image quality.

The results of our approach and the literature show that segmentation for retinal images, whether it is peripapillary (as in our approach) or related to other regions of the retina [21], can achieve high precision. However, it should be noted that results are very much dependent upon image quality and machine time.

Potential limitations of our study are that several boundaries are calculated, instead of being segmented, in order to improve speed. Furthermore, we have only applied our approach to healthy participants without any retinal pathology. Additional future work needs to analyse the algorithm performance for different retinal diseases and the variation in those conditions. Strengths of this study are that high-quality images (taken by an experienced, trained investigator) and manually segmented images from two observers were used. Moreover, the simplicity of the presented method should be noted, as this leads to a short computational time (3-5 s) using available personal computer technology.

In summary, we demonstrate that peripapillary layer segmentation of high-quality SD-OCT scans using curve regularisation performs well compared to manual segmentation. There is no doubt that existing algorithms do not work well on all pathologies; such limitations make them more appropriate for research applications than for clinical use. Automatic segmentation will likely never replace the ophthalmologist; however, obtaining more information from less complicated data could offer valuable improvements to patient care.

References
