Objective To observe and analyze the accuracy of a deep learning-based optic disc localization and segmentation method for fundus images.
Methods A deep learning-based optic disc localization and segmentation method was trained and evaluated on the ORIGA dataset. A deep convolutional neural network (CNN) was built on the Caffe deep learning framework. A sliding window cut each original ORIGA image into many small patches, and the deep CNN judged whether each patch contained a complete optic disc structure, thereby locating the disc region. To avoid interference from blood vessels in disc segmentation, the vessels in the disc region were removed before the disc boundary was segmented. A deep segmentation network based on pixel-wise classification was then used to segment the optic disc in the fundus images. Localization accuracy was defined as T/N, where T was the number of fundus images with a correctly located optic disc and N was the total number of fundus images used for localization. The overlap error was used to measure the difference between the segmented disc boundary and the true disc boundary.
Results On the ORIGA dataset, optic disc localization accuracy reached 99.6%, and the average overlap error of disc segmentation was 7.1%. The mean errors of the computed cup-to-disc ratio were 0.066 for glaucoma images and 0.049 for normal images. Disc segmentation took an average of 10 ms per image.
Conclusion The algorithm locates the optic disc region quickly and accurately, and also segments the disc boundary with high accuracy.
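The sliding-window localization and the T/N accuracy metric described above can be sketched as follows. This is a minimal illustration, not the authors' Caffe model: the trained CNN is replaced by a stand-in `score_fn` callable, and the image, window size and stride are invented for the toy example.

```python
import numpy as np

def locate_disc(image, window, stride, score_fn):
    """Slide a window over the image and return the top-left corner of
    the patch that `score_fn` (a stand-in for the trained CNN) scores
    highest as containing a complete optic disc."""
    h, w = image.shape[:2]
    best_score, best_pos = -np.inf, None
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = image[y:y + window, x:x + window]
            s = score_fn(patch)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos

def positioning_accuracy(correct, total):
    """Localization accuracy = T / N, as defined in the abstract."""
    return correct / total

# Toy example: score = mean brightness, so the bright square is found.
img = np.zeros((64, 64))
img[40:56, 8:24] = 1.0  # synthetic "disc"
print(locate_disc(img, 16, 8, lambda p: p.mean()))  # -> (40, 8)
print(positioning_accuracy(996, 1000))              # -> 0.996
```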
Objective To apply a multi-modal deep learning model to automatically classify ultra-widefield fluorescein angiography (UWFA) images of diabetic retinopathy (DR).
Methods A retrospective study. From 2015 to 2020, 798 UWFA images of 399 eyes from 297 DR patients examined at the Eye Center of Renmin Hospital of Wuhan University were used as the training and test sets of the model. Among them, 119 eyes had no retinopathy, 171 had non-proliferative DR (NPDR), and 109 had proliferative DR (PDR). An image-level supervised deep learning model, built by jointly optimizing a CycleGAN and a convolutional neural network (CNN) classifier, localized and assessed fluorescein leakage and non-perfusion regions in early- and late-phase UWFA images of DR-affected eyes. The improved CycleGAN converted abnormal images with lesions into lesion-free normal images, yielding difference images that contained the lesion areas; the CNN classifier then classified these difference images to produce the prediction. Five-fold cross-validation was used to evaluate the classification accuracy of the model. The marker areas shown in the difference images were quantified to assess the correlation of the ischemia index and leakage index with DR severity.
Results The generated fake normal images removed essentially all lesion areas while preserving the normal vascular structure; the difference images intuitively revealed the distribution of biomarkers; the heat maps highlighted the leakage areas, whose locations largely matched the lesion areas in the original images. Five-fold cross-validation showed an average classification accuracy of 0.983. Further quantitative analysis of the marker areas showed that the ischemia index and leakage index were significantly positively correlated with DR severity (β=6.088, 10.850; P<0.001).
Conclusion The constructed multi-modal jointly optimized model can accurately classify NPDR and PDR and precisely locate potential biomarkers.
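The difference-image step of the pipeline above can be sketched as follows. This is a minimal sketch, not the paper's model: the CycleGAN generator is replaced by a pre-computed lesion-free array, and `lesion_index` is an illustrative pixel-fraction measure, since the paper's exact ischemia/leakage index definitions are not given in the abstract.

```python
import numpy as np

def difference_image(original, generated_normal):
    """Absolute difference between the original UWFA frame and the
    generated lesion-free image; bright pixels mark candidate lesions."""
    return np.abs(original.astype(float) - generated_normal.astype(float))

def lesion_index(diff, threshold=0.1):
    """Fraction of pixels flagged as lesion in the difference image
    (an illustrative stand-in for the quantified marker area)."""
    return float((diff > threshold).mean())

# Toy example: a stand-in "generator" output with the lesion removed.
original = np.zeros((10, 10))
original[2:4, 2:4] = 1.0            # 4-pixel synthetic lesion
fake_normal = np.zeros((10, 10))    # lesion-free counterpart
diff = difference_image(original, fake_normal)
print(lesion_index(diff))  # -> 0.04 (4 lesion pixels out of 100)
```

In the actual model, such difference images are the input to the CNN classifier rather than being thresholded directly.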
Objective To propose an automatic measurement of global and local tessellation density on color fundus images based on a deep convolutional neural network (DCNN).
Methods An applied study. An artificial intelligence (AI) database was constructed containing 1 005 color fundus images captured from 1 024 eyes of 514 myopic patients at the Northern Hospital of Qingdao Eye Hospital from May to July 2021. The images were preprocessed with an RGB color-channel re-calibration method (CCR algorithm), the CLAHE algorithm in Lab color space, a Retinex algorithm with multiple iterations of illumination estimation, and a multi-scale Retinex algorithm. The effects of these enhancement methods, and of the Dice, edge overlap rate and clDice losses, on tessellation segmentation were compared and observed. A tessellation segmentation model for extracting the tessellated region from the full fundus image and a tissue detection model for locating the optic disc and macular fovea were built. The fundus tessellation density (FTD), macular tessellation density (MTD) and peripapillary tessellation density (PTD) were then calculated automatically.
Results With CCR preprocessing and the combined training-loss strategy, the Dice coefficient, accuracy, sensitivity, specificity and Youden index for fundus tessellation segmentation were 0.723 4, 94.25%, 74.03%, 96.00% and 70.03%, respectively. Compared with manual annotations, the mean absolute errors of the automatically measured FTD, MTD and PTD were 0.014 3, 0.020 7 and 0.026 7, and the corresponding root mean square errors were 0.017 8, 0.032 3 and 0.036 5.
Conclusion The DCNN-based segmentation and detection method can automatically measure tessellation density in global and local fundus regions of myopic patients, assisting more accurate clinical monitoring and evaluation of how fundus tessellation changes relate to the progression of myopia.
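The density measurements above can be sketched as follows. This is a minimal sketch under stated assumptions: density is taken as tessellated pixels divided by region pixels, and the macular/peripapillary regions are modeled as circular ROIs around the detected fovea and disc; the paper's exact ROI definitions may differ, and the masks here are synthetic.

```python
import numpy as np

def tessellation_density(tess_mask, region_mask):
    """Density = tessellated pixels inside the region / region pixels."""
    region = region_mask.astype(bool)
    if region.sum() == 0:
        return 0.0
    return float((tess_mask.astype(bool) & region).sum() / region.sum())

def circular_roi(shape, center, radius):
    """Circular region mask, e.g. around the detected fovea or disc."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

# Toy fundus: tessellation covers the left half of a 100x100 field.
tess = np.zeros((100, 100), bool)
tess[:, :50] = True
ftd = tessellation_density(tess, np.ones((100, 100)))  # global density
mtd = tessellation_density(tess, circular_roi((100, 100), (50, 50), 10))
print(ftd)  # -> 0.5
```

The same `tessellation_density` call serves FTD, MTD and PTD; only the region mask changes.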