Multimodal deep learning (DL) networks can segment geographic atrophy (GA) lesions with accuracy comparable to that of experienced graders, according to a study published in Translational Vision Science & Technology.

Using fundus autofluorescence (FAF) and near-infrared (NIR) images, researchers assessed DL-based methods for accurate segmentation of GA lesions. They conducted a retrospective analysis of imaging data from the eyes of patients enrolled in natural history studies of GA and in the Proxima A and B trials. The team used two multimodal DL networks to automatically segment GA lesions on FAF, and compared the networks' segmentations against annotations made by experienced graders.

The training data set included 940 image pairs (FAF and NIR) from 183 patients in Proxima B; the test data set consisted of 497 image pairs from 154 patients in Proxima A. Performance was evaluated with Dice coefficient scores, the Pearson correlation coefficient (r), and Bland-Altman plots. On the test set, Dice scores for the DL network-to-grader comparison ranged from 0.89 to 0.92 at the screening visit; the Dice score between graders was 0.94.
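The Dice coefficient used to evaluate agreement is defined as twice the overlap between two segmentation masks divided by their combined size. A minimal sketch of that calculation, using hypothetical toy masks (not data from the study):

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: a model "lesion" mask vs. a grader annotation on a 4x4 grid
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
grader = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, grader), 3))  # 2*3/(4+3) ≈ 0.857
```

A Dice score of 1.0 indicates perfect overlap, so the reported network-to-grader scores of 0.89 to 0.92 sit just below the 0.94 agreement between human graders.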