Is DR Just the Start for Google Brain?

A team of investigators recently reported online in the Journal of the American Medical Association1 that a Google algorithm allows computers to diagnose diabetic retinopathy (DR) from retinal photographs. The technology is not meant to replace doctors; rather, Google hopes it will assist them, especially in underserved areas. Google’s first priority is to distribute the technology in such areas so that resource-challenged clinics can concentrate on the patients whose screening shows they need treatment, and to reach populations that would not otherwise receive this type of care.

Ehsan Rahimy, MD, MBA, a retina specialist at the Palo Alto Medical Foundation in California, was one of the image readers in the Google deep-learning project and a physician-consultant to Google. He said it’s important to remember that while Google is looking at this project from a global health perspective, “There’s a lot of help needed at home too. The global health perspective does include our own urban and more rural areas where people don’t have access to an ophthalmologist.”

Peter Karth, MD, MBA, a vitreoretinal specialist at Stanford University and Oregon Eye Consultants, was another image reader in the study and a physician-consultant to Google. He said one of the challenges of developing the algorithm was gathering the data and finding enough clear examples of DR. Dr. Karth said the overall quality of the images fed into the algorithm was average, and more than half were non-mydriatic. After about 60,000 images, the research team found that adding more no longer improved the algorithm.

Moving forward, Google and the research team hope to apply the algorithm to other ocular diseases, such as glaucoma and macular degeneration. Dr. Karth pointed out that finding images for these diseases will be even more challenging, since they require more complex imaging procedures than DR screening.

One of the most interesting aspects of this project is that Google Brain, the team developing the algorithm, did not use feature engineering. Instead of teaching the computer what a hemorrhage looks like, the researchers input an image with the label, “This is moderate retinopathy.” With enough data, the computer learned what that looks like. As a result, the researchers aren’t sure what features the algorithm is using for its diagnosis. Dr. Rahimy said the algorithm is “probably seeing things that are not congruent with the way we see these diseases.” He explained it might be detecting features that are beyond human resolution or that physicians have never recognized or identified.
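To make the distinction concrete, here is a minimal sketch of this kind of end-to-end supervised training, assuming PyTorch and torchvision. The folder layout, class names, and network are illustrative only; the study used its own data pipeline and an Inception-style architecture, neither of which is reproduced here.

```python
# Hypothetical sketch of end-to-end supervised learning: the model sees
# only (image, severity label) pairs -- no hand-coded hemorrhage or
# exudate detectors. PyTorch is assumed; paths and classes are made up.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Illustrative folder layout: fundus_images/train/<grade>/<image>.jpg,
# where <grade> is e.g. "none", "mild", "moderate", "severe".
transform = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("fundus_images/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# A generic off-the-shelf CNN stands in for the study's actual network.
model = models.resnet18(num_classes=len(train_data.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    # The only supervision is the label, e.g. "this is moderate
    # retinopathy"; the network learns its own internal features,
    # which is why researchers cannot say exactly what it relies on.
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Because the features are learned rather than specified, inspecting what drives a given diagnosis requires separate interpretability work, which is why the team can only speculate about what the algorithm is “seeing.”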

For now, Google’s focus is clearly on research, with a goal of helping underserved areas, and there is no pressure to monetize the project. In time, one could imagine the technology expanding into other imaging modalities and into diagnosing diseases in other parts of the body. Because the algorithm is not tied to any specific instrument or modality, there may well be opportunities for broader use. Another possibility would be to ship the algorithm preinstalled in devices, such as fundus cameras or other imaging platforms, as they are delivered to clinics.


1 Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17216.

Steve Lenier has worked with medical content for almost 30 years, with an emphasis on ophthalmology since 2005. – @SteveLenier

Other Twitter sources: Dr. Rahimy – @SFretina; Dr. Karth – @PeterKarthMD