Research Area:  Machine Learning
Deep learning has become a promising approach for automated support of clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single-institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification on 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in performance comparable to that of centrally hosted patient data, and that the performance of the cyclical weight transfer heuristic improves with a higher frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
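To illustrate the cyclical weight transfer heuristic described above, the following is a minimal sketch (not the authors' code) in which a single model's weights cycle through each institution's private data, training locally for a short interval at each stop before moving on; the function names, the toy linear model, and all parameters are hypothetical stand-ins for the deep networks used in the study.

```python
import numpy as np

def train_locally(weights, data, lr=0.1):
    """Hypothetical local update: one gradient step on a linear
    least-squares model (stands in for several epochs of
    deep-network training at one institution)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def cyclical_weight_transfer(institution_data, n_cycles=10):
    """Pass the same weight vector around all institutions in order
    for n_cycles full cycles. Only the weights move between sites;
    each institution's (X, y) data never leaves its own scope."""
    n_features = institution_data[0][0].shape[1]
    weights = np.zeros(n_features)
    for _ in range(n_cycles):
        for data in institution_data:  # weights visit each site in turn
            weights = train_locally(weights, data)
    return weights

# Toy example: 4 "institutions", each holding private (X, y) samples
# drawn from the same underlying relationship.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
institutions = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    institutions.append((X, X @ true_w))

w = cyclical_weight_transfer(institutions, n_cycles=50)
```

In this toy setting the shared weights converge toward the common underlying parameters even though no institution ever exposes its raw data, which is the core idea of the model-distribution approach; increasing the transfer frequency (more, shorter local training intervals per cycle) mirrors the high-frequency transfer regime the abstract reports as beneficial.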
Distributed Deep Learning
Author(s) Name:  Ken Chang, Niranjan Balachandar, Carson Lam, Darvin Yi, James Brown, Andrew Beers, Bruce Rosen, Daniel L Rubin, Jayashree Kalpathy-Cramer
Journal name:  Journal of the American Medical Informatics Association
Publisher name:  Oxford University Press
Volume Information:  Volume 25, Issue 8, August 2018, Pages 945–954
Paper Link:   https://academic.oup.com/jamia/article/25/8/945/4956468?login=false