Research Area:  Machine Learning
Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of SplitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks, and iii) learning without sharing labels. We compare the performance and resource-efficiency trade-offs of SplitNN against other distributed deep learning methods such as federated learning and large-batch synchronous stochastic gradient descent, and show highly encouraging results for SplitNN.
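The core idea the abstract describes — a network split at a "cut layer," with the client transmitting only activations forward and the server returning only gradients — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (synthetic data, one client layer, one server layer, hypothetical names like `W_c` and `W_s`), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for private patient data: X never leaves the client.
X = rng.normal(size=(8, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # synthetic binary labels

W_c = rng.normal(scale=0.1, size=(4, 6))  # client-side weights (up to the cut layer)
W_s = rng.normal(scale=0.1, size=(6, 1))  # server-side weights (cut layer to output)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for step in range(200):
    # Client forward pass: only the cut-layer activations are sent to the server.
    h = np.tanh(X @ W_c)
    # Server forward pass and loss (binary cross-entropy).
    p = sigmoid(h @ W_s)
    losses.append(float(-np.mean(y * np.log(p + 1e-9)
                                 + (1 - y) * np.log(1 - p + 1e-9))))
    # Server backward pass: updates its own weights, then sends dL/dh back.
    dlogits = (p - y) / len(y)
    gW_s = h.T @ dlogits
    dh = dlogits @ W_s.T  # the only signal returned to the client
    # Client backward pass: finishes backpropagation locally on the raw data.
    gW_c = X.T @ (dh * (1.0 - h ** 2))
    W_s -= lr * gW_s
    W_c -= lr * gW_c
```

Note that neither `X` nor `W_c` is ever visible to the server, and the server's model details (`W_s`) are never visible to the client — the exchanged quantities are only the cut-layer activations `h` and the gradient `dh`, which is the property the abstract highlights.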
Keywords:  
Split Learning
Health
Distributed Deep Learning
Machine Learning
Author(s) Name:  Praneeth Vepakomma, Otkrist Gupta, Tristan Swedish, Ramesh Raskar
Journal name:  Computer Science
Conference name:  
Publisher name:  arXiv
DOI:  10.48550/arXiv.1812.00564
Volume Information:  
Paper Link:  https://arxiv.org/abs/1812.00564