Research Area:  Machine Learning
Deep Neural Networks (DNNs) have played a major role in advancing computer vision research in recent years. DNNs are especially effective for tasks where large amounts of labeled data are available. However, for many tasks, such as object detection and semantic segmentation, labeled data is expensive to acquire. In such cases, it is beneficial to apply transfer learning and leverage data from another domain where labels are cheaper to collect. Transfer learning often involves two stages: pre-training on a source task with a large amount of data, and fine-tuning on the target task with relatively less data. In this thesis, we first study the process of training Convolutional Neural Networks (CNNs) for image classification, which is the most widely used source task in computer vision. We examine each step in this process in detail and propose various modifications that improve model accuracy on the source task. Next, we fine-tune the improved source model on target tasks to show that these improvements on the source task transfer to improvements on the target tasks. In the rest of this thesis, we present our work on transfer learning in various application domains, including clustering, automatic 2D-to-3D conversion, and object detection. We demonstrate how transfer learning, in different forms, helps improve performance on the target task by leveraging other datasets and source tasks.
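The two-stage recipe described above (pre-train on a data-rich source task, then fine-tune on a smaller target task) can be sketched with a toy model. This is a hypothetical NumPy illustration of the idea only, not the thesis's CNN pipeline; the datasets, weights, and `train` helper are all invented for the example.

```python
import numpy as np

# Toy illustration of two-stage transfer learning (assumed example,
# not the thesis's actual CNN training procedure).

rng = np.random.default_rng(0)

def train(X, y, w, lr=0.1, steps=200):
    """Logistic regression trained by plain gradient descent."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on log loss
    return w

# Stage 1: pre-train on a large "source" dataset.
Xs = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
ys = (Xs @ true_w > 0).astype(float)
w_src = train(Xs, ys, np.zeros(5))

# Stage 2: fine-tune on a small "target" dataset from a related task,
# initializing from the pre-trained source weights instead of zeros.
Xt = rng.normal(size=(50, 5))
yt = (Xt @ (true_w + 0.2) > 0).astype(float)   # slightly shifted task
w_tgt = train(Xt, yt, w_src.copy(), steps=50)

acc = ((1.0 / (1.0 + np.exp(-Xt @ w_tgt)) > 0.5) == yt).mean()
print(f"target accuracy after fine-tuning: {acc:.2f}")
```

The transfer step is simply the initialization of the target model from the source weights; with deep networks the same idea applies to the convolutional backbone, with a new task-specific head trained on the target data.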
Name of the Researcher:  Xie, Junyuan
Name of the Supervisor(s):  Ali Farhadi
Year of Completion:  2019
University:  University of Washington
Thesis Link:   Home Page Url