

On The Depth Of Deep Neural Networks: A Theoretical View

Research Area:  Machine Learning

Abstract:

It is widely believed that depth plays an important role in the success of deep neural networks (DNN). However, to our knowledge this belief lacks solid theoretical justification. We investigate the role of depth from the perspective of the margin bound, in which the expected error is upper bounded by the empirical margin error plus a Rademacher Average (RA) based capacity term. First, we derive an upper bound for the RA of DNN and show that it increases with depth; this indicates a negative impact of depth on test performance. Second, we show that deeper networks tend to have larger representation power (measured by Betti-numbers-based complexity) than shallower networks in multi-class settings, and can thus achieve smaller empirical margin error; this implies a positive impact of depth. Together, these results show that for a DNN with a restricted number of hidden units, increasing depth is not always good, since there is a trade-off between the positive and negative impacts. These results inspire us to seek alternative ways to achieve the positive impact of depth: imposing margin-based penalty terms on the cross-entropy loss so as to reduce the empirical margin error without increasing depth. Our experiments show that this approach achieves significantly better test performance.
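The idea of adding a margin-based penalty to the cross-entropy loss can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: the hinge-style penalty, the `margin` threshold, and the weight `lam` are illustrative assumptions, chosen only to show how a term that penalizes small classification margins can be combined with standard cross-entropy.

```python
import numpy as np

def margin_penalized_loss(logits, labels, margin=1.0, lam=0.1):
    """Cross-entropy plus a hinge-style margin penalty.

    Illustrative sketch only; the paper's actual penalty terms may differ.
    `margin` and `lam` are hypothetical hyperparameters.
    """
    n = logits.shape[0]
    # Standard softmax cross-entropy (numerically stabilized).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(n), labels].mean()
    # Margin of each example: correct-class logit minus best wrong-class logit.
    correct = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf
    best_wrong = masked.max(axis=1)
    # Penalize examples whose margin falls below the threshold.
    penalty = np.maximum(0.0, margin - (correct - best_wrong)).mean()
    return ce + lam * penalty
```

On confidently separated examples the penalty vanishes and the loss reduces to plain cross-entropy, while examples near or past the decision boundary receive an extra push toward a larger margin.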

Keywords:  

Author(s) Name:  Shizhao Sun, Wei Chen, Liwei Wang, Xiaoguang Liu, and Tie-Yan Liu

Journal name:  

Conference name:  Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (2016)

Publisher name:  AAAI.ORG

DOI:  

Volume Information:  AAAI-2016, pp. 2066-2072