Research Area:  Machine Learning
Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks appear to extract generally useful linguistic features. A natural question is how such networks represent this information internally. This paper describes qualitative and quantitative investigations of one particularly effective model, BERT. At a high level, linguistic features seem to be represented in separate semantic and syntactic subspaces. We find evidence of a fine-grained geometric representation of word senses. We also present empirical descriptions of syntactic representations in both attention matrices and individual word embeddings, as well as a mathematical argument to explain the geometry of these representations.
Keywords:  
Author(s) Name:  Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Journal name:  Computer Science
Conference name:  
Publisher name:  arXiv (preprint arXiv:1906.02715)
DOI:  10.48550/arXiv.1906.02715
Volume Information:  
Paper Link:   https://arxiv.org/abs/1906.02715
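
As a companion to the abstract above, the sketch below shows how to obtain the raw objects the paper analyzes: per-token contextual embeddings and per-head attention matrices from a pretrained BERT model. This is a minimal illustration assuming the Hugging Face transformers library and PyTorch are installed; it is not the authors' own code, and the example sentence is arbitrary.

```python
# Minimal sketch: extract BERT hidden states and attention matrices,
# the raw material of the paper's geometric analysis (illustration only).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained(
    "bert-base-uncased",
    output_hidden_states=True,  # return embeddings from every layer
    output_attentions=True,     # return attention matrices from every head
)
model.eval()

sentence = "The keys to the cabinet are on the table."  # arbitrary example
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of 13 tensors (embedding layer + 12 transformer layers),
# each of shape [batch, seq_len, 768] -- the word embeddings whose geometry
# (semantic/syntactic subspaces, word-sense clusters) the paper studies.
hidden_states = outputs.hidden_states

# attentions: tuple of 12 tensors, each of shape [batch, num_heads, seq_len, seq_len]
# -- the attention matrices in which the paper probes for syntactic relations.
attentions = outputs.attentions

print(len(hidden_states), hidden_states[-1].shape)
print(len(attentions), attentions[0].shape)
```

From these tensors one can, for example, fit linear probes on the hidden states or inspect individual attention heads, which is the general style of analysis the abstract describes.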