Abstract
Deep transformer models have pushed performance on NLP tasks to new limits, suggesting sophisticated treatment of complex linguistic inputs. However, we have limited understanding of how these models represent input sequences, and of whether their processing reflects sophisticated composition of meaning akin to that performed by humans. In this dissertation, we take steps toward analyzing and improving compositionality in neural language models.
We present a systematic analysis of phrasal representations in state-of-the-art pre-trained transformers. Our tests leverage human judgments of phrase similarity and meaning shift, and we compare results before and after controlling for word overlap, to tease apart lexical effects from composition effects. We find that phrase representations in these models rely heavily on word content, with little evidence of nuanced composition. We also identify variations in phrase representation quality across models, layers, and representation types, and make corresponding recommendations for using representations from these models.
Motivated by these observations of pre-trained transformers, we explore directions for improving compositionality in neural language models. We first investigate the impact of fine-tuning on the capacity of contextualized embeddings to capture phrase meaning beyond lexical content. Specifically, we fine-tune models on an adversarial paraphrase classification task with high lexical overlap, and on a sentiment classification task. After fine-tuning, we assess phrasal representations in controlled settings following prior work. We find that fine-tuning largely fails to benefit compositionality in these representations, though training on sentiment yields a small, localized benefit for certain models. In follow-up analyses, we identify confounding cues in the paraphrase dataset that may explain the lack of composition benefits from that task, and we discuss factors underlying the localized benefits of sentiment training. Finally, we inspect a model with a compositional architecture and show that it exhibits only weak compositionality despite incorporating explicit composition structure.