Abstract

Many patterns found in natural language syntax admit multiple possible explanations or structural descriptions. Even within the currently dominant Minimalist theoretical framework (Chomsky 1995, 2000), it is not uncommon to find multiple analyses of the same phenomenon proposed in the literature. A natural question, then, is whether syntactic proposals can be evaluated and compared from a quantitative point of view. Taking this line of reasoning further, I aim to capture, formalize, and subsequently automate the intuition behind the process of developing a syntactic analysis. The contributions of this dissertation are threefold. First, I show how an evaluation measure inspired by the Minimum Description Length principle (Rissanen 1978) can be used to compare accounts of syntactic phenomena implemented as minimalist grammars (Stabler 1997), and how arguments for and against a given analysis translate into quantitative differences. Next, I build on Kobele's (2018, to appear) notion of lexical item decomposition to propose a principled way of making linguistic generalizations by detecting and eliminating syntactic and phonological redundancies in the data. Finally, I design and implement an optimization algorithm capable of transforming a naive minimalist grammar over unsegmented words into a grammar over morphemes. As a proof of concept, I conduct a number of experiments on fragments of English grammar, including the auxiliary system, passives, and raising verbs; l-selection of prepositional phrases; and allomorphy in verb stems. The experiments demonstrate how optimizing a quantitative measure can produce linguistically plausible analyses similar to those proposed in the theoretical literature.
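The abstract names the Minimum Description Length principle but does not spell out the measure itself. For orientation only, measures of this kind typically instantiate the standard two-part MDL objective (Rissanen 1978): among candidate grammars, select the grammar $G$ that minimizes

\[
  L(G) + L(D \mid G),
\]

where $L(G)$ is the length (in bits) of encoding the grammar itself and $L(D \mid G)$ is the length of encoding the observed data $D$ with the grammar's help. The notation here is the standard one from the MDL literature, not taken from the dissertation; the concrete encoding of minimalist grammars under this scheme is the dissertation's own contribution and may differ in its details.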
