|Date|Mar 02, 2012|
|Title|Beyond MaltParser -- Recent Advances in Transition-Based Dependency Parsing|
The transition-based approach to dependency parsing has become popular thanks to its simplicity and efficiency. Systems like MaltParser achieve linear-time parsing with projective dependency trees, using locally trained classifiers to predict the next parsing action and greedy best-first search to retrieve the optimal parse tree, assuming that the input sentence has been morphologically disambiguated using a part-of-speech tagger. In this talk, I survey recent developments in transition-based dependency parsing that address some of the limitations of the basic transition-based approach. First, I discuss different methods for extending the coverage to non-projective trees, which are required for linguistic adequacy in many languages. Second, I show how globally trained classifiers and beam search can be used to mitigate error propagation and enable richer feature representations. Finally, I present a model for joint tagging and parsing that improves both tagging and parsing accuracy compared to the standard pipeline approach.
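To make the basic mechanism concrete, here is a minimal sketch of greedy transition-based parsing in the arc-standard style used by MaltParser-like systems. The transition system (SHIFT, LEFT-ARC, RIGHT-ARC) is standard; the `toy_classifier` is a hypothetical placeholder for the locally trained classifier the abstract describes, and its rule (always attach a word to the word on its right) is purely illustrative.

```python
# A sketch of greedy arc-standard transition-based dependency parsing.
# The transition system is standard; toy_classifier stands in for a
# locally trained classifier (e.g. an SVM over configuration features).

SHIFT, LEFT_ARC, RIGHT_ARC = "SHIFT", "LEFT-ARC", "RIGHT-ARC"

def toy_classifier(stack, buffer):
    """Placeholder classifier: builds a head-final chain for illustration.
    A real system would score transitions from rich configuration features."""
    if len(stack) >= 2 and stack[-2] != 0:  # 0 is the artificial ROOT
        return LEFT_ARC
    if buffer:
        return SHIFT
    return RIGHT_ARC

def parse(n_words, predict=toy_classifier):
    """Greedy best-first parsing: repeatedly apply the predicted transition
    until only ROOT remains. Words are numbered 1..n_words; returns a dict
    mapping each word to its head (0 = ROOT). Runs in O(n) transitions and
    can only produce projective trees."""
    stack, buffer, heads = [0], list(range(1, n_words + 1)), {}
    while buffer or len(stack) > 1:
        action = predict(stack, buffer)
        if action == SHIFT:
            stack.append(buffer.pop(0))        # move next word onto the stack
        elif action == LEFT_ARC:
            dep = stack.pop(-2)                # top of stack heads the word below
            heads[dep] = stack[-1]
        else:                                  # RIGHT-ARC
            dep = stack.pop()                  # word below heads the top of stack
            heads[dep] = stack[-1]
    return heads

print(parse(3))  # each word attaches to its right neighbour: {1: 2, 2: 3, 3: 0}
```

Since each of the n words is shifted once and reduced once, the parser performs 2n transitions, which is the source of the linear-time guarantee; the cost is that a single early misprediction can propagate through all later decisions, which is exactly the weakness that beam search and globally trained models are meant to mitigate.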