Speaker: Michael Strube
Date: Feb 28, 2014
Time: 11:00AM–12:00PM
Location: IF 4.31/4.33
Title: Issues I Don't Understand about Coreference and Coherence
Abstract: Case 1: A few years ago we developed a hypergraph-based model for coreference resolution where hyperedges represent features and vertices represent mentions (Cai & Strube, Coling 2010). Since a hyperedge can connect more than two vertices, the model captured the set property of coreference relations nicely. The system performed well at the CoNLL'11 shared task on unrestricted coreference resolution (Cai et al., CoNLL-ST 2011). However, when we reduced the hypergraph to a normal graph and replaced the hypergraph clustering algorithm with a simple greedy clustering technique, the performance went up. The simplified system ranked 2nd in the CoNLL'12 shared task on English (Martschat et al., EMNLP-CoNLL-ST 2012). Furthermore, the performance did not even suffer when we turned the approach into an unsupervised one by leaving out the edge weights (Martschat, ACL Student Session 2013).
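The greedy graph clustering mentioned above can be sketched roughly as follows. This is a minimal illustration, not the actual system: the mention representation, the `edge_weight` function, and the threshold are hypothetical stand-ins for the cited features.

```python
# Sketch: greedy best-first clustering on a weighted mention graph.
# Each mention links to the preceding candidate with the highest
# edge weight; links are merged into chains with union-find.

def greedy_cluster(mentions, edge_weight, threshold=0):
    """mentions: list in textual order; edge_weight(a, b) -> float."""
    parent = list(range(len(mentions)))

    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for j in range(1, len(mentions)):
        # pick the best-scoring earlier mention as antecedent, if any
        best, best_w = None, threshold
        for i in range(j):
            w = edge_weight(mentions[i], mentions[j])
            if w > best_w:
                best, best_w = i, w
        if best is not None:
            parent[find(j)] = find(best)

    clusters = {}
    for i in range(len(mentions)):
        clusters.setdefault(find(i), []).append(mentions[i])
    return list(clusters.values())

# toy weight: 1 if the mention strings match, else 0
mentions = ["Obama", "he", "the president", "Obama"]
print(greedy_cluster(mentions, lambda a, b: 1 if a == b else 0))
# -> [['Obama', 'Obama'], ['he'], ['the president']]
```

Leaving out learned edge weights (the unsupervised variant) amounts to replacing `edge_weight` with a plain count of matching relations between the two mentions.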

Case 2: Barzilay & Lapata (ACL 2005, CL 2008) introduced the entity grid model to capture the entity-based local coherence of documents. They create a matrix where columns represent discourse entities and rows represent sentences. Cells indicate the syntactic function the entity occupies in a sentence. Barzilay & Lapata compute probabilities of entity transitions, turn these into feature vectors, and then apply machine learning methods to distinguish between coherent and incoherent documents. We took the basic idea from Barzilay & Lapata, but interpreted the matrix as a bipartite graph. When we computed coherence using simple graph-based measures directly on this graph, the results were basically the same (Guinaudeau & Strube, ACL 2013).
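One simple graph-based measure of this kind can be sketched as follows. This is a toy reading of the idea, not the cited system: it ignores syntactic roles, represents the grid as per-sentence entity sets, and scores coherence as the average weighted out-degree of the one-mode sentence projection of the bipartite graph.

```python
# Sketch: entity-graph coherence on a simplified entity grid.
# The grid is given as one entity set per sentence. Two sentences are
# connected in the projection if they share an entity; the edge weight
# is the number of shared entities. Coherence = average out-degree.

from itertools import combinations

def coherence(sentence_entities):
    """sentence_entities: list of sets of entity names, one per sentence."""
    n = len(sentence_entities)
    if n < 2:
        return 0.0
    total = 0
    for i, j in combinations(range(n), 2):
        shared = sentence_entities[i] & sentence_entities[j]
        total += len(shared)  # weighted edge in the sentence projection
    return total / n  # average out-degree over sentences

doc = [{"Obama", "speech"}, {"Obama", "Congress"}, {"Congress", "bill"}]
print(coherence(doc))  # higher score = more entity sharing between sentences
```

No training step is involved: the score is read off the graph directly, which is what makes the competitiveness with learned models surprising.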

Overall I am puzzled by the lackluster performance of machine learning techniques on these tasks and the competitiveness of our simple graph-based approaches:

- Are our graph-based approaches really that good? Or is the state of the art just too weak?

- Are the tasks of coreference resolution and modeling local coherence ill-defined?

- Do we apply machine learning correctly to these tasks? Are there better ways to exploit annotated training data?

- Which features might help to finally improve the performance?

- Are current machine learning methods appropriate for such tasks?

Bio: Michael Strube leads the Natural Language Processing group at the privately founded Heidelberg Institute for Theoretical Studies in Heidelberg, Germany. He is also Honorarprofessor in the Computational Linguistics Department at the University of Heidelberg. Michael Strube received an M.A. in German Language and Literature from the University of Freiburg in 1992 and a Ph.D. in Computational Linguistics from the same university in 1996. Before joining HITS he was awarded a postdoctoral fellowship at the Institute for Research in Cognitive Science at the University of Pennsylvania. Together with his former Ph.D. student Simone Paolo Ponzetto he received the Honorable Mention for the 2010 IJCAI-JAIR Best Paper Prize for their work on knowledge extraction from Wikipedia.
