Call for papers
Background
Many NLP researchers, especially those not working in the area of discourse processing, tend to equate coreference resolution with the kind of coreference addressed in MUC, ACE, and OntoNotes, and have the impression that coreference is a well-worn task, owing in part to the large number of coreference papers reporting results on the MUC/ACE/OntoNotes coreference corpora. This is an unfortunate misconception: previous shared tasks on coreference resolution have largely focused on entity coreference, which is only one of the many kinds of coreference relations that have been discussed in theoretical and computational linguistics over the past few decades. In fact, by focusing on entity coreference resolution, NLP researchers have only scratched the surface of the wealth of interesting problems in coreference resolution.
The first workshop on Coreference Resolution Beyond OntoNotes (CORBON 2016), held in conjunction with NAACL HLT 2016, was born out of the concern that there was no proper venue for discussing and disseminating coreference work that goes beyond entity coreference, given that the main NLP conferences each accept only a small number of coreference papers, almost all of which focus on entity coreference. CORBON 2016 sought to (1) encourage work on under-investigated coreference resolution tasks as well as coreference resolution in under-investigated languages and (2) provide a forum for coreference researchers to discuss and present such work. The workshop was quite successful in achieving its goals: 65% of the submissions focused on coreference resolution in less-investigated languages (e.g., Russian, German, Hindi, Tamil, Basque, Czech, Polish), and more than half of the submissions focused on under-investigated coreference tasks, including the resolution of bridging references, metonyms, generic anaphora, cataphora, event anaphora, anaphoric connectives, and verb-phrase ellipsis.
Objectives
In addition to continuing our efforts to promote work on (1) under-investigated coreference tasks and (2) coreference resolution in less-researched languages (see the Topics section below for details), CORBON 2017 comes with three major innovations:
First, the workshop will include a special theme on knowledge-rich coreference resolution, for which we especially encourage submissions that employ sophisticated knowledge sources (e.g., semantics and world knowledge) for coreference resolution. This special theme was motivated by one of the invited speakers of CORBON 2016, Michael Strube, who pointed out that the performance of coreference models that do not employ sophisticated knowledge is plateauing, and that gains beyond the current state of the art will likely come from the incorporation of sophisticated knowledge sources. We believe that such knowledge sources will be especially important for challenging coreference tasks that go beyond entity coreference, as simple string-matching facilities are unlikely to be useful for such tasks.
Second, we will organize a shared task on coreference resolution as part of the workshop. Previous shared tasks on coreference resolution operated in a setting where a large amount of training data was provided to train coreference resolvers in a fully supervised manner. Our shared task has a different goal: we seek to investigate how coreference annotations can be projected from English to other languages for which large amounts of unannotated parallel data are available. We believe that, with this exciting setting, the shared task can help promote the development of coreference technologies that are applicable to a larger number of natural languages than is currently possible.
Third, we will organize a one-hour panel on future research directions for coreference resolution. To facilitate interaction between the panellists and the participants, we will include a Q&A period at the end of the panel discussion.
Topics
The workshop welcomes submissions describing both theoretical and applied computational work on coreference resolution, especially work on languages other than English, less-researched forms of coreference, and new applications of coreference resolution. Submissions are expected to discuss theories, evaluation, limitations, system development, and techniques relevant to the workshop topics. Topics of interest include, but are not limited to, the following:
- Coreference resolution for less-researched languages (e.g., annotation strategies, resolution modules and formal evaluation)
- Evaluation of the influence of language-specific properties (e.g., the lack of articles, quasi-anaphora, ellipsis, or the complexity of reflexive pronouns) on coreference resolution
- Representation of coreferential relations other than identity coreference (e.g., bridging references, reference to abstract entities, etc.)
- Investigation of difficult cases of anaphora and coreference and their resolution, e.g., by drawing on discourse-level and pragmatic information
- Coreference resolution in noisy data (e.g., in speech and social networks)
- New applications of coreference resolution
Since progress on these under-explored coreference tasks is currently limited in part by the scarcity of annotated corpora, papers that describe the creation and annotation of corpora are particularly welcome, especially corpora covering less-investigated coreference phenomena or less-researched languages. In addition, the program committee members will be asked to give special attention to submissions that echo our special theme on knowledge-rich coreference resolution, which, as mentioned above, involves the use of sophisticated knowledge sources for coreference resolution.
Submission instructions
We solicit previously unpublished work, presented as either long or short papers, following the EACL 2017 style guidelines and produced with the official LaTeX template. To be included in the final proceedings, accepted papers must be made available both as LaTeX source and as a PDF.
The camera-ready version of a long paper may have at most nine pages of content plus two additional pages for references; short papers are limited to five pages of content plus two pages for references.
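For orientation only, a submission built on the official template might start from a skeleton such as the one below. This is a minimal sketch: the style file name (eacl2017.sty) and the loaded packages are assumptions, so please defer to the official EACL 2017 template for the authoritative setup.

% Minimal sketch of a submission skeleton; the style file name (eacl2017.sty)
% is an assumption -- use whatever the official EACL 2017 template provides.
\documentclass[11pt]{article}
\usepackage{eacl2017}  % assumed name of the official style file
\usepackage{times}
\usepackage{latexsym}

\title{Your Paper Title}
\author{Author Name \\ Affiliation \\ {\tt email@example.com}}

\begin{document}
\maketitle

\begin{abstract}
A short abstract describing the contribution.
\end{abstract}

\section{Introduction}
Body text goes here, within the page limits stated above.

\end{document}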