Overview of the task

Previous shared tasks on coreference resolution (e.g., the SemEval 2010 shared task Coreference Resolution in Multiple Languages, the CoNLL 2011 and CoNLL 2012 shared tasks) operated in a setting where a large amount of training data was provided to train coreference resolvers in a fully supervised manner. Our shared task has a different goal: we are primarily interested in a low-resource setting. In particular, we seek to investigate how well one can build a coreference resolver for a language for which there is no coreference-annotated data available for training.

Given the rising interest in annotation projection, we offer a projection-based task that will facilitate the application of existing coreference resolution algorithms to new languages. We believe that, with this exciting setting, the shared task can help promote the development of coreference technologies applicable to a larger number of natural languages than is currently possible.

This year we focus on two languages: German and Russian. To mimic a low-resource setting, no German or Russian coreference-annotated data will be provided. Instead, to facilitate system development, participants will be provided with two versions of an English-German-Russian parallel corpus: an unlabelled version and a labelled version. In the labelled version, the English side of the parallel corpus has been automatically coreference-labelled with the Berkeley coreference resolver, trained on the English OntoNotes corpus.

Participants will compete in two tracks: 

  1. Closed track: projection-based coreference resolution on German and Russian. The only coreference-annotated training data that participants may use is the English OntoNotes corpus. Alternatively, they may use any publicly available coreference resolver trained on English OntoNotes. They can then use whatever parallel corpus and method they prefer to project the English annotations onto German/Russian (see the projection sketch after this list) and subsequently train a new coreference resolver on the projected annotations. As additional linguistic information, participants may use POS information produced by the parser of their choice. Note that they do not have to use the provided English-German-Russian parallel corpus.
  2. Open track: coreference resolution on German and Russian with no restriction on the kind of coreference-annotated data participants may use for training. For instance, they can label their own German/Russian coreference data and use it to train a German/Russian coreference resolver, or they can adopt a heuristic-based approach in which they employ knowledge of German/Russian to write coreference rules for these languages.

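To make the closed-track setup concrete, below is a minimal sketch of direct annotation projection over word alignments. It assumes sentence-aligned data with word alignments in the common fast_align "i-j" format; the function names and the span representation are illustrative only, not part of any official baseline.

    # Minimal sketch of direct annotation projection, assuming one-to-one
    # sentence alignment and precomputed word alignments (e.g., fast_align
    # "i-j" pairs: source token i aligns to target token j).
    # All names here are illustrative, not an official baseline.

    from collections import defaultdict

    def parse_alignment(line):
        """Parse a fast_align-style line like '0-0 1-2' into a dict
        mapping each source token index to its aligned target indices."""
        align = defaultdict(list)
        for pair in line.split():
            src, tgt = pair.split("-")
            align[int(src)].append(int(tgt))
        return align

    def project_mention(span, align):
        """Project an English mention span (start, end, inclusive) onto
        the target sentence as the min/max of aligned target positions."""
        targets = [t for i in range(span[0], span[1] + 1)
                   for t in align.get(i, [])]
        if not targets:          # unaligned mention: drop it
            return None
        return (min(targets), max(targets))

    def project_sentence(mentions, align_line):
        """mentions: {(start, end): cluster_id} from the labelled English
        side. Returns projected target-side spans with the same ids."""
        align = parse_alignment(align_line)
        projected = {}
        for span, cluster in mentions.items():
            tgt_span = project_mention(span, align)
            if tgt_span is not None:
                projected[tgt_span] = cluster
        return projected

    # Example: English mention (0, 1) in cluster 3, alignments 0-0 1-2.
    print(project_sentence({(0, 1): 3}, "0-0 1-2"))   # {(0, 2): 3}

Dropping unaligned mentions, as done above, trades recall for precision in the projected training data; participants may of course prefer more elaborate projection heuristics.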
Participants may take part in either or both tracks. Systems will be run on the test data by the participants themselves, who are required to send their outputs to the Shared Task Coordinator by December 27th (CET). If you are planning to participate, please register for the shared task by sending your name, affiliation and the track(s) you will be participating in to the Shared Task Coordinator.

Data package

Training set

As the training set, we have chosen the English-German and English-Russian parts of the News-Commentary11 parallel corpus. The original sentence-aligned text files were split into documents and tokenised using EuroParl tools. The English part was preprocessed and coreference-resolved with the Berkeley Entity Resolution System (in "coref_predict" mode).
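The coreference layer on the labelled English side follows the CoNLL-2012 column convention, in which the last column of each token line opens and closes numeric cluster ids with brackets. The following is a minimal sketch of how that column can be decoded into mention spans; the helper name and span representation are our own assumptions, not an official reader.

    # Minimal sketch for reading the coreference column of a
    # CoNLL-2012-style file (as produced for the labelled English side).
    # A cell such as "(3", "3)", "(3)" or "-" opens/closes cluster ids.

    import re

    def read_coref_column(cells):
        """cells: the last-column value for each token of one sentence.
        Returns {(start, end): cluster_id} with inclusive token spans."""
        mentions, open_spans = {}, {}
        for idx, cell in enumerate(cells):
            if cell == "-":
                continue
            for part in cell.split("|"):
                m = re.match(r"(\()?(\d+)(\))?$", part)
                opens, cid, closes = m.group(1), int(m.group(2)), m.group(3)
                if opens:
                    open_spans.setdefault(cid, []).append(idx)
                if closes:
                    start = open_spans[cid].pop()
                    mentions[(start, idx)] = cid
        return mentions

    # "(0" ... "0)" is a two-token mention of cluster 0; "(1)" is a
    # single-token mention of cluster 1.
    print(read_coref_column(["(0", "0)", "-", "(1)"]))
    # {(0, 1): 0, (3, 3): 1}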

Please download the training dataset here.

Test set

Additional resources:

Documentation:

Evaluation

The evaluation will be done on a manually annotated German-Russian parallel corpus. The guidelines used for annotating the corpus are largely compatible with the OntoNotes guidelines for English (Version 6.0) in terms of the types of referring expressions that are annotated. The exceptions are that we (a) handle only NPs and do not annotate verbs that are coreferent with NPs, (b) include appositions in the markable span rather than marking them as a separate relation, and (c) annotate pronominal adverbs in German if they co-refer with an NP. Please check our GitHub repository for the complete guidelines and sample annotations.

As in CoNLL 2012, we will compute a number of existing scoring metrics (MUC, B-CUBED, CEAF and BLANC) and use the unweighted average of the MUC, B-CUBED and CEAF scores, computed by the official CoNLL 2012 scorer, to determine the winning system. We will not evaluate singletons, and we kindly ask participants to exclude them from the submitted data.
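For reference, the official scorer (scorer.pl from the reference-coreference-scorers package) is run once per metric, and the ranking uses the unweighted mean of the three F1 values. The sketch below shows one way to drive it; the parsing of its "Coreference: ... F1:" summary line is an assumption about the output format, and the file names are placeholders.

    # Sketch of driving the official CoNLL-2012 scorer (scorer.pl) to
    # obtain the averaged score. The regex over its "Coreference: ...
    # F1: xx.xx%" summary line is an assumption about its output format.

    import re
    import subprocess

    def conll_f1(metric, key_file, response_file):
        """Run scorer.pl for one metric and return the coreference F1 (%)."""
        out = subprocess.run(
            ["perl", "scorer.pl", metric, key_file, response_file],
            capture_output=True, text=True, check=True,
        ).stdout
        # The last matching line holds the score over the whole document set.
        return float(re.findall(r"Coreference:.*F1: ([\d.]+)%", out)[-1])

    def conll_average(key_file, response_file):
        """Unweighted average of MUC, B-cubed and CEAF-e F1, as in CoNLL 2012."""
        scores = [conll_f1(m, key_file, response_file)
                  for m in ("muc", "bcub", "ceafe")]
        return sum(scores) / len(scores)

    # Placeholder file names; substitute the gold key and your system output.
    print(conll_average("key.conll", "response.conll"))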


Results

The winning team of the CORBON 2017 Shared Task is Michal Novak and Anna Nedoluzhko from Charles University, Czech Republic.


We thank everyone who expressed interest in the shared task and congratulate the Prague team on their achievement.

Contact

If you are interested in participating in the shared task, please sign up for our discussion group to make sure you don't miss any important information.

Feel free to contact the Shared Task Coordinator Yulia Grishina if you have any questions.