Frequently Asked Questions
- How do I submit my solution?
To add your solution to the leaderboard, first click the "Sign in" button in the upper right corner, allow the RuCoLA application to access your account, and fill out the registration form. After that, the "Submit" button will become available.
Your submission must include the method name, the team name, and a file with predictions for the test set. We also encourage you to attach a link with more information about your solution if you want to make it public or simply share more about the underlying model.
Submit your predictions in a CSV file with two columns named "id" and "acceptable", where the first column contains the indices of the corresponding sentences from the test set and the second contains binary predictions. You can find a sample submission file in our GitHub repository.
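As an illustration, such a file could be assembled with pandas as in the minimal sketch below; the prediction values and the output file name are hypothetical placeholders, so check the sample file in the repository for the authoritative format.

```python
import pandas as pd

# Hypothetical binary predictions for the test set (1 = acceptable, 0 = unacceptable).
test_ids = [0, 1, 2, 3]
predictions = [1, 0, 1, 1]

# Two-column format described above: "id" and "acceptable".
submission = pd.DataFrame({"id": test_ids, "acceptable": predictions})
submission.to_csv("submission.csv", index=False)
```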
- What are the metrics for the task?
We use two binary classification metrics: standard classification accuracy and the Matthews Correlation Coefficient (MCC). The latter was used in the original CoLA work and is often preferred to accuracy for imbalanced classification problems, which is why we consider it our main metric.
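For reference, MCC is computed from the binary confusion matrix as (TP·TN - FP·FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)). Both metrics are available in scikit-learn; here is a minimal sketch with toy labels, not the official evaluation code:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Toy labels and predictions purely for illustration.
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")     # fraction of correct labels
print(f"MCC:      {matthews_corrcoef(y_true, y_pred):.3f}")  # in [-1, 1]; 0 = chance level
```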
- How does the leaderboard work?
We sort all participating methods by the overall MCC value; for convenience, we also display separate results for expert-written and machine-generated sentences on the "By source" tab. For each pair of team name and model name, only the best result is displayed on the leaderboard.
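Conceptually, the deduplication and ranking work as in the following sketch; the submission log and its column names are assumptions made for illustration, not the actual leaderboard code:

```python
import pandas as pd

# Hypothetical submission log; team, model, and score values are invented.
submissions = pd.DataFrame({
    "team":  ["A", "A", "B"],
    "model": ["ruBERT", "ruBERT", "ruRoBERTa"],
    "mcc":   [0.32, 0.41, 0.38],
})

# Keep only the best MCC per (team, model) pair, then sort for display.
leaderboard = (
    submissions.groupby(["team", "model"], as_index=False)["mcc"].max()
               .sort_values("mcc", ascending=False)
)
print(leaderboard)
```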
- Where do the datasets come from?
We compile RuCoLA from several sources: linguistics publications and textbooks, tasks from the Unified State Exam, student essays, and sentences generated by machine translation and paraphrase models. You can find more detailed information in the accompanying blog post (in Russian).
- What does the "Human
Baseline" solution stand for?This line on the leaderboard represents the performance of BA/MA students in linguistics and philology who manually labeled the sentences from the test set. We provide it as a point of reference for all automatic methods. For now, the human baseline results are available only for expert-written sentences; performance for machine-generated examples will be evaluated in the near future.
- Why are there two leaderboards?
There are two groups of error sources in our data. We aggregate model performance across all groups for the main leaderboard but provide detailed metrics for each group to simplify the analysis and interpretation of the results.
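To make the aggregation concrete, here is a toy sketch of computing the overall MCC alongside per-source values; the labels and source tags below are invented for illustration:

```python
from sklearn.metrics import matthews_corrcoef

# Invented test-set slice: each example belongs to one error-source group.
sources = ["expert", "expert", "expert", "machine", "machine", "machine"]
y_true  = [1, 0, 1, 1, 0, 0]
y_pred  = [1, 0, 0, 1, 0, 1]

# The main leaderboard ranks methods by the MCC over all examples ...
print("overall:", matthews_corrcoef(y_true, y_pred))

# ... while the "By source" tab reports the metric within each group.
for group in ("expert", "machine"):
    idx = [i for i, s in enumerate(sources) if s == group]
    mcc = matthews_corrcoef([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(group + ":", mcc)
```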
- What license is RuCoLA distributed under?
The baseline solution code and acceptability labels are distributed under the Apache 2.0 license; you can read the exact terms in our repository. The original texts were obtained from several sources, which are listed in the repository; the rights to these texts belong to their respective authors.
- How do I cite this work?
For a BibTeX citation, please see the corresponding section of the RuCoLA GitHub repository.