
📈 Results

The evaluation data also contained a set of artificial transcriptions, created by introducing different types of errors: randomly removing and adding words, and changing and shuffling characters. The official competition ranking is computed on the original data only, using CERR:
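As an illustration, the error types listed above (removed and added words, changed and shuffled characters) can be injected roughly as follows. This is a hypothetical sketch, not the competition's actual generator; the function name `corrupt` and the per-word error probability `p` are our own choices.

```python
import random

def corrupt(text: str, rng: random.Random, p: float = 0.1) -> str:
    # For each word, with probability p apply one of the four error
    # types mentioned in the text; otherwise keep the word unchanged.
    out = []
    for w in text.split():
        if rng.random() < p:
            kind = rng.choice(["remove", "add", "change", "shuffle"])
            if kind == "remove":          # drop the word entirely
                continue
            if kind == "add":             # duplicate (add) a word
                out.append(w)
            elif kind == "shuffle" and len(w) > 1:
                chars = list(w)           # shuffle the word's characters
                rng.shuffle(chars)
                w = "".join(chars)
            elif kind == "change":        # replace one character at random
                i = rng.randrange(len(w))
                w = w[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + w[i + 1:]
        out.append(w)
    return " ".join(out)
```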

The official HTREC 2022 ranking, based on CERR on the original evaluation data. WERR on the original data, and CERR and WERR on the synthetic data, are also shown.

     Submission ID  Name             CERR   WERR  CERRsynth  WERRsynth
  1  191047         neverix          2.53  14.97      -7.72     -23.14
  2  191394         MichaelIbrahim   1.03   4.63      -7.78     -31.86
  3  191686         lacemaker        0.57   3.40      -0.63      -3.11
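For readers unfamiliar with the metrics, the following sketch assumes CERR is the reduction in character error rate (CER), in percentage points, from the original HTR transcription to the system's corrected output, measured against the gold transcription; WERR is the analogous reduction at the word level. The helper names below are our own, not from the official scorer.

```python
def edit_distance(a, b) -> int:
    # Standard Levenshtein distance via dynamic programming over two rows.
    prev = list(range(len(b) + 1))
    for i, xa in enumerate(a, 1):
        cur = [i]
        for j, xb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (xa != xb)))   # substitution
        prev = cur
    return prev[-1]

def cer(hyp: str, ref: str) -> float:
    # Character error rate: character edits normalized by reference length.
    return edit_distance(hyp, ref) / max(len(ref), 1)

def cerr(source: str, corrected: str, gold: str) -> float:
    # Error-rate reduction in percentage points; negative values mean the
    # correction made the transcription worse, as seen on the synthetic data.
    return 100 * (cer(source, gold) - cer(corrected, gold))

def werr(source: str, corrected: str, gold: str) -> float:
    # Same idea at the word level, using word lists instead of characters.
    s, c, g = source.split(), corrected.split(), gold.split()
    return 100 * (edit_distance(s, g) / max(len(g), 1)
                  - edit_distance(c, g) / max(len(g), 1))
```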


Note that only the best submission per team is shown here. We also share the full ranking as a CSV (a Challenge Resource), so participants can re-rank by the synthetic-data scores, e.g., with the following commands after loading it through pandas:

>>> import pandas as pd
>>> official_ranking = pd.read_csv("official_ranking.csv")
>>> # re-rank by CERR on the synthetic data, best (highest) first
>>> official_ranking.sort_values(by="sCERR", ascending=False).head(30)