WMT25: General MT

The task focuses on evaluating the general capabilities of machine translation (MT) systems. Its primary goal is to test performance across a wide range of languages, domains, genres, and modalities. Systems are evaluated by human judges.

Official website: https://www2.statmt.org/wmt25/translation-task.html


Leaderboard of WMT25: General MT

wmt25genmt test set (*-*)

 #  Name                       BLEU  chrF  Date
 1  Anonymous submission #117  ---   ---   July 7, 2025, 9:44 a.m.
 2  Anonymous submission #116  ---   ---   July 7, 2025, 9:41 a.m.
 3  Anonymous submission #115  ---   ---   July 7, 2025, 9:26 a.m.
 4  Anonymous submission #114  ---   ---   July 4, 2025, 5:59 p.m.
 5  Anonymous submission #113  ---   ---   July 4, 2025, 5:40 p.m.
 6  Anonymous submission #112  ---   ---   July 4, 2025, 4:48 p.m.
 7  Anonymous submission #111  ---   ---   July 4, 2025, 4:46 p.m.
 8  Anonymous submission #110  ---   ---   July 4, 2025, 2:58 p.m.
 9  Anonymous submission #109  ---   ---   July 4, 2025, 2:14 p.m.
10  Anonymous submission #108  ---   ---   July 4, 2025, 1:19 p.m.
BLEU and chrF are sacreBLEU scores. Systems shown in boldface are your submissions. Only the top 10 submissions per language pair are displayed. Submission validation errors are denoted by a score of -1.0.

Click on a column header to sort the table. Hold down the Shift key and click a second column header to sort by multiple criteria.