BLEURT: Learning Robust Metrics for Text Generation (Paper Explained)


Proper evaluation of text generation models, such as machine translation systems, requires expensive and slow human assessment. As these models have improved in recent years, proxy scores like BLEU have become less and less useful. This paper proposes to learn the proxy score instead, and demonstrates that the learned metric correlates well with human ratings, even as the data distribution shifts.
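To make the "learned proxy score" concrete, here is a minimal sketch of the framing as supervised regression: a BERT model reads a (reference, candidate) sentence pair and is fine-tuned to predict a scalar human rating. This is an illustrative setup using HuggingFace Transformers, not the paper's actual training code; the sentences and the rating are made up.

```python
# Sketch: evaluation as regression from a (reference, candidate) pair
# to a human quality score. Illustrative only.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 turns the classification head into a scalar regression head.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

reference = "The cat sat on the mat."
candidate = "A cat was sitting on the mat."
human_rating = torch.tensor([0.8])  # hypothetical human quality score

# Encode the pair as a single BERT input (reference [SEP] candidate).
inputs = tokenizer(reference, candidate, return_tensors="pt")

# With float labels and num_labels=1, the model computes an MSE regression loss,
# so fine-tuning pushes the predicted score toward the human rating.
outputs = model(**inputs, labels=human_rating)
outputs.loss.backward()
```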

OUTLINE:
0:00 - Intro & High-Level Overview
1:00 - The Problem with Evaluating Machine Translation
5:10 - Task Evaluation as a Learning Problem
10:45 - Naive Fine-Tuning BERT
13:25 - Pre-Training on Synthetic Data
16:50 - Generating the Synthetic Data
18:30 - Priming via Auxiliary Tasks
23:35 - Experiments & Distribution Shifts
27:00 - Concerns & Conclusion

Paper: https://arxiv.org/abs/2004.04696
Code: https://github.com/google-research/bl...

Abstract:
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.

Authors: Thibault Sellam, Dipanjan Das, Ankur P. Parikh
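For reference, the released repository exposes a simple scoring interface. The snippet below follows the usage documented in the project's README at the time; the checkpoint path is a placeholder that you would replace with a downloaded BLEURT checkpoint directory.

```python
# Scoring candidates against references with the released BLEURT package.
from bleurt import score

checkpoint = "path/to/bleurt_checkpoint"  # placeholder: a downloaded checkpoint
references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]

scorer = score.BleurtScorer(checkpoint)
# Returns one learned quality score per (reference, candidate) pair.
scores = scorer.score(references=references, candidates=candidates)
print(scores)
```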

Links:
YouTube: / yannickilcher
Twitter: / ykilcher
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
