GLUE and SuperGLUE
The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine English sentence-understanding tasks used to gauge how well an NLP model performs across general language tasks.