JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models against Commonsense Validation and Explanation

Abstract

In this paper, we describe our team's (JUSTers) effort in the Commonsense Validation and Explanation (ComVE) task, which is part of SemEval-2020. We evaluate five pre-trained Transformer-based language models of various sizes on the three proposed subtasks. For the first two subtasks, our best models achieve accuracies of 92.90% and 92.30%, placing our team 12th and 9th, respectively. For the last subtask, our models reach a 16.10 BLEU score and a 1.94 human evaluation score, placing our team 5th and 3rd according to these two metrics, respectively. The latter is only 0.16 away from the 1st-place human evaluation score.
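As a rough illustration of how a pre-trained Transformer can be applied to ComVE Subtask A (choosing which of two similar statements violates commonsense), the sketch below scores each statement with a causal language model and picks the less plausible one. This is not the authors' exact pipeline; the model choice (gpt2) and the perplexity-style scoring heuristic are assumptions made for the example.

```python
# Minimal sketch (illustrative, not the paper's method): rank two statements
# by their average per-token language-model loss and flag the higher-loss one
# as the statement that goes against commonsense.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_loss(sentence: str) -> float:
    """Average per-token LM loss; a higher loss suggests the sentence is
    less plausible to the model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

def pick_nonsensical(sent_a: str, sent_b: str) -> int:
    """Return 0 if sent_a looks like the against-commonsense statement, else 1."""
    return 0 if sentence_loss(sent_a) > sentence_loss(sent_b) else 1

# Example pair in the style of the ComVE data (illustrative).
print(pick_nonsensical("He put an elephant into the fridge.",
                       "He put a turkey into the fridge."))
```

In practice, fine-tuning a classification head on the labeled ComVE pairs would be expected to outperform this zero-shot scoring heuristic.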

Publication
Fourteenth Workshop on Semantic Evaluation
Ali Fadel
Machine Learning Engineer II

Software engineer interested in problem solving and machine learning-based solutions, who likes to create content and teach others.
