
The Copenhagen Team Participation in the Check-Worthiness Task of the Competition of Automatic Identification and Verification of Claims in Political Debates of the CLEF-2018 CheckThat! Lab

Authors :
Hansen, Casper
Hansen, Christian
Simonsen, Jakob Grue
Lioma, Christina
Editors :
Cappellato, Linda
Ferro, Nicola
Nie, Jian-Yun
Soulier, Laure
Source :
Hansen, C, Hansen, C, Simonsen, J G & Lioma, C 2018, 'The Copenhagen Team Participation in the Check-Worthiness Task of the Competition of Automatic Identification and Verification of Claims in Political Debates of the CLEF-2018 CheckThat! Lab', in L Cappellato, N Ferro, J-Y Nie & L Soulier (eds), CLEF 2018 Working Notes, 10 edn, 81, CEUR-WS.org, CEUR Workshop Proceedings, vol. 2125, 19th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2018, Avignon, France, 10/09/2018.
Publication Year :
2018

Abstract

We predict which claims in a political debate should be prioritized for fact-checking. A particular challenge is, given a debate, how to produce a ranked list of its sentences based on their worthiness for fact-checking. We develop a Recurrent Neural Network (RNN) model that learns a sentence embedding, which is then used to predict the check-worthiness of a sentence. Our sentence embedding encodes both semantic and syntactic dependencies using pretrained word2vec word embeddings as well as part-of-speech tagging and syntactic dependency parsing. This results in a multi-representation of each word, which we use as input to an RNN with GRU memory units; the output from each word is aggregated using attention, followed by a fully connected layer, from which the output is predicted using a sigmoid function. Our approach performs well, achieving the overall second best performing run (MAP: 0.1152) in the competition, as well as the highest overall performance (MAP: 0.1810) for our contrastive run, a 32% improvement over the second highest MAP score in the English language category. In our primary run we combined our sentence embedding with state-of-the-art check-worthiness features, whereas in the contrastive run we used our sentence embedding alone.
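For illustration only (not part of the record), the architecture described in the abstract can be sketched roughly as follows in PyTorch. The class name, layer sizes, and the 350-dimensional per-word input (word2vec plus POS and dependency features) are assumptions, not details taken from the paper; this is a minimal sketch of a GRU with attention pooling and a sigmoid output, not the authors' implementation.

import torch
import torch.nn as nn

class CheckWorthinessRNN(nn.Module):
    # Sketch: per-word multi-representation -> GRU -> attention -> dense -> sigmoid.
    def __init__(self, input_dim=350, hidden_dim=100):  # dimensions are assumptions
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)   # scalar attention score per word
        self.out = nn.Linear(hidden_dim, 1)    # fully connected layer before sigmoid

    def forward(self, x):                      # x: (batch, seq_len, input_dim)
        h, _ = self.gru(x)                     # per-word GRU outputs
        weights = torch.softmax(self.attn(h), dim=1)       # attention over words
        sentence = (weights * h).sum(dim=1)    # attention-weighted sentence embedding
        return torch.sigmoid(self.out(sentence)).squeeze(-1)  # check-worthiness score

# Hypothetical usage: 32 sentences, 40 words each, 350-dim per-word features
model = CheckWorthinessRNN()
scores = model(torch.randn(32, 40, 350))       # (32,) scores in (0, 1)

Ranking a debate's sentences by these scores would then yield the check-worthiness ranking the abstract describes.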

Details

Database :
OAIster
Journal :
CLEF 2018 Working Notes (CEUR Workshop Proceedings, vol. 2125), CEUR-WS.org.
Notes :
application/pdf, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1322715801
Document Type :
Electronic Resource