
Abstract

Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model.
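As a rough illustration of the joint objective described in the abstract, the sketch below combines the fact-checking model's cross-entropy on the attack's target class with an auxiliary NLI model's cross-entropy on the entailment class, and scores HotFlip replacement candidates from the loss gradient. The function names, the weighting term `alpha`, and the candidate count are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def joint_trigger_loss(fc_logits, nli_logits, target_class, entail_class, alpha=1.0):
    """Joint objective sketched from the abstract: fact-checking loss on the
    attack's target class plus the NLI model's loss on the entailment class.
    `alpha` is an assumed weighting knob, not a value from the paper."""
    batch = fc_logits.size(0)
    fc_loss = F.cross_entropy(fc_logits, torch.full((batch,), target_class, dtype=torch.long))
    nli_loss = F.cross_entropy(nli_logits, torch.full((batch,), entail_class, dtype=torch.long))
    return fc_loss + alpha * nli_loss

def hotflip_candidates(trigger_grads, embedding_matrix, k=10):
    """First-order HotFlip step: approximate the loss change from swapping
    each trigger token for every vocabulary token via a dot product with
    the loss gradient, and return the k best replacements per position."""
    # trigger_grads: (trigger_len, emb_dim); embedding_matrix: (vocab_size, emb_dim)
    scores = -torch.matmul(trigger_grads, embedding_matrix.T)  # lower loss = higher score
    return torch.topk(scores, k, dim=1).indices  # (trigger_len, k) candidate token ids
```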

Initial Data Collection and Normalization

The evidence is the gold evidence from the FEVER dataset for REFUTE and SUPPORT claims. For NEI claims, we extract evidence sentences with the system described in: Christopher Malon. Team Papelo: Transformer Networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109–113.
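The selection rule above reduces to a small branch per claim. The sketch below is only a schematic of that rule: the record layout is assumed, and `retrieve_evidence` is a hypothetical stand-in for the Team Papelo sentence-retrieval system.

```python
def build_example(claim, label, gold_evidence, retrieve_evidence):
    """Pair each claim with evidence following the rule described above:
    gold FEVER evidence for SUPPORTS/REFUTES claims, retrieved evidence
    for NEI claims. `retrieve_evidence` is a hypothetical callable
    standing in for the Team Papelo retrieval system."""
    if label in ("SUPPORTS", "REFUTES"):
        evidence = gold_evidence
    else:  # NOT ENOUGH INFO
        evidence = retrieve_evidence(claim)
    return {"claim": claim, "evidence": evidence, "label": label}
```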

Dataset for training classification-only fact checking with claims from the FEVER dataset. This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2020.
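If the dataset is published on the Hugging Face Hub, loading it would look roughly like the following; the repository id and the field names are assumptions to adapt to the actual hosting location.

```python
from datasets import load_dataset

# Assumed Hub id; replace with the dataset's actual location.
dataset = load_dataset("copenlu/fever_gold_evidence")

# Inspect one training example; field names (claim/evidence/label) are assumed.
print(dataset["train"][0])
```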
