Improving BERT-Based Noisy Text Classification with Knowledge of the Data domain

Published in The 6th Workshop on Noisy User-generated Text (WNUT 2020), 2020

Recommended citation: @inproceedings{doan-bao-etal-2020-sunbear, title = "{S}un{B}ear at {WNUT}-2020 Task 2: Improving {BERT}-Based Noisy Text Classification with Knowledge of the Data domain", author = "Doan Bao, Linh and Nguyen, Viet Anh and Pham Huu, Quang", booktitle = "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wnut-1.73", doi = "10.18653/v1/2020.wnut-1.73", pages = "485--490", } https://aclanthology.org/2020.wnut-1.73/

Abstract

This paper proposes an improved custom model for WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets. We experiment with the effectiveness of fine-tuning methodologies for the state-of-the-art language model RoBERTa, and make a preliminary instantiation of this formal model for text classification approaches. With appropriate training techniques, our model achieves a 0.9218 F1-score on the public validation set, and the ensemble version settles at the top-9 F1-score (0.9005) and top-2 recall (0.9301) on the private test set.
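For readers unfamiliar with the metrics quoted above, the F1-score and recall for this binary task (informative vs. not informative) are derived from true/false positives and negatives. A minimal sketch in plain Python (the labels below are illustrative, not the paper's data):

```python
# Compute precision, recall, and F1 for binary labels
# (1 = informative tweet, 0 = not informative).
# Example labels are illustrative only, not taken from the WNUT-2020 dataset.

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(p, r, f1)  # → 0.75 0.75 0.75
```

The task's leaderboard ranks systems by F1 on the private test set, which is why the paper reports both the ensemble's F1 and its recall separately.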

Citation

@inproceedings{doan-bao-etal-2020-sunbear,
    title = "{S}un{B}ear at {WNUT}-2020 Task 2: Improving {BERT}-Based Noisy Text Classification with Knowledge of the Data domain",
    author = "Doan Bao, Linh and Nguyen, Viet Anh and Pham Huu, Quang",
    booktitle = "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wnut-1.73",
    doi = "10.18653/v1/2020.wnut-1.73",
    pages = "485--490",
}