Hofmann, Valentin; Schütze, Hinrich and Pierrehumbert, Janet B.
(May 2022):
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers.
60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, May 22-27, 2022.
In: Muresan, Smaranda; Nakov, Preslav and Villavicencio, Aline (eds.):
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
Stroudsburg, PA: Association for Computational Linguistics. pp. 385-393

Abstract
We introduce FLOTA (Few Longest Token Approximation), a simple yet effective method to improve the tokenization of pretrained language models (PLMs). FLOTA uses the vocabulary of a standard tokenizer but tries to preserve the morphological structure of words during tokenization. We evaluate FLOTA on morphological gold segmentations as well as a text classification task, using BERT, GPT-2, and XLNet as example PLMs. FLOTA leads to performance gains, makes inference more efficient, and enhances the robustness of PLMs with respect to whitespace noise.
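The abstract describes FLOTA as a method that keeps a standard tokenizer's vocabulary but segments words so that morphological units are preserved. A minimal sketch of the underlying idea — greedily extracting the few longest vocabulary tokens from a word — is shown below. The function name, the placeholder character, and the toy vocabulary are illustrative assumptions, not the authors' implementation:

```python
def flota_tokenize(word, vocab, k=3):
    """Sketch of a few-longest-token approximation: repeatedly
    pull out the longest substring found in the vocabulary,
    up to k pieces, then return the pieces in word order.
    (Simplified illustration; not the paper's reference code.)"""
    pieces = []          # (start_index, token) pairs
    chars = list(word)   # mutable copy so consumed spans can be marked
    for _ in range(k):
        best = None      # (start, end) of the longest match so far
        for i in range(len(chars)):
            for j in range(len(chars), i, -1):
                sub = "".join(chars[i:j])
                if "-" in sub:
                    continue  # span crosses an already-consumed token
                if sub in vocab and (best is None or j - i > best[1] - best[0]):
                    best = (i, j)
        if best is None:
            break
        i, j = best
        pieces.append((i, "".join(chars[i:j])))
        for idx in range(i, j):
            chars[idx] = "-"  # mark these characters as consumed
    return [tok for _, tok in sorted(pieces)]

# Toy vocabulary: a morphologically aware split is recovered,
# whereas a plain greedy left-to-right tokenizer might split "underestimate"
# into less meaningful pieces.
vocab = {"under", "estim", "ate", "un", "der"}
print(flota_tokenize("underestimate", vocab))  # → ['under', 'estim', 'ate']
```

The longest-match-first strategy is what lets the method favour whole morphemes already present in the vocabulary over shorter, less meaningful subword fragments.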
| Document type: | Conference contribution (paper) |
|---|---|
| EU Funded Grant Agreement Number: | 740516 |
| EU projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Cross-faculty institutions: | Centrum für Informations- und Sprachverarbeitung (CIS) |
| Subject areas: | 000 Computer science, information, general works > 000 Computer science, knowledge, systems; 400 Language > 410 Linguistics |
| URN: | urn:nbn:de:bvb:19-epub-92203-0 |
| Place: | Stroudsburg, PA |
| Language: | English |
| Document ID: | 92203 |
| Date published on Open Access LMU: | 27 May 2022, 10:11 |
| Last modified: | 27 May 2022, 10:11 |