Abstract: This paper presents a comprehensive evaluation of three language models: RoBERTa, SlovakBERT, and BERT-Multilingual, using datasets of varying sizes (ranging from 50,000 to 1,000,000 ...