Eosc/norbert

Working Notes for Norwegian BERT-Like Models

Report on the creation of FinBERT: https://arxiv.org/pdf/1912.07076.pdf

Working NVIDIA implementation workflow on Saga

Available Bokmål Text Corpora

Preprocessing and Tokenization

The SentencePiece library finds 157 unique characters in the Norwegian Wikipedia dump.
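
For reference, such a vocabulary can be trained with the SentencePiece Python API. The sketch below is a minimal illustration, not the actual NorBERT training command: the input path, vocabulary size, and model type are all assumptions.

 import sentencepiece as spm
 
 # Minimal SentencePiece training sketch; every parameter here is an
 # assumption, not the setting actually used for NorBERT.
 spm.SentencePieceTrainer.train(
     input="no_wiki.txt",        # hypothetical path to the extracted Wikipedia text
     model_prefix="norbert_sp",  # writes norbert_sp.model and norbert_sp.vocab
     vocab_size=30000,
     character_coverage=1.0,     # keep all characters found in the dump
     model_type="unigram",       # SentencePiece default; BERT models often use WordPiece instead
 )

Figures like the 157 unique characters come from the statistics SentencePiece reports during this training step.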

Input file format:

1. One sentence per line. These should ideally be actual sentences, not entire paragraphs or arbitrary spans of text, because BERT uses the sentence boundaries for the "next sentence prediction" task.

2. Blank lines between documents. Document boundaries are needed so that the "next sentence prediction" task does not span across documents (see the conversion sketch after this list).
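
A minimal conversion sketch, assuming the raw documents are available as plain-text strings and using NLTK's Punkt sentence tokenizer (which ships a Norwegian model) as a stand-in for whatever sentence splitter is ultimately chosen:

 import nltk
 from nltk.tokenize import sent_tokenize
 
 nltk.download("punkt")  # Punkt sentence models, including Norwegian
 
 def write_pretraining_input(documents, out_path):
     # Write one sentence per line; a blank line separates documents.
     with open(out_path, "w", encoding="utf-8") as out:
         for doc in documents:
             for sentence in sent_tokenize(doc, language="norwegian"):
                 out.write(sentence.strip() + "\n")
             out.write("\n")  # document boundary
 
 # Hypothetical usage with two tiny documents:
 write_pretraining_input(
     ["Dette er en setning. Her er en til.", "Nytt dokument her."],
     "pretraining_input.txt",
 )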

Evaluation

Do we have Norwegian test sets for typical NLP tasks that we can use to evaluate our NorBERT?