NorBERT: Bidirectional Encoder Representations from Transformers
Training corpus
We use clean training corpora with ordered sentences:
- Norsk Aviskorpus (NAK); 1.7 billion words;
- Bokmål Wikipedia; 160 million words;
- Nynorsk Wikipedia; 40 million words;
In total, this comprises about two billion word tokens in Bokmål and Nynorsk combined; thus, this is a joint model. Separate Bokmål and Nynorsk models are planned for the future as well.
Preprocessing
1. Wikipedia texts were extracted using segment_wiki.
2. In NAK, the texts for years up to 2005 are in a one-token-per-line format, with special delimiters marking the beginning of each new document and providing its URL. We converted these files to running text with a custom de-tokenizer.
3. In NAK, everything up to and including 2011 is in the ISO 8859-1 ('Latin-1') encoding. These files were converted to UTF-8 before any other pre-processing.
4. The resulting corpus was sentence-segmented using Stanza. We left blank lines between documents (and between sections in the case of Wikipedia) so that the "next sentence prediction" task does not cross document boundaries; a minimal sketch of steps 3 and 4 follows this list.
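
The following is a minimal Python sketch of steps 3 and 4 only, not the actual NorBERT preprocessing scripts. The "nb" (Bokmål) Stanza language code, the helper names, and all file paths are assumptions for illustration.

<pre>
# Illustrative sketch only, not the actual NorBERT preprocessing code.
# Step 3: re-encode a pre-2012 NAK file from ISO 8859-1 (Latin-1) to UTF-8.
# Step 4: sentence-segment documents with Stanza, writing one sentence per
# line and a blank line between documents.
import stanza

def latin1_to_utf8(src_path: str, dst_path: str) -> None:
    """Re-encode a Latin-1 text file as UTF-8."""
    with open(src_path, encoding="iso-8859-1") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(src.read())

def segment_documents(documents, out_path: str) -> None:
    """Write one sentence per line, with blank lines as document boundaries."""
    stanza.download("nb")  # download the Bokmål models once (assumed code "nb")
    nlp = stanza.Pipeline(lang="nb", processors="tokenize")
    with open(out_path, "w", encoding="utf-8") as out:
        for text in documents:
            for sentence in nlp(text).sentences:
                out.write(sentence.text + "\n")
            out.write("\n")  # keeps next sentence prediction within a document
</pre>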
Vocabulary
The model's vocabulary has 30 000 cased entries that retain diacritics. It was generated from raw text, without, for example, separating punctuation from word tokens. This means raw text can be fed directly into NorBERT.
The vocabulary was generated using the SentencePiece algorithm and the Tokenizers library (code: https://github.com/ltgoslo/NorBERT/blob/main/tokenization/spiece_tokenizer.py). The resulting Tokenizers model (https://github.com/ltgoslo/NorBERT/blob/main/vocabulary/norwegian_sentencepiece_vocab_30k.json) was converted (https://github.com/ltgoslo/NorBERT/blob/main/tokenization/sent2wordpiece.py) to the standard BERT WordPiece format (https://github.com/ltgoslo/NorBERT/blob/main/vocabulary/norwegian_wordpiece_vocab_30k.txt).
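
As a hedged illustration of this step (the authoritative settings are in the linked spiece_tokenizer.py), the Tokenizers library can train a SentencePiece-style model directly on raw text. The input file name and the special-token list below are assumptions, and the SentencePieceBPETokenizer class is used only as an example of the library's interface.

<pre>
# Illustrative sketch only; the actual NorBERT vocabulary was built with the
# linked spiece_tokenizer.py script. Input file and special tokens are
# placeholders.
from tokenizers import SentencePieceBPETokenizer

tokenizer = SentencePieceBPETokenizer()
tokenizer.train(
    files=["norwegian_raw_text.txt"],  # raw, untokenized, cased text
    vocab_size=30000,                  # matches the 30 000-entry vocabulary
    special_tokens=["<unk>", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
# Serialize the Tokenizers model; it can later be converted to the
# BERT WordPiece format as described above.
tokenizer.save("norwegian_sentencepiece_vocab_30k.json")
</pre>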
NorBERT model
Configuration
NorBERT corresponds in its configuration to Google's BERT-Base Cased for English, with 12 layers and a hidden size of 768. Configuration file: https://github.com/ltgoslo/NorBERT/blob/main/norbert_config.json
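
As a rough sketch, an equivalent configuration can be expressed with the Hugging Face transformers library; the attention-head and feed-forward sizes below are the standard BERT-Base values and are assumptions here, the linked configuration file being authoritative.

<pre>
# Sketch of a BERT-Base-sized configuration; see norbert_config.json for the
# authoritative values.
from transformers import BertConfig

config = BertConfig(
    vocab_size=30000,        # 30k cased vocabulary described above
    hidden_size=768,         # hidden size stated above
    num_hidden_layers=12,    # 12 layers stated above
    num_attention_heads=12,  # standard BERT-Base value (assumption)
    intermediate_size=3072,  # standard BERT-Base value (assumption)
)
print(config)
</pre>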
Training overview
NorBERT was trained on Saga (https://documentation.sigma2.no/hpc_machines/saga.html), a Norwegian academic HPC system. Most of the time, training was distributed across 4 compute nodes and 16 NVIDIA P100 GPUs, and it took approximately 3 weeks. Instructions for reproducing the training setup with EasyBuild: http://wiki.nlpl.eu/index.php/Eosc/pretraining/nvidia
Training code
Following the creators of FinBERT (https://github.com/TurkuNLP/FinBERT), we employed the BERT implementation by NVIDIA, which allows relatively fast multi-node, multi-GPU training.
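
For orientation only, the following is a generic PyTorch sketch of the distributed data-parallel pattern that multi-node, multi-GPU pretraining relies on. It is not NorBERT's actual training code (that is NVIDIA's BERT implementation); the model, data, and hyperparameters are placeholders.

<pre>
# Generic distributed data-parallel sketch, not the NVIDIA BERT code used for
# NorBERT. Launch with e.g.: torchrun --nnodes=4 --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    dist.init_process_group(backend="nccl")        # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])     # set by the launcher
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(768, 768).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                            # placeholder training loop
        batch = torch.randn(8, 768, device=local_rank)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()        # gradients are all-reduced across all GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
</pre>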