''These are just working notes. See the [[Vectors/norlm/norbert|official NorBERT release announcement]].''

= Working Notes for Norwegian BERT-Like Models =

Report on the creation of FinBERT: https://arxiv.org/pdf/1912.07076.pdf

[http://wiki.nlpl.eu/index.php/Eosc/pretraining/nvidia Working NVIDIA implementation workflow on Saga]

[https://github.uio.no/andreku/NorBERT NorBERT tools]

= Status for the joint model =

# Training corpus: '''prepared'''
# Training corpus [https://github.uio.no/andreku/NorBERT/blob/master/sentence_segment.slurm segmentation]: '''complete'''
# SentencePiece vocabulary: '''[https://github.uio.no/andreku/NorBERT/tree/master/vocabulary created]'''
# TF Records for Phase 1 (sequence length 128): '''complete''' (''/cluster/projects/nn9851k/andreku/norbert_data/norbert128/'')
# Training for Phase 1: '''complete'''
# TF Records for Phase 2 (sequence length 512): '''complete''' (''/cluster/projects/nn9851k/andreku/norbert_data/norbert512/'')
# Training for Phase 2: '''in progress'''

= Available Bokmål Text Corpora =

== We use ==
*[https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-4/ Norsk Aviskorpus]; 1.7 billion words; sentences are ordered; clean;
*[https://dumps.wikimedia.org/nowiki/latest/ Norwegian Wikipedia]; 160 million words; sentences are ordered; clean (more or less).

NAK Bokmål: 712 145 669 word tokens

NAK Nynorsk: 47 180 985 word tokens

NAK unspecified: 1 119 744 725 word tokens

We start by training a joint BERT model. Meanwhile, we will run [https://github.com/saffsd/langid.py langid] on the unspecified texts so that separate models can be trained later.
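
As a rough sketch of that step (file names are made up for illustration; the actual pipeline may differ), langid.py can be restricted to the two relevant language codes:

<pre>
# Hypothetical sketch: split the "unspecified" NAK texts into Bokmål and
# Nynorsk with langid.py. File names are illustrative only.
import langid

# Restrict the classifier to the two codes we care about.
langid.set_languages(['nb', 'nn'])

with open('nak_unspecified.txt', encoding='utf-8') as infile, \
     open('nak_bokmaal.txt', 'w', encoding='utf-8') as nb_out, \
     open('nak_nynorsk.txt', 'w', encoding='utf-8') as nn_out:
    for line in infile:
        lang, score = langid.classify(line)
        (nb_out if lang == 'nb' else nn_out).write(line)
</pre>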

== We currently do not use ==
*[https://www.hf.uio.no/iln/english/about/organization/text-laboratory/projects/nowac/index.html noWAC]; 700 million words; sentences are ordered; semi-clean;
*[https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1989# CommonCrawl from CoNLL 2017]; 1.3 billion words; sentences are shuffled; not clean;
*[https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-34/ NB Digital]; 200 million words; sentences are ordered; semi-clean (OCR quality varies).

= Special handling of Norsk Aviskorpus =
''/cluster/projects/nn9851k/andreku/norbert_corpora/NAK/''

1. Post-2011 archives contain XML files, one document per file, in UTF-8 encoding. A simple XML reader extracts the text from them easily. No problems here.
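
A minimal sketch of such a reader (it does not assume any particular element names; itertext() simply concatenates all text nodes of the file):

<pre>
# Hypothetical sketch: extract plain text from one post-2011 NAK XML file.
# itertext() walks every text node, so no knowledge of the schema is needed.
import xml.etree.ElementTree as ET

def extract_text(path):
    root = ET.parse(path).getroot()
    return ' '.join(chunk.strip() for chunk in root.itertext() if chunk.strip())

print(extract_text('some_article.xml'))  # file name is illustrative
</pre>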
  
2. For the years up to 2005 (the ''1998-2011/1/'' subdirectory), the text is in one-token-per-line format. Special delimiters signal the beginning of each new document and provide its URL. '''We had to decide how exactly to convert this back to running text''' (the [https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/detokenizer.perl Moses de-tokenizer]?). '''DONE''' using a self-made tokenizer (in the repo).
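
The gist of that conversion, as a simplified sketch (the real script in the repo also handles the document delimiters and URLs, which are not reproduced here):

<pre>
# Hypothetical sketch of detokenization: glue one-token-per-line input back
# into running text, attaching closing punctuation to the preceding word.
import re

def detokenize(tokens):
    text = ''
    for tok in tokens:
        if re.fullmatch(r"[.,:;!?)»]", tok):
            text += tok               # no space before closing punctuation
        elif text.endswith('(') or text.endswith('«') or not text:
            text += tok               # no space after an opening bracket/quote
        else:
            text += ' ' + tok
    return text

print(detokenize(['Dette', 'er', 'en', 'setning', '.']))
# -> Dette er en setning.
</pre>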
  
3. Everything up to and including 2011 (the ''1998-2011/'' subdirectory) is in the ISO 8859-1 encoding ('Latin-1'). The ''1998-2011/3'' subdirectory contains XML files which are in 8859-1 as well, although some of them falsely claim (in their headers) to be UTF-8. Everything must be converted to UTF-8 before any other pre-processing. '''DONE'''
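
The conversion itself is trivial; a sketch with an illustrative file name:

<pre>
# Re-encode a Latin-1 file as UTF-8 (file names are illustrative).
with open('19990101.txt', encoding='iso-8859-1') as src, \
     open('19990101.utf8.txt', 'w', encoding='utf-8') as dst:
    dst.write(src.read())
</pre>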

= Preprocessing and Tokenization =

1. Check quotes after detokenization. '''DONE'''

2. Obtain the Nynorsk Wikipedia (since we are going to train a joint BERT model). '''DONE'''

3. Find out how Stanza evaluates sentence segmentation; compare its performance with UDPipe and Punkt. ('''POSTPONED''')

4. Train a joint Stanza sentence segmenter on Nynorsk and Bokmål, '''if necessary''' (are embeddings needed?). ('''POSTPONED''')

5. Sentence-segment the corpora. '''DONE'''

== Vocabulary ==
 
The [https://github.com/google/sentencepiece SentencePiece] library finds '''157''' unique characters in the Norwegian Wikipedia dump.
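
A hedged sketch of training such a vocabulary with the SentencePiece Python API; the file name, vocabulary size and other settings are illustrative, not the exact NorBERT configuration (which lives in the vocabulary directory linked above):

<pre>
# Hypothetical sketch: train a SentencePiece model on sentence-per-line text.
# File names, vocab_size and the other flags are illustrative only.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input='wiki_sentences.txt',      # one sentence per line
    model_prefix='norbert_sp',
    vocab_size=30000,
    character_coverage=1.0,          # keep all observed characters
)

sp = spm.SentencePieceProcessor(model_file='norbert_sp.model')
print(sp.encode('Dette er en setning.', out_type=str))
</pre>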
  
Should we assume that the input to the trained model will be tokenized text (punctuation marks separated from words) or not?

This is a matter of balancing the needs of a '''naive user''' (who wants to avoid any pre-processing) against the needs of a '''computational linguist''' (who arguably wants more linguistically meaningful tokens in the output).

'''We decided that the default model is trained on raw text, but, if time allows, a "tokenized" model should be trained for comparison.'''

== Training input file format ==

1. One sentence per line. These should ideally be actual sentences, not entire paragraphs or arbitrary spans of text, because BERT uses sentence boundaries for the "next sentence prediction" task. Sentence splitting is done with [https://stanfordnlp.github.io/stanza/performance.html Stanza] (see the sketch after this list).

2. Blank lines between documents. Document boundaries are needed so that the "next sentence prediction" task does not span across documents.

3. The text files are converted to TFRecords. Each TFRecord is about 60 times larger than the original gzipped text file; we need about 300 GB to store the TFRecords for sequence length 128 for our full training corpus.
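
A hedged sketch of producing exactly this format with Stanza (the language code, document list and file name are illustrative; the actual job is the segmentation script linked in the status list above):

<pre>
# Hypothetical sketch: sentence-segment documents into the BERT input format
# (one sentence per line, a blank line between documents).
import stanza

stanza.download('nb', processors='tokenize')
nlp = stanza.Pipeline(lang='nb', processors='tokenize')

documents = [
    'Dette er det første dokumentet. Det har to setninger.',
    'Her kommer dokument nummer to.',
]

with open('train_input.txt', 'w', encoding='utf-8') as out:
    for raw in documents:
        for sentence in nlp(raw).sentences:
            out.write(sentence.text + '\n')
        out.write('\n')   # blank line marks a document boundary
</pre>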

= Training details =

Batch size per GPU: '''96''' (EngBERT: 256, FinBERT: 140)

Global batch size (8 GPUs): '''768''' (EngBERT: 4096, FinBERT: 1120)

Target epochs over the full corpus: '''3''' (EngBERT: 40, FinBERT: 3)
  
Target training steps: '''795 000''' (EngBERT: 1 000 000, FinBERT: 1 000 000)

In the end, the model will have been trained on approximately '''680M''' sentences (EngBERT: 4B, FinBERT: 1.1B).

Time for one epoch: '''133 hours''' (about 6 days)
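
A quick back-of-the-envelope check of the figures above (nothing new is measured here; the single-epoch time is simply extrapolated, which ignores that Phase 2 steps at sequence length 512 are slower):

<pre>
# Sanity-check the numbers stated above.
per_gpu_batch = 96
num_gpus = 8
print(per_gpu_batch * num_gpus)   # 768 == the stated global batch size

hours_per_epoch = 133
print(hours_per_epoch / 24)       # ~5.5 days, reported above as about 6 days
print(3 * hours_per_epoch / 24)   # ~16.6 days if all 3 epochs ran at this speed
</pre>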
  
 
= Evaluation =

Do we have Norwegian test sets available for typical NLP tasks to evaluate our NorBERT? Please see [[Eosc/norbert/benchmark]] for a discussion.