Working Notes for Norwegian BERT-Like Models

Report on the creation of FinBERT: https://arxiv.org/pdf/1912.07076.pdf

Working NVIDIA implementation workflow on Saga

Available Bokmål Text Corpora

Special stuff for Norsk Aviskorpus

/cluster/projects/nn9447k/andreku/norbert_corpora/NAK/

1. Post-2011 archives contain XML files, one document per file, UTF-8 encoding. A simple XML reader will extract text from them easily. No problems here.
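A minimal extraction sketch using Python's standard ElementTree; the exact element structure of the NAK XML files is not described here, so collecting all text fragments is only an assumption about what is needed:

    import xml.etree.ElementTree as ET

    def extract_text(path):
        """Collect the text content of one post-2011 NAK XML file (one document per file, UTF-8)."""
        root = ET.parse(path).getroot()
        # itertext() walks the tree and yields every text fragment in document order
        return " ".join(fragment.strip() for fragment in root.itertext() if fragment.strip())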

2. For years up to 2005 ('1998-2011/1/' subdirectory), the text is in one-token-per-line format. Special delimiters signal the beginning of a new document and provide the URLs. We will have to decide how exactly to convert this to running text; one option is the Moses de-tokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/detokenizer.perl), as sketched below.
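A minimal sketch of that route, assuming tokens come one per line and the document delimiters have already been handled; the script path and the language flag are placeholders, not settled choices:

    import subprocess

    def detokenize(token_lines, moses_script="detokenizer.perl"):
        """Join one-token-per-line text into running text via the Moses de-tokenizer."""
        raw = " ".join(line.strip() for line in token_lines if line.strip())
        result = subprocess.run(
            ["perl", moses_script, "-l", "no"],  # 'no' is assumed; check which language codes the script supports
            input=raw, capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()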

3. Everything up to and including 2011 ('1998-2011/' subdirectory) is in the ISO 8859-1 encoding ('Latin-1'). The '1998-2011/3' subdirectory contains XML files which are also in 8859-1, although some of them falsely claim (in their headers) to be UTF-8. Everything must be converted to UTF-8 before any other pre-processing.
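A minimal re-encoding sketch (file names are placeholders; the same can be done with the iconv command-line tool). The false encoding declarations inside the XML headers would still need to be corrected separately:

    def latin1_to_utf8(src_path, dst_path):
        """Re-encode an ISO 8859-1 ('Latin-1') file as UTF-8."""
        with open(src_path, "r", encoding="iso-8859-1") as src, \
             open(dst_path, "w", encoding="utf-8") as dst:
            for line in src:
                dst.write(line)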

Preprocessing and Tokenization

The SentencePiece library finds 157 unique characters in the Norwegian Wikipedia dump.
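As a sanity check independent of SentencePiece, the character inventory of a plain-text dump can be counted directly; a small sketch (the file name is a placeholder):

    from collections import Counter

    def character_inventory(path):
        """Count the distinct characters in a plain-text corpus file."""
        counts = Counter()
        with open(path, encoding="utf-8") as corpus:
            for line in corpus:
                counts.update(line.rstrip("\n"))
        return counts

    # inventory = character_inventory("no_wiki.txt")  # placeholder file name
    # print(len(inventory), "unique characters")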

Should we assume that the input to the trained model will be tokenized text (punctuation marks separated from words) or not?

This is a matter of balancing the needs of a naive user (who wants to avoid any pre-processing) against those of a computational linguist (who arguably wants more linguistically meaningful tokens in the output).

Training input file format

1. One sentence per line. These should ideally be actual sentences, not entire paragraphs or arbitrary spans of text, because BERT uses the sentence boundaries for its "next sentence prediction" task.

2. Blank lines between documents. Document boundaries are needed so that the "next sentence prediction" task does not cross from one document into another. A small sketch of writing this format follows below.
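Assuming documents are already available as lists of sentence strings (sentence splitting itself is a separate step), a minimal writer for this format could look as follows; the function name is only illustrative:

    def write_training_file(documents, path):
        """documents: an iterable of documents, each given as a list of sentence strings.
        Writes one sentence per line and a blank line between documents."""
        with open(path, "w", encoding="utf-8") as out:
            for doc in documents:
                for sentence in doc:
                    out.write(sentence.strip() + "\n")
                out.write("\n")  # blank line marks the document boundary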

Evaluation

Do we have Norwegian test sets available for the typical NLP tasks on which to evaluate our NorBERT?

Eosc/norbert/benchmark