= Background =

This page provides an informal, technically-oriented survey over available (and commonly used) architectures and implementations for large-scale pre-training (and fine-tuning) of contextualized neural language models.

The NLPL use case will install, validate, and maintain a selection of these implementations, in an automated and uniform manner, on multiple HPC systems.

= Tokenization =

There are several established tokenization workflows for large pre-trained language models.
We describe them here.

* '''ELMo''' does not use any sub-word tokenization per se.
It splits text into tokens on white space and then represents each token as a sequence of UTF-8 code units (at most 50 per token by default, 8 bits each).
The final (non-contextual) token embedding is produced by running a simple CNN over this sequence.
This naturally handles OOV words, since they are composed of the same UTF-8 code units.
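
To make this representation concrete, here is a minimal, purely illustrative Python sketch (not the actual bilm-tf batcher, which additionally reserves special begin-of-word and end-of-word symbols) of mapping each token to a fixed-length sequence of UTF-8 code units:

<syntaxhighlight lang="python">
MAX_CHARS = 50  # ELMo's default upper bound on code units per token

def token_to_char_ids(token, max_chars=MAX_CHARS, pad_id=0):
    """Encode a token as UTF-8 byte values (0-255), truncated/padded to max_chars."""
    byte_values = list(token.encode("utf-8"))[:max_chars]
    return byte_values + [pad_id] * (max_chars - len(byte_values))

sentence = "Oslo er hovedstaden i Norge"
char_id_matrix = [token_to_char_ids(token) for token in sentence.split()]
print(len(char_id_matrix), len(char_id_matrix[0]))  # 5 tokens x 50 code units
</syntaxhighlight>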

* '''BERT''' and company employ sub-word tokenization.

The original BERT uses WordPiece: an implementation of the standard character-level BPE encoding, with a form of language modeling employed to select the sub-words.

The English BERT model from Google [https://www.aclweb.org/anthology/N19-1423.pdf employs a vocabulary of 30 000 "word pieces"].

Google does not provide the code they used to learn a new WordPiece vocabulary (neither does NVIDIA). Instead, they both suggest using Google's open-source [https://github.com/google/sentencepiece SentencePiece] library.
SentencePiece is even superior to WordPiece in some respects: for example, it does not require pre-segmentation of words.
After training on a text corpus, it produces ".model" and ".vocab" files, where the former contains the character merges and the latter contains the sub-word vocabulary itself.
The SentencePiece output can be converted to a BERT-compatible vocabulary using https://github.com/spyysalo/sent2wordpiece.
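
As a minimal sketch (the corpus file name and the vocabulary size are placeholder choices), training such a model with the SentencePiece Python API looks roughly like this:

<syntaxhighlight lang="python">
import sentencepiece as spm

# Train a BPE model on a raw-text corpus (one sentence per line).
# "norwegian_corpus.txt" and vocab_size=30000 are placeholder assumptions.
spm.SentencePieceTrainer.Train(
    "--input=norwegian_corpus.txt --model_prefix=no_bpe "
    "--vocab_size=30000 --model_type=bpe"
)

# The call above writes "no_bpe.model" and "no_bpe.vocab"; the vocabulary can
# then be converted into a BERT-style vocab.txt with sent2wordpiece.
sp = spm.SentencePieceProcessor()
sp.Load("no_bpe.model")
print(sp.EncodeAsPieces("Dette er en setning."))
</syntaxhighlight>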

This is what the Turku folks did to train FinBERT. In fact, they provide a SentencePiece-generated, BERT-compatible [http://dl.turkunlp.org/nlpl-2020/vocabs/no/cased/ vocabulary trained on Norwegian Wikipedia].
We can use this vocabulary or train our own.
Note that SentencePiece does not remove diacritics from tokens, and the vocabulary provided by Turku also contains diacritics.
In their presentation at the NLPL Winter School, they hinted that this can be a problem for BERT, but I do not see why.

Finally, HuggingFace provides its own fast [https://github.com/huggingface/tokenizers Tokenizers library]. It implements:
* '''CharBPETokenizer''': the original character-level Byte-Pair Encoding; training on Norwegian Wikipedia takes 13 minutes;
* '''SentencePieceBPETokenizer''': a BPE implementation from SentencePiece; training on Norwegian Wikipedia takes 25 minutes;
* '''BertWordPieceTokenizer''': a reimplementation of the BERT WordPiece tokenizer; removes diacritics; training on Norwegian Wikipedia takes 15 minutes;
* '''ByteLevelBPETokenizer''': the byte-level version of BPE (recommended by HuggingFace, because it ensures that all tokens will always be known); training on Norwegian Wikipedia takes 15 minutes.

It seems that the Tokenizers library is the best choice, being well integrated into the widely used Transformers package.
As for the particular tokenizer for a future Norwegian BERT, '''SentencePieceBPETokenizer''' looks like the best option.
The problem with '''BertWordPieceTokenizer''' is that it removes diacritics, which are critical for Norwegian. It remains to be seen, however, whether diacritics actually cause problems for BERT training.
The problem with '''ByteLevelBPETokenizer''' is that its output can be difficult to examine and interpret, since it works with bytes, not characters. Workarounds can be designed, but this would introduce an additional layer of non-transparency, especially for those who are not IT-savvy.
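
A minimal sketch of training the preferred '''SentencePieceBPETokenizer''' with the Tokenizers library is shown below; the corpus file, vocabulary size and special tokens are placeholder assumptions, not settings from an actual NLPL run:

<syntaxhighlight lang="python">
from tokenizers import SentencePieceBPETokenizer

# Train a SentencePiece-style BPE vocabulary on a plain-text corpus.
tokenizer = SentencePieceBPETokenizer()
tokenizer.train(
    ["no_wikipedia.txt"],          # placeholder corpus file
    vocab_size=30000,              # placeholder vocabulary size
    min_frequency=2,
    special_tokens=["<unk>"],
)

# Inspect how Norwegian text (including diacritics) is segmented.
print(tokenizer.encode("Blåbærsyltetøy er godt.").tokens)

# Recent versions of the library can serialize the whole tokenizer to one JSON file.
tokenizer.save("norwegian_spbpe.json")
</syntaxhighlight>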

As for diacritics, the Tokenizers library controls their removal via the <code>[https://github.com/huggingface/tokenizers/blob/master/bindings/python/py_src/tokenizers/normalizers/__init__.pyi strip_accents]</code> parameter. Thus, if one passes accented text to the English BERT WordPiece vocabulary, by default it will be preprocessed by stripping all accents (thus mimicking the behavior of the original BERT). When using the vocabulary from Multilingual BERT, one has to explicitly set the <code>strip_accents</code> parameter to '''False'''.
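
For example (the vocabulary file path is a placeholder), a cased, diacritics-preserving setup would switch off both lowercasing and accent stripping:

<syntaxhighlight lang="python">
from tokenizers import BertWordPieceTokenizer

# "vocab.txt" stands for an existing WordPiece vocabulary file.
# By default, lowercasing is on and accents are stripped along with it.
tokenizer = BertWordPieceTokenizer(
    "vocab.txt",
    lowercase=False,
    strip_accents=False,
)
print(tokenizer.encode("blåbær").tokens)
</syntaxhighlight>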

* '''RoBERTa''' uses a byte-level BPE tokenizer (seemingly identical to the one used in GPT-2 and to the '''ByteLevelBPETokenizer''' from HuggingFace) with a vocabulary of 50 000 sub-word units.

* '''ELECTRA''' models from Google simply use the vocabulary from Google's BERT (thus, WordPiece). They should work with other sub-word tokenization schemes as well.

= Design =

Which systems to target? At least two of the following would seem desirable: Saga, Puhti, eX3, and the quad-V100 Power9 node at UiO.

Inclusion of Saga would allow comparison to (older) P100 cards; they do support [https://developer.nvidia.com/blog/mixed-precision-programming-cuda-8/ half-precision operations] and [https://aistein.github.io/dlprof/2018/05/07/xla-optimizations.html XLA].
The Power9 node may be interesting because of its non-Intel CPU architecture.
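
For reference, a minimal TensorFlow 1.x sketch of switching on the two features mentioned above (automatic mixed precision and XLA); this is illustrative only, not a tuned benchmark recipe:

<syntaxhighlight lang="python">
import tensorflow as tf  # TensorFlow 1.14/1.15 APIs assumed

# Enable XLA JIT compilation at the session level.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# Wrap the optimizer so that eligible ops run in float16 with automatic loss
# scaling (the automatic mixed precision graph rewrite, available since TF 1.14).
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)
optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)

# ... build the model graph, then run it with tf.Session(config=config).
</syntaxhighlight>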
  
 
= BERT =

Bidirectional Encoder Representations from Transformers (BERT) is a deep language model jointly conditioned on both left and right context in all layers. It is based on the Transformer neural architecture (Devlin et al 2019).

It is the de facto standard for contextualized representations in modern NLP.

== Available implementations ==

* [https://github.com/google-research/bert Reference implementation in TensorFlow].

Requirements: 1.11 <= TensorFlow < 2.0.

Developed by Google.

''Multi-GPU training'': Not officially supported, but supposedly can be achieved with [https://www.tensorflow.org/tutorials/distribute/keras Distributed training] or with [https://github.com/google-research/bert/issues/743 Horovod].

''Multi-node training'': Not officially supported, but supposedly can be achieved with [https://www.tensorflow.org/tutorials/distribute/keras Distributed training] or with [https://github.com/google-research/bert/issues/743 Horovod].

''Training time'': training on 3.3 billion words for 40 epochs takes "four days on 4 to 16 Cloud TPUs".
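
The Horovod route mentioned above would follow the standard Horovod-for-TensorFlow pattern; a minimal, hedged sketch (not taken from the BERT code base itself) looks roughly like this:

<syntaxhighlight lang="python">
import tensorflow as tf            # 1.x API, as required by the reference BERT
import horovod.tensorflow as hvd   # Horovod, typically with NCCL underneath

hvd.init()

# Pin each worker process to a single GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Scale the learning rate by the number of workers and wrap the optimizer.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4 * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer)

# Broadcast initial variables from rank 0 so that all workers start identically.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

# BERT's actual masked-LM model and input pipeline would be plugged in here,
# e.g. via tf.train.MonitoredTrainingSession(hooks=hooks, config=config).
</syntaxhighlight>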

* [https://huggingface.co/transformers/model_doc/bert.html HuggingFace Transformers implementation].

Can train either with TensorFlow or with PyTorch.
Requirements: Python >=3.6, TensorFlow >= 2.0, PyTorch >=1.3.1.

Developed by HuggingFace (no corporations involved :)).

''Multi-GPU training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch+NCCL].

''Multi-node training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch+NCCL].

''Training time'': training on 160 million words for 2 epochs takes 8-9 days on 4 NVIDIA P100 GPUs.
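
With Transformers, pre-training from scratch essentially amounts to combining a tokenizer, a fresh <code>BertConfig</code> and the <code>Trainer</code> API; a minimal PyTorch sketch (file paths and hyper-parameters are placeholders) could look like this:

<syntaxhighlight lang="python">
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

# Placeholder vocabulary and corpus files; a real run would use a full corpus.
tokenizer = BertTokenizerFast("norwegian_vocab.txt", do_lower_case=False)
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                file_path="norwegian_corpus.txt",
                                block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="bert_no",
                                         per_device_train_batch_size=32,
                                         num_train_epochs=1),
                  data_collator=collator,
                  train_dataset=dataset)
trainer.train()
</syntaxhighlight>

Multi-GPU and multi-node runs then go through the regular PyTorch distributed (NCCL) machinery mentioned above.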

* [https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT NVIDIA BERT for TF].

Adds multi-node and multi-GPU support, XLA, and mixed precision; recommended by our [https://github.com/TurkuNLP/FinBERT/blob/master/nlpl_tutorial/training_bert.md role models].
Requirements: TensorFlow >= 1.11, dllogger.

Developed by NVIDIA.

''Multi-GPU training'': Yes, [https://github.com/horovod/horovod/#usage TensorFlow+Horovod+NCCL].

''Multi-node training'': Yes, [https://github.com/horovod/horovod/#usage TensorFlow+Horovod], requires [https://github.com/NVIDIA/enroot Enroot] and [https://github.com/NVIDIA/pyxis Pyxis].

''Training time'': training on 3.3 billion words for 40 epochs [https://docs.nvidia.com/ngc/multi-node-bert-user-guide/ takes 3 days with 16 NVIDIA V100 GPUs] or [https://arxiv.org/pdf/1912.07076.pdf 12 days with 8 NVIDIA V100 GPUs].

* [https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT NVIDIA BERT for PyTorch].

Adds multi-node and multi-GPU support, XLA, and mixed precision.
Requirements: Docker, PyTorch NGC container from NVIDIA.

Developed by NVIDIA.

''Multi-GPU training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch+NCCL].

''Multi-node training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch+NCCL], requires [https://github.com/NVIDIA/enroot Enroot] and [https://github.com/NVIDIA/pyxis Pyxis].

''Training time'': training on 3.3 billion words for 40 epochs [https://docs.nvidia.com/ngc/multi-node-bert-user-guide/ takes 3 days with 16 NVIDIA V100 GPUs].

* [https://github.com/NVIDIA/Megatron-LM Megatron]: not yet investigated.

* [https://github.com/soskek/bert-chainer Chainer implementation].

Not of much interest to us, since it supports only inference, not training.
  
= ELMo =

Embeddings from Language Models (ELMo) use bidirectional LSTM language models to produce contextualized word token representations (Peters et al 2018).

It is the only architecture in this list that uses recurrent neural networks rather than Transformers. Despite being much less computationally demanding, it often performs on par with BERT.

== Available implementations ==

* [https://github.com/allenai/bilm-tf Reference TensorFlow implementation].

Requirements: Python >=3.5, 1.2 < TensorFlow < 1.13 (later versions produce too many deprecation warnings), h5py.

Developed (but not much maintained) by [https://allenai.org/ Allen AI].

''Multi-GPU training'': Yes (TensorFlow native support).

''Multi-node training'': unknown (arguably not required for ELMo).

''Training time'': one epoch over 100 million word tokens takes 3 hours with 2 NVIDIA P100 GPUs (batch size 192). 3 epochs already give reasonable performance in NLP tasks.

* [https://github.com/ltgoslo/simple_elmo_training LTG implementation].

Based on the reference implementation, but with improved data loading, hyper-parameter handling, and the code updated to more recent versions of TensorFlow.
Requirements: Python >=3.5, 1.15 <= TensorFlow < 2.0 (a 2.0 version is planned), h5py, smart_open.

A [http://wiki.nlpl.eu/index.php/Vectors/elmo/tutorial tutorial] is available. A PyPI module is planned.

Developed by [https://www.mn.uio.no/ifi/english/research/groups/ltg/ UiO LTG].

''Multi-GPU training'': Yes (TensorFlow native support).

''Multi-node training'': unknown (arguably not required for ELMo).

''Training time'': one epoch over 100 million word tokens takes 3 hours with 2 NVIDIA P100 GPUs (batch size 192). 3 epochs already give reasonable performance in NLP tasks.

* [https://docs.allennlp.org/master/api/data/token_indexers/elmo_indexer/ PyTorch implementation in AllenNLP].

Not of much interest to us, since it supports only inference, not training.
Requirements: Python >= 3.6, 1.6 <= PyTorch < 1.7.
= RoBERTa =

Robustly Optimized BERT (RoBERTa) is a BERT variation by Facebook.
The most important changes are removing the next-sentence prediction objective and dynamically changing the masking pattern applied to the training data.
Otherwise, it is just BERT on steroids (training longer, bigger batches, longer sequences).

Interestingly, the [https://openreview.net/forum?id=SyxS0T4tvS RoBERTa paper] was rejected by ICLR 2020.

== Available implementations ==

* [https://github.com/pytorch/fairseq/tree/master/examples/roberta Reference implementation in Fairseq].

Requirements: Python >= 3.6, PyTorch >= 1.4, [https://github.com/NVIDIA/nccl NCCL].

Developed by Facebook.

''Multi-GPU training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch + NCCL].

''Multi-node training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch + NCCL].

''Training time'': not reported (essentially, they just [https://arxiv.org/abs/1907.11692 recommend training for as long as you can]).

* [https://huggingface.co/transformers/model_doc/roberta.html HuggingFace Transformers implementation].

Can train either with TensorFlow or with PyTorch.
Requirements: Python >=3.6, TensorFlow >= 2.0, PyTorch >=1.3.1.

Developed by HuggingFace.

''Multi-GPU training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch + NCCL].

''Multi-node training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch + NCCL].

''Training time'': unknown.
 
= ELECTRA =

In ELECTRA, a discriminator model tries to detect which tokens in the input were replaced by a small generator language model. It is claimed to be computationally efficient in comparison to other Transformer models (Clark et al 2019).

== Available implementations ==

* [https://github.com/google-research/electra Reference implementation in TensorFlow].

Requirements: Python 3, 1.15 <= TensorFlow < 2.0.

Developed by Google.

''Multi-GPU training'': Not supported (single-GPU training only).

''Multi-node training'': Not officially supported, but supposedly can be achieved with [https://www.tensorflow.org/tutorials/distribute/keras Distributed training] or with [https://github.com/google-research/bert/issues/743 Horovod].

''Training time'': training on 18 billion words [https://github.com/google-research/electra#quickstart-pre-train-a-small-electra-model takes 4 days on 1 NVIDIA V100 GPU].

* [https://huggingface.co/transformers/model_doc/electra.html HuggingFace Transformers implementation].

Can train either with TensorFlow or with PyTorch.
Requirements: Python >=3.6, TensorFlow >= 2.0, PyTorch >=1.3.1.

Developed by HuggingFace (well, strictly speaking, it is [https://github.com/huggingface/transformers/pull/4656 still in development]).

''Multi-GPU training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch + NCCL].

''Multi-node training'': Yes, [https://pytorch.org/docs/stable/distributed.html PyTorch + NCCL].

''Training time'': should be approximately the same as for the reference implementation, but not directly reported.