Background

This page provides a recipe for large-scale pre-training of a BERT neural language model, using the high-efficiency NVIDIA BERT implementation (which is based on TensorFlow and NCCL, among other components, in contrast to the NVIDIA Megatron code).

Software Installation

Prerequisites

We assume that EasyBuild and Lmod are already installed on the host machine.

We also assume that core software (compilers, most toolchains, CUDA drivers, etc.) is already installed system-wide, or at least that the corresponding easyconfigs are available to the system-wide EasyBuild installation.

Finally, the host machine must have an Internet connection.
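
If you are unsure whether these prerequisites are met, a quick sanity check along the following lines can help (a minimal sketch; the exact module names depend on your system):

# Lmod should be available
module --version
# EasyBuild should be provided as a system-wide module
module avail EasyBuild
# the NVIDIA drivers should be installed and the GPUs visible
nvidia-smi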

IMPORTANT NOTE FOR NVIDIA A100 GPUs

NVIDIA A100 GPUs (and any other GPUs with CUDA compute capability 8.x) are designed to work best with CUDA 11 and cuDNN 8. Applications built with CUDA 10 and cuDNN 7 can in principle run on the A100 (by JIT-compiling PTX code into GPU binary code on every run, see https://docs.nvidia.com/cuda/ampere-compatibility-guide/index.html), but NVIDIA does not recommend this.

TensorFlow supports CUDA 11 and cuDNN 8 only from TF 2.4 onwards (https://www.tensorflow.org/install/source#gpu). Earlier TensorFlow versions (and certainly TF 1) are not guaranteed to compile with CUDA 11. In practice, our attempts to do so failed, as have those of other practitioners (https://stackoverflow.com/questions/64593245/could-not-find-any-cudnn-h-matching-version-8-in-any-subdirectory). It might still be possible to build TensorFlow 1.15 with CUDA 11, but this would likely require a significant amount of tinkering.

The BERT training code below is based on the NVIDIA BERT implementation (https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT), which uses TensorFlow 1 (we have tested 1.15.2 extensively). It does not work with TensorFlow 2 without significant rewriting and is therefore bound to libraries built with CUDA 10. When this software stack is run on A100 GPUs, unexpected behavior can occur: warnings, errors and outright failures.

We are still looking for ways to cope with this, but as of now the BERT training recipe below is guaranteed to run only on NVIDIA P100 (Pascal) and V100 (Volta) GPUs.
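
To check in advance which GPU generation and CUDA toolkit you are dealing with, something like the following can be used (a sketch; nvidia-smi ships with the NVIDIA drivers, and nvcc reports the CUDA toolkit in your current environment):

# list the GPU models visible on this node (P100/V100 are fine, A100 is problematic)
nvidia-smi --query-gpu=name --format=csv,noheader
# show the CUDA toolkit version currently loaded (should be 10.x for this recipe)
nvcc --version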

Setting things up

  • Clone our repository: git clone https://source.coderefinery.org/nlpl/easybuild.git
  • The cloned directory ('easybuild') will serve as your build factory; rename it to whatever fits your setup and change into it.
  • To keep the procedure identical across different systems, we provide a custom preparation script.
  • Run it to prepare the directories and generate the path settings:

./setdir.sh

  • The script creates the file SETUP.local with the settings you will use from now on (simply run source SETUP.local after loading EasyBuild).
  • It will also print a command for your users to run if they want to be able to load your custom modules.
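
Put together, the preparation steps above might look like this (the directory name is just the default from the clone; rename it as you prefer):

# clone the build factory repository
git clone https://source.coderefinery.org/nlpl/easybuild.git
cd easybuild
# create the directories and write the SETUP.local settings file
./setdir.sh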

Building software and installing custom modules

  • Load EasyBuild (e.g., module load EasyBuild/4.3.3)
  • Load your settings: source SETUP.local
  • Check your settings: eb --show-config
  • Do a dry run of the NVIDIA BERT installation by running:

eb --robot nlpl-nvidia-bert-tf-20.06.08-gomkl-2019b-Python3.7.4.eb --dry-run

  • or eb --robot nlpl-nvidia-bert-tf-20.06.08-foss-2019b-Python3.7.4.eb --dry-run if your CPU architecture is different from Intel (for example, AMD)
  • EasyBuild will show the list of required modules, marking those which have to be installed from scratch (by downloading and building the corresponding software).
  • If no warnings or errors were shown, build everything required by NVIDIA BERT:

eb --robot nlpl-nvidia-bert-tf-20.06.08-gomkl-2019b-Python3.7.4.eb

  • or eb --robot nlpl-nvidia-bert-tf-20.06.08-foss-2019b-Python3.7.4.eb if your CPU architecture is different from Intel (for example, AMD)
  • After the process is finished, your modules will be visible along with the system-provided ones via module avail, and can be loaded with module load.
  • Building all the required modules takes approximately 3 to 5 hours.
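
As a summary, a complete build session on an Intel system might look like the following sketch (the EasyBuild version is only an example; use whichever version your system provides):

# load EasyBuild and the settings produced by setdir.sh
module load EasyBuild/4.3.3
source SETUP.local
eb --show-config
# first a dry run, then the actual build of NVIDIA BERT and its dependencies
eb --robot nlpl-nvidia-bert-tf-20.06.08-gomkl-2019b-Python3.7.4.eb --dry-run
eb --robot nlpl-nvidia-bert-tf-20.06.08-gomkl-2019b-Python3.7.4.eb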

Data Preparation

To train BERT, three pieces of data are required:

  • a training corpus (CORPUS), a collection of plain text files (can be gzip-compressed)
  • a WordPiece vocabulary (VOCAB), a plain text file
  • a BERT configuration (CONFIG), a JSON file defining the model hyperparameters

Ready-to-use toy example data can be found in the tests/text_data subdirectory:

  • no_wiki/: a directory with texts from Norwegian Wikipedia (about 1.2 million words)
  • norwegian_wordpiece_vocab_20k.txt: Norwegian WordPiece vocabulary (20 000 entries)
  • norbert_config.json: BERT configuration file replicating BERT-Small for English (adapted to the number of entries in the vocabulary)
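
For orientation, a BERT configuration file uses the standard hyperparameter fields of the original BERT code; the values below are purely illustrative (roughly BERT-Small sized) and are not necessarily those of norbert_config.json:

{
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 512,
  "initializer_range": 0.02,
  "intermediate_size": 2048,
  "max_position_embeddings": 512,
  "num_attention_heads": 8,
  "num_hidden_layers": 4,
  "type_vocab_size": 2,
  "vocab_size": 20000
}

Make sure vocab_size matches the number of entries in your WordPiece vocabulary.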

Training Example

  • Extend your $MODULEPATH with the path to your custom modules, using the command suggested by the setdir.sh script:
  • module use -a PATH_TO_YOUR_REPOSITORY/easybuild/install/modules/all/
  • Load the NVIDIA BERT module:
  • module load nlpl-nvidia-bert/20.06.8-gomkl-2019b-tensorflow-1.15.2-Python-3.7.4
  • Run the training script:
  • train_bert.sh CORPUS VOCAB CONFIG
  • This will convert your text data into TFRecord files (stored in data/tfrecords/) and then train a BERT model with batch size 48 for 1000 training steps (the model will be saved in model/).
  • Training on the toy data above takes no more than an hour on 4 Saga GPUs.
  • We use 4 GPUs by default (to test hardware and software as much as possible). Modify the train_bert.sh script to change this or other BERT training parameters.
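
Putting the training steps together, a run on the toy data might look like this (assuming you work from the repository root, so that the toy data sits under tests/text_data/; use the exact module path printed by setdir.sh):

# make the custom modules visible and load NVIDIA BERT
module use -a PATH_TO_YOUR_REPOSITORY/easybuild/install/modules/all/
module load nlpl-nvidia-bert/20.06.8-gomkl-2019b-tensorflow-1.15.2-Python-3.7.4
# train on the toy corpus, vocabulary and configuration (paths are assumptions)
train_bert.sh tests/text_data/no_wiki/ tests/text_data/norwegian_wordpiece_vocab_20k.txt tests/text_data/norbert_config.json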