Translation/home
Background
An experimentation environment for Statistical and Neural Machine Translation (SMT and NMT) is maintained for NLPL under the coordination of the University of Helsinki (UoH). Initially, the software and data are commissioned on the Finnish Taito supercluster.
Available software and data
Statistical machine translation and word alignment
- Moses SMT pipeline with the word alignment tools GIZA++, MGIZA and fast_align, the IRSTLM language model, and SALM:
  - Release 4.0, installed on Abel and Taito as moses/4.0-65c75ff (usage notes below)
  - Release mmt-mvp-v0.12.1, installed on Taito as moses/mmt-mvp-v0.12.1-2739-gdc42bcb (not recommended)
- Additional word alignment tools efmaral and eflomal:
  - Most recent version efmaral/0.1_2017_11_24, installed on Abel and Taito (usage notes below)
  - Previous version efmaral/0.1_2017_07_20, installed on Taito (not recommended)
Neural machine translation
- HNMT (Helsinki Neural Machine Translation System) is installed on Taito-GPU. Usage notes below.
  - Release 1.0.1 from https://github.com/robertostling/hnmt installed as nlpl-hnmt/1.0.1
- Marian is installed on Taito-GPU. Usage notes below.
  - Release 1.2.0 from https://github.com/marian-nmt/marian installed as nlpl-marian/1.2.0
Datasets
- IWSLT17 parallel data (0.6G, on Taito and Abel):
/proj[ects]/nlpl/data/translation/iwslt17
- WMT17 news task parallel data (16G, on Taito and Abel):
/proj[ects]/nlpl/data/translation/wmt17news
- WMT17 news task data preprocessed (tokenized, truecased and BPE-encoded) for the Helsinki submissions (5G, on Taito and Abel):
/proj[ects]/nlpl/data/translation/wmt17news_helsinki
Models
- Coming up (Helsinki WMT2017 models, pretrained Edinburgh SMT models, ...)
Using the Moses module
- Log into Taito or Abel
- Activate the NLPL module repository:
module use -a /proj/nlpl/software/modulefiles/      # Taito
module use -a /projects/nlpl/software/modulefiles/  # Abel
- Load the most recent version of the Moses module:
module load moses
- Start using Moses, e.g. using the tutorial at http://statmt.org/moses/
- The module contains the standard installation as described at http://www.statmt.org/moses/?n=Development.GetStarted:
- cmph, irstlm, xmlrpc
- with-mm
- max-kenlm-order 10
- max-factors 7
- SALM + filter-pt
- For word alignment, you can use GIZA++, MGIZA and fast_align. (The word alignment tools efmaral and eflomal are part of a separate module.)
If you need to specify absolute paths in your scripts, you can find them on the help page of the module:
module help moses
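For orientation, here is a minimal sketch of a first session after loading the module. It assumes that the Moses support scripts (tokenizer.perl, train-truecaser.perl, truecase.perl) are on the PATH once the module is loaded; corpus.en is a hypothetical plain-text file of your own. If the scripts are not found, look up the installation paths with module help moses.

# Sketch only: corpus.en is a hypothetical input file; script locations may differ.
module use -a /proj/nlpl/software/modulefiles/   # Taito
module load moses
# Tokenize the English side of the corpus.
tokenizer.perl -l en < corpus.en > corpus.tok.en
# Train a truecasing model and apply it.
train-truecaser.perl --model truecase-model.en --corpus corpus.tok.en
truecase.perl --model truecase-model.en < corpus.tok.en > corpus.tc.en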
Using the Efmaral module
- Log into Taito or Abel
- Activate the NLPL module repository:
module use -a /proj/nlpl/software/modulefiles/      # Taito
module use -a /projects/nlpl/software/modulefiles/  # Abel
- Load the most recent version of the Efmaral module:
module load efmaral
- You can use the align.py script directly:
align.py ...
- You can use the efmaral module inside a Python3 script:
python3
>>> import efmaral
- You can test the example given at https://github.com/robertostling/efmaral by changing to the installation directory:
cd $EFMARALPATH
python3 scripts/evaluate.py efmaral \
    3rdparty/data/test.eng.hin.wa \
    3rdparty/data/test.eng 3rdparty/data/test.hin \
    3rdparty/data/trial.eng 3rdparty/data/trial.hin
- The Efmaral module also contains eflomal. You can use the alignment scripts as follows:
align_eflomal.py ...
- You can also use the eflomal executable:
eflomal ...
- You can also use the eflomal module in a Python3 script:
python3
>>> import eflomal
- The atools executable (from fast_align) is also made available.
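To give an idea of a typical alignment workflow, the sketch below prepares a joint input file and symmetrizes two alignment directions with atools. The file names (corpus.en, corpus.fi) are hypothetical, the align.py arguments are left as placeholders (check align.py --help for the options the installed version accepts), and the assumption that the aligner reads fast_align-style input (source ||| target) should be verified against the efmaral documentation.

# Sketch only: corpus.en / corpus.fi are hypothetical files; align.py arguments are placeholders.
paste corpus.en corpus.fi | sed 's/\t/ ||| /' > corpus.en-fi   # one sentence pair per line
align.py ...   # forward alignment  -> corpus.fwd.align
align.py ...   # backward alignment -> corpus.bwd.align
# Symmetrize both directions with atools (from fast_align):
atools -i corpus.fwd.align -j corpus.bwd.align -c grow-diag-final-and > corpus.sym.align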
Using the HNMT module
- Log into Taito-GPU (Important: this module only runs on Taito-GPU, not on Taito!)
- The HNMT module can be loaded by activating the NLPL software repository:
module use -a /proj/nlpl/software/modulefiles/
module load nlpl-hnmt
- Module-specific help is available by typing:
module help nlpl-hnmt
- The main HNMT script (hnmt.py) can be called directly on the command line, but for anything serious CUDA is required, which is only available from within SLURM scripts.
- Because model training and testing are rather resource-intensive, we recommend getting started with the example SLURM scripts, as explained below.
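Before turning to the example scripts, here is a rough sketch of what such a SLURM wrapper for hnmt.py looks like. The resource settings are assumptions modelled on the Marian examples further down, not prescribed values, and the hnmt.py arguments are left as a placeholder; the ready-made scripts in hnmt_examples are the recommended starting point.

#!/bin/bash
#SBATCH -J hnmt-test
#SBATCH -o hnmt-test.%j.out
#SBATCH -e hnmt-test.%j.err
#SBATCH -t 24:00:00
#SBATCH -N 1
#SBATCH -p gpu
#SBATCH --mem=8g
#SBATCH --gres=gpu:k80:1
module use -a /proj/nlpl/software/modulefiles/
module load nlpl-hnmt
# hnmt.py must be prefixed with srun (see Troubleshooting below).
srun hnmt.py ...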
Example scripts
The directory /proj/nlpl/data/translation/hnmt_examples
contains a set of SLURM scripts for training and testing a baseline English-to-Finnish HNMT system. Copy the scripts to your own working directory before trying them out.
- Data preparation: The first script to launch is prepare.sh. It fetches the training, development and test data, extracts and reformats it, and calls the make_encode.py script to create vocabulary files for the source and target languages. This script runs rather fast and can be executed directly on a (Taito-GPU) login shell.
- Training: The second script is train.sh and calls hnmt.py to train a model. Launch it with sbatch train.sh. The parameters are fairly standard, except training time, which is kept low for testing purposes here (we tend to max out the Taito limits with 71h of training time...).
  - The training.*.out file contains information about the training batches (training time and loss), and also shows translations of a small number of held-out sentences for examining the training process:
SOURCE / TARGET / OUTPUT
at least for the time being , all of them will continue working at their current sites .
ainakin toistaiseksi he kaikki jatkavat töitään nykyisissä toimipaikoissaan .
ainakin kaikki ne tekevät työtä tällä hetkellä .
  - The training.log and training.log.eval files report additional information, as explained on [1].
  - The training process creates a train.model.final file, which is then used for testing.
- Testing: The last script is test.sh and calls hnmt.py to test the previously created model on held-out data. Launch it with sbatch test.sh. HNMT includes evaluation scripts for chrF and BLEU and will report these scores if a reference file is given.
  - The resulting translations are written to test.trans.
  - In the test.*.out file, you should obtain scores close to the following (depending on the neural network initialization and the GPU used, results may vary slightly):
BLEU = 0.057750 (0.303002, 0.086025, 0.032001, 0.013334, BP = 1.000000)
LC BLEU = 0.057913 (0.303527, 0.086283, 0.032093, 0.013383, BP = 1.000000)
chrF = 0.310397 (precision = 0.355720, recall = 0.306064)
Troubleshooting
-
HNMT: WARNING: NLTK not installed, will not be able to use internal tokenizer
⇒ The installed version of HNMT does not include the NLTK tokenizer (which we don't use all that much here in Helsinki). This means that you will not be able to use the word option of [2]. We recommend using (a) already tokenized data, (b) the tokenizer included with Moses, or (c) your own tokenizer.
-
Fatal error in PMPI_Init_thread: Other MPI error, error stack:
MPIR_Init_thread(784).....:
MPID_Init(1326)...........: channel initialization failed
MPIDI_CH3_Init(120).......:
MPID_nem_init_ckpt(852)...:
MPIDI_CH3I_Seg_commit(307): PMI_Barrier returned -1
⇒ Even when using a SLURM script, the HNMT command has to be prefixed by srun:
srun hnmt.py ...
-
ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
⇒ HNMT does not run on the login shell; try running it through a SLURM script.
-
ERROR (theano.gof.opt): SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7f7fa34fa7b8> ... theano.gof.fg.InconsistencyError: Trying to reintroduce a removed node
⇒ This message often occurs at the beginning of the training process and signals an optimization failure. It has no visible effect on training; the program continues running correctly.
-
pygpu.gpuarray.GpuArrayException: b'cuMemAlloc: CUDA_ERROR_OUT_OF_MEMORY: out of memory'
⇒ This error can be prevented by decreasing the amount of pre-allocation (default is 0.9). Make sure to avoid overwriting the existing content of the THEANO_FLAGS variable:
export THEANO_FLAGS="$THEANO_FLAGS",gpuarray.preallocate=0.8
Using the Marian module (not quite ready yet...)
- Log into Taito-GPU (Important: this module only runs on Taito-GPU, not on Taito!)
- The Marian module can be loaded by activating the NLPL software repository:
module use -a /proj/nlpl/software/modulefiles/
module load nlpl-marian
- Module-specific help is available by typing:
module help nlpl-marian
- The Marian executables can be called directly on the command line, but longer-running tasks should be run with SLURM scripts.
- Marian comes with a couple of example scripts, which need to be adapted slightly for use on Taito. See below.
Example scripts
Marian provides a set of example scripts. These are best copied into your personal workspace before running them:
cp -r /proj/nlpl/software/marian/1.2.0/examples ./marian_examples
Training-basics
- In scripts/preprocess-data.sh, change lines 36-37 as follows (this is a workaround to force the scripts to run with Python 3, as Taito does not allow Python 2 and Python 3 modules to be loaded simultaneously):
| /appl/opt/python/3.4.5-gcc493-shared/bin/python3 ./scripts/normalise-romanian.py \
| /appl/opt/python/3.4.5-gcc493-shared/bin/python3 ./scripts/remove-diacritics.py \
- In run-me.sh, remove or comment the following line (the $MARIAN environment variable is set to a different location when loading the module):
MARIAN=../..
- Add the SLURM headers at the top of the run-me.sh script. The following settings have worked well for me:
#!/bin/bash
#SBATCH -J training-basics
#SBATCH -o training-basics.%j.out
#SBATCH -e training-basics.%j.err
#SBATCH -t 24:00:00
#SBATCH -N 1
#SBATCH -p gpu
#SBATCH --mem=8g
#SBATCH --gres=gpu:k80:1

module use -a /proj/nlpl/software/modulefiles
module load nlpl-marian

echo "Starting at `date`"

#MARIAN=../..
...
- Launch the script with sbatch run-me.sh.
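After launching, the job can be monitored with the standard SLURM tools, for example (replace <jobid> with the ID printed by sbatch):

squeue -u $USER                       # check whether the job is pending or running
tail -f training-basics.<jobid>.out   # follow the training log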
Transformer
- As in the previous example, comment the MARIAN line and add the SLURM headers:
#!/bin/bash
#SBATCH -J transformer
#SBATCH -o transformer.%j.out
#SBATCH -e transformer.%j.err
#SBATCH -t 24:00:00
#SBATCH -N 1
#SBATCH -p gpu
#SBATCH --mem=8g
#SBATCH --gres=gpu:k80:1

module use -a /proj/nlpl/software/modulefiles
module load nlpl-marian

echo "Starting at `date`"

#MARIAN=../..
...
- Launch the script with sbatch run-me.sh.
Translating-amun
- As in the previous examples, comment the MARIAN line and add the SLURM headers:
#!/bin/bash
#SBATCH -J transamun
#SBATCH -o transamun.%j.out
#SBATCH -e transamun.%j.err
#SBATCH -t 24:00:00
#SBATCH -N 1
#SBATCH -p gpu
#SBATCH --mem=8g
#SBATCH --gres=gpu:k80:1

module use -a /proj/nlpl/software/modulefiles
module load nlpl-marian

echo "Starting at `date`"

#MARIAN=../..
...
- Launch the script with sbatch run-me.sh.
Contact: Yves Scherrer, University of Helsinki, firstname.lastname@helsinki.fi