Translation/home


Background

An experimentation environment for Statistical and Neural Machine Translation (SMT and NMT) is maintained for NLPL under the coordination of the University of Helsinki (UoH). Initially, the software and data are provided on the Finnish Taito supercluster.

Available software and data

Statistical machine translation and word alignment

  • The Moses SMT pipeline with the word alignment tools GIZA++, MGIZA and fast_align, the IRSTLM language model, and SALM:
    • Release 4.0, installed on Abel and Taito as moses/4.0-65c75ff (usage notes below)
    • Release mmt-mvp-v0.12.1, installed on Taito as moses/mmt-mvp-v0.12.1-2739-gdc42bcb (not recommended)
  • Additional word alignment tools efmaral and eflomal:
    • Most recent version efmaral/0.1_2017_11_24, installed on Abel and Taito (usage notes below)
    • Previous version efmaral/0.1_2017_07_20, installed on Taito (not recommended)
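Once the NLPL module repository has been activated (see the usage notes below), you can check which versions are currently installed:

module avail moses
module avail efmaral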

Neural machine translation

  • HNMT, available on Taito-GPU as nlpl-hnmt (usage notes below)

Datasets

  • IWSLT17 parallel data (0.6G, on Taito and Abel):
    • /proj[ects]/nlpl/data/translation/iwslt17
  • WMT17 news task parallel data (16G, on Taito and Abel):
    • /proj[ects]/nlpl/data/translation/wmt17news
  • WMT17 news task data preprocessed (tokenized, truecased and BPE-encoded) for the Helsinki submissions (5G, on Taito and Abel):
    • /proj[ects]/nlpl/data/translation/wmt17news_helsinki
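The /proj[ects] shorthand expands to /proj on Taito and /projects on Abel. For example, to inspect the IWSLT17 data:

ls /proj/nlpl/data/translation/iwslt17        # Taito
ls /projects/nlpl/data/translation/iwslt17    # Abel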

Models

  • Coming up (Helsinki WMT2017 models, pretrained Edinburgh SMT models, ...)


Using the Moses module

  • Log into Taito or Abel
  • Activate the NLPL module repository:
module use -a /proj/nlpl/software/modulefiles/       # Taito
module use -a /projects/nlpl/software/modulefiles/   # Abel
  • Load the most recent version of the Moses module:
module load moses
  • Start using Moses, e.g. using the tutorial at http://statmt.org/moses/
  • The module contains the standard installation as described at http://www.statmt.org/moses/?n=Development.GetStarted:
    • cmph, irstlm, xmlrpc
    • with-mm
    • max-kenlm-order 10
    • max-factors 7
    • SALM + filter-pt
  • For word alignment, you can use GIZA++, MGIZA and fast_align; see the sketch after this list. (The word alignment tools efmaral and eflomal are part of a separate module.) If you need to specify absolute paths in your scripts, you can find them on the help page of the module:
module help moses
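  • As a quick start, here is a minimal word-alignment sketch using fast_align and atools from the module. The corpus file name is a placeholder; fast_align expects one sentence pair per line in "source ||| target" format:

# align in both directions (corpus.src-trg is a placeholder file name)
fast_align -i corpus.src-trg -d -o -v > forward.align
fast_align -i corpus.src-trg -d -o -v -r > reverse.align
# symmetrize the two directions
atools -i forward.align -j reverse.align -c grow-diag-final-and > aligned.grow-diag-final-and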

Using the Efmaral module

  • Log into Taito or Abel
  • Activate the NLPL module repository:
module use -a /proj/nlpl/software/modulefiles/       # Taito
module use -a /projects/nlpl/software/modulefiles/   # Abel
  • Load the most recent version of the Efmaral module:
module load efmaral
  • You can use the align.py script directly (see the end-to-end sketch after this list):
align.py ...
  • You can use the efmaral module inside a Python3 script:
python3
>>> import efmaral
  • You can test the installation with the evaluation script shipped with efmaral:
cd $EFMARALPATH
python3 scripts/evaluate.py efmaral \
   3rdparty/data/test.eng.hin.wa \
   3rdparty/data/test.eng 3rdparty/data/test.hin \
   3rdparty/data/trial.eng 3rdparty/data/trial.hin
  • The Efmaral module also contains eflomal. You can use its alignment script as follows:
align_eflomal.py ...
  • You can also use the eflomal executable:
eflomal ...
  • You can also use the eflomal module in a Python3 script:
python3
>>> import eflomal
  • The atools executable (from fast_align) is also made available.
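  • A minimal end-to-end sketch: align a corpus in both directions with efmaral and symmetrize with atools. The file names are placeholders and the option names are assumptions, so check align.py --help for the exact interface; the input uses the fast_align format ("source ||| target" per line):

# align in both directions (option names: check align.py --help)
align.py -i corpus.src-trg > forward.align
align.py -i corpus.src-trg -r > reverse.align
# symmetrize with atools (from fast_align, also in this module)
atools -i forward.align -j reverse.align -c grow-diag-final-and > aligned.grow-diag-final-and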


Using the HNMT module

  • Log into Taito-GPU (Important: this module only runs on Taito-GPU, not on Taito!)
  • The HNMT module can be loaded by activating the NLPL software repository:
module use -a /proj/nlpl/software/modulefiles/
module load nlpl-hnmt
  • Because model training and testing are rather resource-intensive, we recommend getting started with the example SLURM scripts, as explained below.

Example scripts
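
A minimal SLURM batch script for HNMT training on Taito-GPU might look as follows. The partition, resource and time values are assumptions to adapt to your project; note that hnmt.py must be launched through srun (see Troubleshooting below):

#!/bin/bash
#SBATCH --partition=gpu        # assumption: adapt to the Taito-GPU partition you use
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --time=24:00:00        # adjust to the length of your run
#SBATCH --mem=16G              # adjust to your data size
module use -a /proj/nlpl/software/modulefiles/
module load nlpl-hnmt
srun hnmt.py ...               # fill in your own training/translation options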

Troubleshooting

1.

HNMT: WARNING: NLTK not installed, will not be able to use internal tokenizer
  • The installed version of HNMT does not include the NLTK tokenizer (which we do not use much here in Helsinki). We recommend using (a) already tokenized data, (b) the tokenizer included with Moses, or (c) your own tokenizer.

2.

Fatal error in PMPI_Init_thread: Other MPI error, error stack:
MPIR_Init_thread(784).....:
MPID_Init(1326)...........: channel initialization failed
MPIDI_CH3_Init(120).......:
MPID_nem_init_ckpt(852)...:
MPIDI_CH3I_Seg_commit(307): PMI_Barrier returned -1
  • Even when using a SLURM script, the HNMT command has to be prefixed with srun:
srun hnmt.py ...

3.

ERROR (theano.gof.opt): SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7f7fa34fa7b8>
...
theano.gof.fg.InconsistencyError: Trying to reintroduce a removed node
  • This message often occurs at the beginning of the training process and signals an optimization failure. It has no visible effect on training; the program continues running correctly.

4.

pygpu.gpuarray.GpuArrayException: b'cuMemAlloc: CUDA_ERROR_OUT_OF_MEMORY: out of memory'
  • This error can be prevented by decreasing the amount of GPU memory pre-allocation (the default is 0.9). Make sure to avoid overwriting the existing content of the THEANO_FLAGS variable:
    export THEANO_FLAGS="$THEANO_FLAGS",gpuarray.preallocate=0.8


Contact: Yves Scherrer, University of Helsinki, firstname.lastname@helsinki.fi