Infrastructure/replication
Background
The NLPL virtual laboratory (in mid-2018) is distributed over two superclusters, viz. the Abel and Taito systems in Norway and Finland, respectively. Furthermore, NLPL enjoys a generous storage allocation on the Norwegian Infrastructure for Research Data (NIRD), which is not directly accessible on either of the two computing systems. This page documents the (still emerging) project strategy for data and software replication across the three sites, i.e. the ‘on-line’ file systems on Abel and Taito and the ‘off-line’ storage on NIRD.
Back-Up
Replication between Abel and Taito
The data/corpora/ and data/vectors/ sub-directories of the ‘on-line’ project directories on Abel and Taito are automatically synchronized. The primary copy of these directories resides on Abel, and all changes must be applied there; changes made to these sub-directories on Taito will be overwritten.
Replication is accomplished through a set of scripts that are maintained in the Subversion repository of the project, notably operation/mirror/cron.sh, operation/mirror/data/corpora/taito, and operation/mirror/data/vectors/taito. These scripts assume password-less rsync communication across the sites, which is accomplished via SSH keys (for the user oe, on all systems). The top-level script is invoked by cron every night on an LTG-owned add-on node to Abel (ls.hpc.uio.no), so that the cron jobs need not be re-activated every time one of the Abel login nodes is reinstalled.
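The mirroring scheme above can be sketched as follows. This is a hypothetical illustration, not the contents of the actual operation/mirror/ scripts: the source and target paths are assumed placeholders, and the rsync flags are illustrative; only the host names and the oe user come from this page.

```shell
#!/bin/sh
# Sketch of a one-way mirror from Abel (primary copy) to Taito.
# SOURCE and TARGET paths are assumptions for illustration only.
SOURCE="/projects/nlpl/data/corpora/"
TARGET="oe@taito.csc.fi:/proj/nlpl/data/corpora/"

# --delete makes the Taito side an exact mirror, so any local changes
# there are overwritten, matching the policy described above;
# -e ssh relies on the password-less SSH keys for user oe.
RSYNC_CMD="rsync -av --delete -e ssh $SOURCE $TARGET"

# A nightly crontab entry on ls.hpc.uio.no might look like
# (time of day assumed): 30 2 * * * /path/to/cron.sh
echo "$RSYNC_CMD"
```

Running rsync with --delete from a single primary copy is what makes the replication strictly one-way; the equivalent invocation in the reverse direction would clobber the primary, which is why all changes must be applied on Abel.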