Infrastructure/replication
Revision as of 13:19, 31 December 2018
Background
The NLPL virtual laboratory (in mid-2018) is distributed over two superclusters, viz. the Abel and Taito systems in Norway and Finland, respectively. Furthermore, NLPL enjoys a generous storage allocation on the Norwegian Infrastructure for Research Data (NIRD), which is not directly accessible on either of the two computing systems. This page documents the (still emerging) project strategy for data and software replication across the three sites, i.e. the ‘on-line’ file systems on Abel and Taito and the ‘off-line’ storage on NIRD.
Back-Up to NIRD
The NLPL project directory on Abel (/projects/nlpl/) is backed up daily at UiO, but the corresponding directory on Taito (/proj/nlpl/) is not; this is one of the reasons why the ‘on-line’ storage allocation on Taito can be considerably more generous than the one on Abel.
The NLPL project directories contain software and data installations that have been semi-manually created (in some cases following non-trivial analytical work and tinkering) and would be expensive to re-create from scratch. Thus, the complete contents of both copies of the virtual laboratory should be backed up to the ‘off-line’ NIRD storage area at least once per day, so as to be able to recover from data loss (which could include accidental deletion) quickly and without too much manual effort.
In mid-2018, the NLPL infrastructure task force landed on a daily ‘back-up’ scheme using rsync, implemented by the script operation/mirror/nird in SVN. This should limit inadvertent data loss to at most 24 hours’ worth of changes. However, this scheme remains to be reliably (i.e. via cron) activated on Taito and more generally needs to be validated and made more robust (for example, protecting against concurrent execution by means of file locking).
Replication between Abel and Taito
The data/corpora/,
data/parsing/, data/translation/, and data/vectors/ sub-directories of
the ‘on-line’ project directories on Abel and Taito are automatically
synchronized.
The primary copy of most modules resides on Abel (with the exception of the data/translation/
module), and all changes
must be applied there; changes made to these sub-directories on the secondary
copy (i.e. Taito for most modules) will be overwritten.
Replication is accomplished through a set of scripts that are maintained
in the Subversion repository of the project, notably the top-level ‘driver’
operation/mirror/cron.sh.
This script runs a sequence of module-specific replication scripts, e.g.
operation/mirror/data/corpora/taito,
operation/mirror/data/parsing/taito,
operation/mirror/data/translation/abel, and
operation/mirror/data/vectors/taito.
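A driver of this kind might be sketched as below. This is a hypothetical illustration, not the actual cron.sh: to keep the sketch self-contained and runnable, stub module scripts are generated in a scratch directory, where in reality each script would invoke rsync against the remote site.

```shell
#!/bin/sh
# Hypothetical sketch in the spirit of operation/mirror/cron.sh.
MODULES="data/corpora/taito data/parsing/taito data/translation/abel data/vectors/taito"

# Set-up for the sketch only: create stub module scripts in a scratch tree.
MIRROR=$(mktemp -d)
for module in $MODULES; do
  mkdir -p "$MIRROR/$(dirname "$module")"
  printf '#!/bin/sh\necho syncing %s\n' "$module" > "$MIRROR/$module"
  chmod +x "$MIRROR/$module"
done

# The driver proper: run each module-specific replication script in sequence,
# logging output and continuing past individual failures rather than aborting.
for module in $MODULES; do
  "$MIRROR/$module" >> /tmp/nlpl-cron-demo.log 2>&1 \
    || echo "replication failed: $module" >&2
done
```

Running the modules sequentially from one driver keeps the nightly replication to a single cron entry and avoids overlapping rsync transfers between the sites.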
These scripts assume password-less rsync communication across the sites,
which is accomplished via ssh keys (for the user oe, on all systems).
The top-level script is invoked by cron every night on an LTG-owned add-on
node to Abel (ls.hpc.uio.no), so that the cron jobs need not
be re-activated every time one of the Abel login nodes is reinstalled.
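For illustration only, the nightly invocation on such a node could be configured with a crontab entry along the following lines; the schedule, checkout path, and log destination are assumptions, not the actual configuration.

```
# hypothetical crontab entry (crontab -e as user oe on ls.hpc.uio.no);
# the path to the SVN checkout of the driver is an assumption
30 2 * * * /projects/nlpl/operation/mirror/cron.sh >> /var/tmp/nlpl-mirror.log 2>&1
```

Keeping the entry in the crontab of a stable add-on node, rather than a login node, is what spares the project from re-installing the job after each login-node reinstall.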