Infrastructure/software/easybuild
Background
With support from the EOSC-Nordic project, the NLPL use case has created a blueprint for fully automated installation of the (software core of the) NLPL virtual laboratory using the EasyBuild framework. Full automation means that the exact same software environment, using the exact same versions and dependencies, will be locally compiled on each target system, with relevant hardware-specific local optimizations. This approach guarantees maximum reproducibility and replicability across different systems, including the Finnish Puhti and the Norwegian Saga (and soon Betzy) supercomputers.
Usage Instructions
On Puhti:
<pre>
module use -a /projappl/nlpl/software/eb/etc/all
module --redirect avail | grep nlpl
</pre>
On Saga:
<pre>
module use -a /cluster/shared/nlpl/software/eb/etc/all
module --redirect avail | grep nlpl
</pre>
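Once a module of interest has been identified, it can be loaded in the usual way. The module name and version below are purely hypothetical placeholders, not an actual NLPL module:
<pre>
module load nlpl-pytorch/1.6.0
</pre>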
Installation Instructions
To prepare the build environment, one needs the NLPL collection of easyconfigs, which also provides a shell script to apply some system-specific parameterization:
<pre>
git clone https://source.coderefinery.org/nlpl/easybuild.git
cd easybuild
./setup.sh
</pre>
The script includes knowledge about the range of supported target systems and corresponding parameters, e.g. the path to the NLPL community directory and the system-specific level of CUDA compute capabilities. If need be, the script will also bootstrap a local installation of the EasyBuild environment and create a module for it in the NLPL software target directory (e.g. <code>/projappl/nlpl/software/eb/</code> on Puhti). Thus, to install the NLPL virtual laboratory on a new system, some customization of <code>setup.sh</code> will be required.
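For orientation, the following is a minimal sketch of the kind of per-system parameterization that <code>setup.sh</code> performs. The hostname patterns, variable names, and compute capability values are illustrative assumptions, not the actual script:
<pre>
# Hypothetical sketch only; the real setup.sh in the repository differs.
case $(hostname -f) in
  *puhti*)
    NLPL_PREFIX=/projappl/nlpl/software/eb      # NLPL community directory on Puhti
    CUDA_COMPUTE=7.0                            # assumed: V100 GPUs
    ;;
  *saga*)
    NLPL_PREFIX=/cluster/shared/nlpl/software/eb
    CUDA_COMPUTE=6.0                            # assumed: P100 GPUs
    ;;
  *)
    echo "unsupported system: please extend setup.sh" >&2
    exit 1
    ;;
esac
</pre>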
Once configured, automated compilation can be run either as a SLURM job or interactively from the command line (e.g. on a login node). Which route is available depends on the system: on Puhti, resource limits on the login nodes prevent interactive compilation of the full NLPL virtual laboratory; on Saga, on the other hand, compute nodes have no outside network access, so EasyBuild would fail to download installation sources when run via SLURM.
To submit the build as a SLURM job:
<pre>
sbatch build.slurm easyconfigs/{system,mkl}/*.eb
</pre>
To run the same build interactively (<code>build.slurm</code> can also be executed as a plain shell script):
<pre>
bash build.slurm easyconfigs/{system,mkl}/*.eb
</pre>
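As a rough illustration of why the same file works in both modes, a driver script like <code>build.slurm</code> could look as follows; all resource values and details here are hypothetical, and the actual script ships with the repository:
<pre>
#!/bin/bash
# Hypothetical sketch of a build driver; the actual build.slurm differs.
# Under plain bash, the #SBATCH lines are ordinary comments, which is
# why the script can be run both via sbatch and directly on a login node.
#SBATCH --job-name=nlpl-build
#SBATCH --time=24:00:00
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=4G

set -e
for config in "$@"; do    # easyconfig files passed on the command line
  eb --robot "$config"    # build each module, resolving dependencies
done
</pre>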
Community Maintenance
The NLPL virtual laboratory is maintained collectively by a team of dedicated and enthusiastic experts from the NLPL partner sites, with Andrey Kutuzov (Oslo) as the chief cat herder. Once installed, a module never changes or is removed; instead, new modules are added to the collection.
To add a module to the virtual laboratory, it must be possible to install it using EasyBuild. This presupposes what is called an easyconfig, essentially the recipe for fetching sources, compilation, and installation using EasyBuild.
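To give a feel for the format, the following is a minimal, hypothetical easyconfig for a fictitious Python package; all names, versions, and values are placeholders, not an actual NLPL recipe:
<pre>
# Hypothetical easyconfig for a fictitious Python package 'example'.
easyblock = 'PythonPackage'

name = 'example'
version = '1.0.0'

homepage = 'https://example.org/'
description = "A fictitious package, for illustration only."

# Toolchain against which the package is compiled.
toolchain = {'name': 'foss', 'version': '2019b'}

# Where and what to download (standard EasyBuild template constants).
source_urls = [PYPI_SOURCE]
sources = [SOURCE_TAR_GZ]

dependencies = [('Python', '3.7.4')]

moduleclass = 'lib'
</pre>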
The NLPL collection of easyconfigs is maintained on the Nordic GitLab instance at https://source.coderefinery.org/nlpl/easybuild. To contribute, please make contact with the NLPL infrastructure team at infrastructure@nlpl.eu.
To become a contributor, users need access to the above GitLab instance as well as to at least Puhti and Saga, where the infrastructure task force must grant them write privileges to the NLPL community directory.
Interactive Installations
While developing and testing a new easyconfig, it will often be more convenient to work interactively at the command line (rather than submit EasyBuild jobs through the queue system).
The configuration from the initial, fully automated build of the NLPL virtual laboratory is preserved on each system, inside the <code>.../software/eb/build/</code> directory, where the <code>...</code> prefix must be replaced with the root of the NLPL community directory, e.g. <code>/projappl/nlpl/</code> on Puhti and <code>/cluster/shared/nlpl/</code> on Saga.
<pre>
module purge                              # start from a clean module environment
source .../software/eb/build/config.sh    # restore the original build configuration
eb --show-config                          # verify the active EasyBuild configuration
eb --robot --dry-run new-module.eb        # preview the build, resolving dependencies
</pre>
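If the dry run shows the expected set of modules, dropping <code>--dry-run</code> performs the actual installation:
<pre>
eb --robot new-module.eb
</pre>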