
= Background =

The operating system on Abel dates back to the original year of installation (2012, as far as we recall). While many development tools and libraries are available in newer versions (through the <tt>module</tt> system), the version of the basic GNU C Library (glibc) is tightly coupled to the Linux kernel and cannot easily be upgraded system-wide. Some packages that are distributed in pre-compiled form (typically as Python wheels) require more recent versions of the C Library. A little bit of trickery makes it possible to get these binaries to execute on Abel, using a custom, NLPL-specific installation of the GNU C Library and its dynamic linker.
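
For reference, a quick way to see the mismatch is to compare the version of the C Library provided by the operating system with the highest <tt>GLIBC_</tt> symbol version referenced by a pre-compiled module (the shared-object path below is only a placeholder):

<pre>
# Version of the C Library provided by the operating system:
ldd --version | head -1

# Highest glibc symbol version required by a pre-compiled shared object;
# replace the path with an actual extension module unpacked from a wheel.
objdump -T /path/to/some/module.so | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1
</pre>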

= Usage on Abel =

To make a binary use our custom installation of the GNU C Library, we need to make sure the modern version of the dynamic linker is used, with the location of the modern C Library at the front of the dynamic library load path. For example:

<pre>
#!/bin/sh

exec /projects/nlpl/software/glibc/2.18/lib/ld-linux-x86-64.so.2 \
  --library-path /projects/nlpl/software/glibc/2.18/lib:${LD_LIBRARY_PATH} \
  /projects/nlpl/software/tensorflow/1.11/bin/.python3.5 "$@"
</pre>

The above recipe is available as a generic [http://svn.nlpl.eu/software/glibc/2.18/wrapper wrapper script], which in principle should be suitable for wrapping arbitrary binaries. The wrapper has been in use since mid-2018 without known issues, for example in the NLPL PyTorch and TensorFlow installations on Abel and Taito. The semi-automated NLPL [http://wiki.nlpl.eu/index.php/Infrastructure/software/python#Automation installer] for Python add-on modules applies this wrapping by default on these systems.
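
For a one-off installation, the same effect can be achieved by hand: move the original binary to a dot-prefixed name and put a small wrapper in its place. The following is only a sketch with example paths; the generic wrapper script and the installer linked above automate this.

<pre>
# Example only: wrap an interpreter called python3.5 in some installation tree.
cd /projects/nlpl/software/example/1.0/bin    # hypothetical installation directory
mv python3.5 .python3.5                       # keep the real binary under a hidden name
cat > python3.5 <<'EOF'
#!/bin/sh
exec /projects/nlpl/software/glibc/2.18/lib/ld-linux-x86-64.so.2 \
  --library-path /projects/nlpl/software/glibc/2.18/lib:${LD_LIBRARY_PATH} \
  "$(dirname "$0")"/.python3.5 "$@"
EOF
chmod 755 python3.5
</pre>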

= Installation on Abel =

First, build and install glibc 2.18 from source:

<pre>
wget https://ftp.gnu.org/gnu/glibc/glibc-2.18.tar.bz2
tar jpSxf glibc-2.18.tar.bz2
cd glibc-2.18
mkdir build
cd build
module purge
module load gcc/4.9.2 cuda/8.0
../configure --prefix=/projects/nlpl/software/glibc/2.18
make -j 8
make install
make localedata/install-locales
ln -s /usr/share/zoneinfo/Europe/Oslo /projects/nlpl/software/glibc/2.18/etc/localtime
</pre>

Later, glibc 2.23 was installed alongside in the same manner (see the TensorFlow 1.13 note below):

<pre>
module purge; module load binutils/2.26 gcc/6.3.0 cuda/10.0
wget https://ftp.gnu.org/gnu/glibc/glibc-2.23.tar.gz
tar zpSxvf glibc-2.23.tar.gz
mkdir glibc-2.23/build
cd glibc-2.23/build/
../configure --prefix=/projects/nlpl/software/glibc/2.23 --disable-werror
make -j 8
make install
make localedata/install-locales
ln -s /usr/share/zoneinfo/Europe/Oslo /projects/nlpl/software/glibc/2.23/etc/localtime
</pre>
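
A quick sanity check (not part of the original recipe): a freshly installed C Library can report its own version when executed directly, and the new dynamic linker should be able to run a trivial binary.

<pre>
# The installed libc is itself executable and prints its version banner.
/projects/nlpl/software/glibc/2.18/lib/libc.so.6 | head -1

# Run a simple system binary through the new dynamic linker.
/projects/nlpl/software/glibc/2.18/lib/ld-linux-x86-64.so.2 \
  --library-path /projects/nlpl/software/glibc/2.18/lib /bin/true && echo ok
</pre>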

Finally, pre-configure the custom dynamic linker, allowing it to make use of ‘standard’ library locations that need not be on <tt>$LD_LIBRARY_PATH</tt>. As of mid-September 2018, at least, the basic CUDA libraries appear to be installed into <tt>/usr/lib64/</tt>, but only on GPU-enabled compute nodes, so the cache needs to be built from such a node:

<pre>
qlogin --account=nn9106k --time=6:00:00 --partition=accel --gres=gpu:1
cp -pr /etc/ld.so.conf* /projects/nlpl/software/glibc/2.18/etc
/projects/nlpl/software/glibc/2.18/sbin/ldconfig
</pre>
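
To confirm that the CUDA libraries were picked up, the cache just written by the custom <tt>ldconfig</tt> can be inspected; it should list them from the system locations (a sketch, not part of the original recipe):

<pre>
# Print the library cache maintained by the custom glibc installation and
# look for the CUDA runtime among the entries.
/projects/nlpl/software/glibc/2.18/sbin/ldconfig -p | grep -i cuda
</pre>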

As of May 2019, TensorFlow 1.13 requires at least glibc version 2.23, so that version was installed as well (the second recipe above, with <code>binutils/2.26</code>, <code>gcc/6.3.0</code>, and <code>cuda/10.0</code> pre-loaded).
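
Binaries that need the newer library can be wrapped in the same way, pointing at the 2.23 tree instead; presumably the same <tt>ldconfig</tt> pre-configuration applies to that tree as well. A sketch (the TensorFlow 1.13 path is only an example):

<pre>
#!/bin/sh
# Hypothetical wrapper variant that uses the glibc 2.23 installation.
exec /projects/nlpl/software/glibc/2.23/lib/ld-linux-x86-64.so.2 \
  --library-path /projects/nlpl/software/glibc/2.23/lib:${LD_LIBRARY_PATH} \
  /projects/nlpl/software/tensorflow/1.13/bin/.python3.5 "$@"
</pre>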

= Installation on Taito =

<pre>
wget https://ftp.gnu.org/gnu/glibc/glibc-2.18.tar.bz2
tar jpSxf glibc-2.18.tar.bz2
cd glibc-2.18
mkdir build
cd build
module purge
module load gcc/4.9.3
../configure --prefix=/proj/nlpl/software/glibc/2.18
make -j 8
make install
make localedata/install-locales
ln -s /usr/share/zoneinfo/Europe/Helsinki /proj/nlpl/software/glibc/2.18/etc/localtime
</pre>

Finally, pre-configure the custom dynamic linker, allowing it to make use of ‘standard’ library locations that need not be on <tt>$LD_LIBRARY_PATH</tt>. To include some CUDA libraries (installed into standard system locations, e.g. <tt>/usr/lib64/</tt>), it appears we need to run on an actual GPU node:

<pre>
srun -n 1 -p gputest --gres=gpu:k80:1 --mem 1G -t 15 --pty /bin/bash
cp -pr /etc/ld.so.conf* /proj/nlpl/software/glibc/2.18/etc
echo "/lib64" >> /proj/nlpl/software/glibc/2.18/etc/ld.so.conf
/proj/nlpl/software/glibc/2.18/sbin/ldconfig
</pre>
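
As a final check (not part of the original recipe), the custom dynamic linker can be asked where it resolves <tt>libc</tt> from for an ordinary binary; with the <tt>--library-path</tt> used by the wrapper it should point into the custom tree rather than the system <tt>/lib64</tt>:

<pre>
# List the resolved dependencies of a simple binary under the wrapper settings;
# libc.so.6 should resolve to the NLPL glibc 2.18 installation.
/proj/nlpl/software/glibc/2.18/lib/ld-linux-x86-64.so.2 \
  --library-path /proj/nlpl/software/glibc/2.18/lib:${LD_LIBRARY_PATH} \
  --list /bin/ls | grep 'libc\.so'
</pre>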