HPL (High Performance Linpack) is a program that measures the performance of a high-performance computing environment. In particular, it uses MPI to distribute computationally difficult work across the nodes of a cluster. On our Rocks Cluster installation we only need to install one additional piece of software: ATLAS, the Automatically Tuned Linear Algebra Software. ATLAS will be the more difficult and longer install because it automatically tunes itself at compile time for maximum performance; it was probably chosen so that hardware-dependent features can be taken advantage of during the test.
ATLAS can supposedly be tuned even further, but for now we will do it the automated way. Be sure to turn off frequency scaling before you compile; the configure step will likely inform you if you have it on. This command will turn it off: /usr/bin/cpufreq-selector -g performance. Then simply create a directory, change to it, and execute the configure script in the parent directory. These commands should do it:
../configure #here you may get a warning about frequency scaling.
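The whole ATLAS build sequence can be sketched as follows. The build directory name SLU is an assumption chosen to match the LAdir path used later; `make build` is the build step referred to below as the long compile:

```shell
# Run from the top of the unpacked ATLAS source tree.
/usr/bin/cpufreq-selector -g performance   # disable frequency scaling first
mkdir SLU        # out-of-tree build directory (name is an assumption)
cd SLU
../configure     # here you may get a warning about frequency scaling
make build       # tune and build the libraries; this is a long compile
```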
This is a long compile. Next we must configure HPLinpack, from here on referred to as hpl. The minimum configuration is to correct the location of mpich in the makefile. Look at the files in the directory 'setup' and copy the one closest to your environment to a new file called Make.(something), where you choose the something; it should probably be indicative of your environment. I named mine Make.slu32 because there will be a 64-bit one in the future, so this one will be for our 32-bit cluster. My environment was a cluster of 32-bit AMD Athlon processors and I was using the C interface to BLAS, so I copied the file Make.Linux_ATHLON_CBLAS. Correct the following lines.
First, correct the ARCH variable to reflect the name you gave to the file:
ARCH = slu32
MPdir = /opt/mpich/gnu/
MPinc = -I$(MPdir)/include
MPlib = $(MPdir)/lib/libmpich.a
Then, to tell it where ATLAS left its archive files installed, correct these lines. In our case, they were left in the directory 'lib' inside the directory we created:
LAdir = $(HOME)/ATLAS/SLU/lib/
LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a
Finally, the file you copied may expect a Fortran compiler as the linker for the FORTRAN internals of CBLAS:
LINKER = /usr/bin/gfortran
Here, because I decompressed ATLAS in my home directory, the ATLAS static library archives (.a files) were dropped there. LAlib names two of them: libcblas.a, the C interface to the Basic Linear Algebra Subprograms library (CBLAS, which is included with ATLAS), and libatlas.a, ATLAS itself. The space between these two paths is important; it makes LAlib a list consisting of two strings.
Now, to compile the program you pass the arch variable to make:
make arch=slu32
and it will create a directory "hpl-2.0/bin/slu32/" containing the program xhpl along with a configuration file HPL.dat.
If there are errors referring to broken symlinks to hpl/bin/<arch>/* (where arch is the architecture you chose), the makefile has created symlinks to paths that aren't there. You must correct them by creating a symlink hpl that points to hpl-2.0.
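The fix can be sketched like this, run from the directory that contains the hpl-2.0 tree (the directory name hpl-2.0 matches the version used above; adjust if yours differs):

```shell
# The makefile expects a path named 'hpl' next to hpl-2.0;
# recreate it as a symlink to the real source tree.
rm -f hpl                 # clear any stale or broken symlink
ln -s hpl-2.0 hpl         # 'hpl' now resolves to hpl-2.0
ls -l hpl                 # should show: hpl -> hpl-2.0
```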
To tune ATLAS you must do so before compiling, by passing variables and flags to the ../configure command mentioned before.
For example, to compile for a Core 2 Duo at 2.4 GHz the command would be:
../configure -b 64 -D c -DPentiumCPS=2400
Here "-b <pointer width>" sets the pointer width: 64 for 64-bit architectures.
"-D c -DPentiumCPS=<clock speed in megahertz of the compiling computer>" ensures correct timing on the compiling node.
make build # tune & build lib
The makefile you copied for hpl may be close to your environment, but there are many more options in the file for tuning. For example, if you prefer to use openmpi instead of mpich, you must put different paths in the makefile. If you would also like to test the Fortran performance of your MPI environment, you must enable a number of Fortran parameters documented in the makefile's comments. After building hpl there should be a folder named 'bin' in the hpl directory. In this directory there will be two files: xhpl, the program, and HPL.dat, a configuration file.
The configuration file, HPL.dat, has 31 lines; an HPL tuning guide that covers them comes with the program. The program, xhpl, must be executed by the mpirun provided by the MPI environment it was compiled with: if you chose openmpi instead of mpich in the makefile, you must use the mpirun provided by openmpi instead. You can also provide a machine file if you only want to benchmark a subset or superset of the machines recognized by your MPI installation, or to include machines not known to your installation.
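A run might be launched like this. This is a hedged sketch: the mpirun path assumes the mpich install under /opt/mpich/gnu named in MPdir above, the hostnames in the machine file are hypothetical, and the process count is only an example:

```shell
# machines: one hostname per line (hypothetical node names)
#   compute-0-0
#   compute-0-1
cd hpl-2.0/bin/slu32
# Use the mpirun that matches the MPI the binary was linked against (MPdir):
/opt/mpich/gnu/bin/mpirun -np 4 -machinefile ../../../machines ./xhpl
```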