
The RWTH HPC-Cluster User's Guide Version 8.2.6


Contents

1. Hardware overview (Sockets / Cores / Threads are totals per node):

       Model              Nodes  Processor                                 Sockets  Cores  Threads  Memory      Flops/node     Hostnames
       Bull MPI-S         1098   Intel Xeon X5675 (Westmere-EP, 3.06 GHz)  2        12     24       24 GB       146.88 GFlops  linuxbmc0253-1350
       Bull MPI-L          252   Intel Xeon X5675 (Westmere-EP, 3.06 GHz)  2        12     24       96 GB       146.88 GFlops  linuxbmc0001-0252
       Bull MPI-D            8   Intel Xeon X5675 (Westmere-EP, 3.06 GHz)  2        12     24       96 GB       146.88 GFlops  linuxbdc01-07, cluster-x
       Bull SMP-S (BCS)     67   Intel Xeon X7550 (Beckton, 2.00 GHz)      4x4      128    128      256 GB      1024 GFlops    linuxbesc01-63, linuxbesc83-86
       Bull SMP-L (BCS)     15   Intel Xeon X7550 (Beckton, 2.00 GHz)      4x4      128    128      1 TB        1024 GFlops    linuxbesc68-82
       Bull SMP-XL (BCS)     2   Intel Xeon X7550 (Beckton, 2.00 GHz)      4x4      128    128      2 TB        1024 GFlops    linuxbesc64-65
       Bull SMP-D (BCS)      2   Intel Xeon X7550 (Beckton, 2.00 GHz)      2x4      64     64       256 GB      512 GFlops     cluster, cluster-linux
       Bull ScaleMP          1   Intel Xeon X7550 (Beckton, 2.00 GHz)      64       512    1024     4 TB        4096 GFlops    linuxscalec3
       Sun Fire X4170        8   Intel Xeon X5570 (Gainestown, 2.93 GHz)   2        8      16       36 GB       93.76 GFlops   linuxnc001-008
       Sun Blade X6275     192   Intel Xeon X5570 (Gainestown, 2.93 GHz)   2        8      16       24 GB       93.76 GFlops   linuxnc009-200
       Sun Fire X4450       10   Intel Xeon 7460 (Dunnington, 2.66 GHz)    4        24     24       128-256 GB  255.4 GFlops   linuxdc01-09
       Fuji...
2. […]
       ### Use esub for OpenMP (shared memory) jobs
       #BSUB -a openmp
       ### Export an environment var
       export A_ENV_VAR=10
       ### Change to the work directory
       cd /home/user/workdirectory
       ### Execute your application
       a.out

   Listing 8: $PSRC/pis/LSF/openmpi_job.sh

       #!/usr/bin/env zsh
       ### Job name
       #BSUB -J OpenMPI64
       ### File / path where output will be written, the %J is the job id
       #BSUB -o OpenMPI64.%J
       ### (OFF) Different file for STDERR, if not to be merged with STDOUT
       # #BSUB -e OpenMPI64.e%J
       ### Request the time you need for execution in minutes.
       ### The format for the parameter is: [hour:]minute;
       ### that means for 80 minutes you could also use this: 1:20
       #BSUB -W 1:42
       ### Request virtual memory you need for your job in MB
       #BSUB -M 1024
       ### (OFF) Specify your mail address
       # #BSUB -u user@rwth-aachen.de
       ### Send a mail when job is done
       #BSUB -N
       ### Request the number of compute slots you want to use
       #BSUB -n 64
       ### Use esub for Open MPI
       #BSUB -a openmpi
       ### (OFF) load another Open MPI version than the default one
       # module switch openmpi openmpi/1.4.3
       ### Export an environment var
       export A_ENV_VAR=10
       ### Change to the work directory
       cd /home/user/…
3. […] 0 (single): multi-threading is not supported. 1 (funneled): only the main thread, which initializes MPI, is allowed to make MPI calls. 2 (serialized): only one thread may call the MPI library at a time. 3 (multiple): multiple threads may call MPI without restrictions. You can use the MPI_Init_thread function to query the multi-threading support of the MPI implementation; read more at http://www.mpi-forum.org/docs/mpi22-report/node260.htm. Listing 17 on page 86 shows an example program which demonstrates switching between threading support levels in a Fortran program; it can be used to test whether a given MPI library supports threading.

   Listing 17: $MPIFC $PSRC/pis/mpi_threading_support.f90; a.out

       PROGRAM tthr
         USE MPI
         IMPLICIT NONE
         INTEGER :: REQUIRED, PROVIDED, IERROR
         REQUIRED = MPI_THREAD_MULTIPLE
         PROVIDED = -1
         ! A call to MPI_INIT has the same effect as a call to
         ! MPI_INIT_THREAD with required = MPI_THREAD_SINGLE
         ! CALL MPI_INIT(IERROR)
         CALL MPI_INIT_THREAD(REQUIRED, PROVIDED, IERROR)
         WRITE (*,*) MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, &
                     MPI_THREAD_SERIALIZED, MPI_THREAD_MULTIPLE
         WRITE (*,*) REQUIRED, PROVIDED, IERROR
         CALL MPI_FINALIZE(IERROR)
       END PROGRAM tthr

   6.3.1 Open MPI (Lin). The Open MPI community site announces untested support for thread-safe operations.
4. […] Hardware performance counters; Intel Trace Analyzer and Collector (x); Vampir (x); Scalasca (x). (Table 8.24: Performance Analysis Tools)

   8.1 Oracle Sampling Collector and Performance Analyzer (Lin). The Oracle Sampling Collector and the Performance Analyzer are a pair of tools that you can use to collect and analyze performance data for your serial or parallel application. The collect command line program gathers performance data by sampling at regular time intervals and by tracing function calls. The performance information is gathered in so-called experiment files, which can then be displayed with the analyzer GUI or the er_print command line tool after the program has finished. Since the collector is part of the Oracle compiler suite, the studio compiler module has to be loaded. However, you can analyze programs compiled with any x86-compatible compiler; the GNU or Intel compilers, for example, work as well.

   8.1.1 The Oracle Sampling Collector. At first it is recommended to compile your program with the -g option (debug information enabled) if you want to benefit from source line attribution and the full functionality of the analyzer. When compiling C++ code with the Oracle compiler you can use the -g0 option instead if you want to enable the compiler to expand inline functions for performance reasons. Link the program as usual and then start the executable under the control of the Sampling Collector with the command ($PSRC/pex/810): collect …
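   A complete first sampling session could look roughly like this (a sketch; the program name myprog.c is a placeholder and the exact module name may differ per installation):

       # load the Oracle/Sun Studio suite, which provides collect, analyzer and er_print
       module load studio
       # build with debug information so the analyzer can attribute time to source lines
       $CC -g $FLAGS_FAST myprog.c -o myprog
       # run under the Sampling Collector; results go to an experiment directory (test.1.er by default)
       collect ./myprog
       # inspect the experiment in the GUI or on the command line
       analyzer test.1.er
       er_print -functions test.1.er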
5. The support for threading is disabled by default. We provide some versions of Open MPI with threading support enabled; these versions have the letters "mt" in the module names, e.g. openmpi/1.6.4mt. However, due to the less-tested status of this feature, use it at your own risk. Note: the current Open MPI version 1.6.x is known to silently disable the InfiniBand transport if the highest ("multiple") threading level is activated. In this case the hybrid program runs over the IPoIB transport, offering much worse performance than expected. Please be aware of this and do not use the "multiple" threading level without a good reason.

   6.3.2 Intel MPI (Lin). Unfortunately, Intel MPI is not thread-safe by default. To provide full MPI support inside parallel regions, the program must be linked with the option -mt_mpi (Intel and GCC compilers) or with -lmpi_mt instead of -lmpi (other compilers). Note: if you specify one of the following options for the Intel FORTRAN compiler, the thread-safe version of the library is used automatically: 1. -openmp, 2. -parallel, 3. -threads, 4. -reentrancy, 5. -reentrancy threaded. The "funneled" level is provided by default by the thread-safe version of the Intel MPI library. To activate other levels, use the MPI_Init_thread fun…

   (Footnotes: http://www.open-mpi.org/faq/?category=supported-systems, thread support; configured and compiled with the --enable-mpi-threads option.)
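   As a sketch, building a hybrid MPI/OpenMP code against the thread-safe Intel MPI library could look like this (the source file name hybrid.f90 is a placeholder; with the Intel compiler, -openmp alone already selects the thread-safe library):

       # switch from the default Open MPI to Intel MPI
       module switch openmpi intelmpi
       # -mt_mpi requests the thread-safe MPI library (Intel and GCC compilers);
       # with other compilers link -lmpi_mt instead of -lmpi
       $MPIFC -openmp -mt_mpi hybrid.f90 -o hybrid.exe
       $MPIEXEC -np 4 hybrid.exe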
6. vngd-start.sh starts this server and will, after possibly a few seconds, return a line similar to "Server listens on linuxscc005.rz.RWTH-Aachen.DE 33071". The server is now ready and waits for a connection on linuxscc005 at port 33071. To connect to this server, start a new console, load the vampir module as described above and connect to the server through File -> Remote Open -> enter servername and port -> Insert/Update -> Connect -> select path of trace -> Open. Both ways will start the Vampir GUI. Take a look at the tutorials: http://www.vampir.eu/tutorial

   Example in C, summing up all three steps ($PSRC/pex/860):

       vtcc -vt:cc $MPICC $FLAGS_DEBUG $PSRC/cmj.c
       $MPIEXEC -np 4 a.out
       vampir a.otf

   Note: Vampir displays information for each process; therefore the GUI will be crowded with more than about 16 processes and analysis may not be possible.

   8.4 Scalasca (Lin). Scalasca, similar to Vampir, is a performance analysis tool suite. Scalasca is designed to automatically spot typical performance problems in parallel applications running with large counts of processes or threads. Scalasca displays a large number of metrics in a tree view describing your application run. Scalasca presents different classes of metrics to you: generic, MPI-related and OpenMP-related ones. Generic metrics: total CPU allocation time; execution time without overhead; time spent in tasks related to measurement…
7. […]
       #BSUB -M 1024
       ### (OFF) Specify your mail address
       # #BSUB -u user@rwth-aachen.de
       ### Send a mail when job is done
       #BSUB -N
       ### Request the number of compute slots you want to use;
       ### this consists of all host threads/processes without those on the MIC.
       ### The number of compute slots must be a multiple of the used hosts.
       #BSUB -n 16
       ### Use esub for Phi
       #BSUB -a phi
       ### Now specify the type of Phi job: hosts -> MPI job
       ###   hosts=<a>:<b> mics=<c>:<d>
       ###   a: number of hosts
       ###   b: comma-separated list of MPI processes on the ordered hosts
       ###   c: number of MICs
       ###   d: comma-separated list of MPI processes on the ordered MICs
       ###   example: hosts=1:16 mics=2:10,22
       #BSUB -Jd "hosts=1:16 mics=2:10,22"
       ### load the right MPI version on the host
       module switch openmpi intelmpi/4.1imic
       ### Export an environment var
       export A_ENV_VAR=10
       ### Change to the work directory
       cd /home/user/workdirectory
       ### Execute your MPI application
       $MPIEXEC $FLAGS_MPI_BATCH a.out

   2.5.3.6 Some special MPI job configurations. If you want to run all your processes only on the MICs, please follow the next example with two MICs, each with 20 processes. The number of compute slots must be >= the number of hosts:

       #BSUB -n 1
       ### Now specify the type of Phi job: hosts -> MPI job
       ###   hosts=<a>:<b> mics=<c>:<d>
       ###   a: number of hosts
       ###   b: comma-separated list of MPI processes on the ordered…
8. […] TotalView.
   -n | -np <np>: starts <np> processes.
   -m <nm>: starts exactly <nm> processes on every host (except the last one).
   -S | -spawn <ns>: number of processes that can be spawned with MPI_Spawn; np+ns processes can be started in total.
   -listcluster: prints out all available clusters.
   -cluster <clname>: uses only cluster <clname>.
   -onehost: starts all processes on one host.
   -listonly: just writes the machine file without starting the program.
   MPIHOSTLIST: specifies which file contains the list of hosts to use; if not specified, the default list is taken.
   MPIMACHINELIST: if -listonly is used, this variable specifies the name of the created host file; default is $HOME/host.list.
   -skip <cmd>: advanced option; skips the wrapper and executes <cmd> with the given arguments. The default <cmd> with openmpi is mpiexec and with intelmpi is mpirun.
   (Table 6.21: The options of the interactive mpiexec wrapper)

   We strongly recommend using the environment variables $MPIFC, $MPICC, $MPICXX and $MPIEXEC set by the module system, in particular because the compiler driver variables are set according to the latest loaded compiler module. Refer to the manual page for a detailed description of mpiexec; it includes several helpful examples. For quick reference we include some options here, see table 6.22 on page 85. Open MPI provides a lot of tunables, w…
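   For illustration, test runs through the interactive wrapper described above could look like this (a sketch; the executable name and process counts are placeholders):

       # start 8 processes, but at most 2 per node, e.g. for memory-hungry processes
       $MPIEXEC -np 8 -m 2 a.out
       # run all processes on a single host
       $MPIEXEC -np 4 -onehost a.out
       # only write the machine file (see MPIMACHINELIST) without starting the program
       $MPIEXEC -np 4 -listonly a.out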
9. […] "Westmere-EP" Processor . . . 15
   2.3.4 The Xeon E5-2650 "Sandy Bridge" Processor . . . 15
   2.3.5 … . . . 16
   2.3.6 … . . . 16
   2.3.7 Big SMP (BCS) systems . . . 16
   2.3.8 ScaleMP . . . 16
   2.4 Innovative Computer Architectures: GPU Cluster . . . 16
   2.5 Innovative Computer Architectures: Intel Xeon Phi Cluster . . . 17
   2.5.1 … . . . 17
   2.5.2 Interactive Mode . . . 17
   2.5.3 Programming Models . . . 18
   3 Operating Systems . . . 24
   3.1 Linux . . . 24
   3.1.1 Processor Binding . . . 24
   3.2 Windows . . . 25
   3.3 Addressing Modes . . . 25
   4 The RWTH Environment . . . 27
   4.1 Login . . . 27
   4.1.1 Command line Login . . . 27
   4.1.2 Graphical Login . . . 27
   4.1.3 … . . . 28
   4.1.4 … . . . 28
   4.2 Login to Windows . . . 28
   4.2.1 Remote Desktop Connection . . . 29
   4.2.2 rdesktop . . . 29
   4.2.3 Apple Mac . . . 29
   4.3 The RWTH User File Management . . . 30
   4.3.1 Tra…
10. […] Job Management. You can check your job's status in the "Active" window. When it is completed, you will find it either in the table "Finished" or, if it failed, in "Failed". If your job does not have a recognizable name, you can identify it by its Job ID, which you will find out through a Windows balloon after submitting your job. By selecting a job in the job management table, further information is available, given you have the necessary rights. A job can be re-configured by right-clicking on it as long as it still awaits execution, and it can be cancelled as well. More information about computing on Windows and the Windows batch system is available on the http://www.rz.rwth-aachen.de/hpc/win web site. For some software products, particular web sites with instructions on how to use them in the Windows batch system are available:
    MATLAB: http://www.rz.rwth-aachen.de/go/id/sxm
    Abaqus: http://www.rz.rwth-aachen.de/go/id/sxn
    Ansys / Ansys CFX: http://www.rz.rwth-aachen.de/go/id/syh
    Gaussian

    4.6 JARA-HPC Partition. The JARA-HPC partition consists of contingents of the high-performance computers and supercomputers installed at RWTH Aachen University (HPC Cluster) and Forschungszentrum Jülich (JUQUEEN). The partition was established in 2012. It comprises a total computing power of about 600 TFlop/s, of which 100 TFlop/s are provided by the HPC Cluster.

    4.6.1 Project application. In order to apply for resources…
11. […] additional options -fsimple=0 or -xnolibmopt can be added, which however may reduce the execution speed (see the $FLAGS_FAST_NO_FPOPT environment variable). On the x86 nodes the rounding precision mode can be modified when compiling a program with the option -fprecision=single|double|extended. The following code snippet demonstrates the effect.

    Listing 13: CC $FLAGS_ARCH32 $PSRC/pis/precision.c; a.out

        #include <stdio.h>
        int main(int argc, char **argv)
        {
            double f = 1.0, h = 1.0;
            int i;
            for (i = 0; i < 100; i++) {
                h = h / 2;
                if (f + h == f) break;
            }
            printf("f: %e  h: %e  mantissa bits: %d\n", f, h, i);
            return 0;
        }

    Results (x86):                                   32 bit, no SSE2            other
        f=1.000000e+00  h=5.960464e-08  23 bits      -fprecision=single         n/a
        f=1.000000e+00  h=1.110223e-16  52 bits      -fprecision=double         default
        f=1.000000e+00  h=5.421011e-20  63 bits      -fprecision=extended (default)   n/a
    (Table 5.18: Results of different rounding modes)

    The results are collected in table 5.18 on page 66. The mantissa of the floating point numbers is set to 23, 52 or 63 bits, respectively. If compiling in 64 bit, or in 32 bit with the usage of SSE2 instructions, the option -fprecision is ignored and the mantissa is always set to 52 bits. The Studio FORTRAN compiler supports unformatted file sharing between big-endian and little-endian platforms (see chapter 5.4 on page 61) with the option -xfilebyteorder=…
12. […] <host1,...,hostN>: synonym for -host; specifies a list of execution hosts.
    -machinefile <machinefile>: where to find the machinefile with the execution hosts.
    -mca <key> <value>: option for the Modular Component Architecture; this option e.g. specifies which network type to use.
    -nooversubscribe: does not oversubscribe any nodes.
    -nw: launches the processes and does not wait for their completion; mpiexec will complete as soon as a successful launch occurs.
    -tv: launches the MPI processes under the TotalView debugger (old-style MPI launch).
    -wdir <dir>: changes to the directory <dir> before the user's program executes.
    -x <env>: exports the specified environment variables to the remote nodes before executing the program.
    (Table 6.22: Open MPI mpiexec options)

    Example:

        $MPIFC -c prog.f90
        $MPIFC prog.o -o prog.exe
        $MPIEXEC -np 4 prog.exe

    Intel MPI can basically be used in the same way as Open MPI, except for the Open MPI specific options, of course. You can get a list of options specific to the startup script of Intel MPI with $MPIEXEC -h. If you want to use the compiler drivers and startup scripts directly, you can do this as shown in the following examples.

    Example using an MPI compiler wrapper for the Intel FORTRAN compiler:

        mpiifort -c prog.f90
        mpiifort -o prog.exe prog.o
        mpiexec -np 4 prog.exe

    Example using the Intel FORTRAN compiler directly:

        ifort -I$MPI_I…
13. […] This is reasonable on our HPC Cluster because not all of our machines support the same instruction set extensions.
    • -fp-model fast=2: enables aggressive optimizations of floating point calculations for execution speed, even those which might decrease accuracy.

    Other options which might be of particular interest to you are:
    • -openmp: turns on OpenMP support. Please refer to section 6.1 on page 76 for information about OpenMP parallelization.
    • -heap-arrays: puts automatic arrays and temporary arrays on the heap instead of the stack. Needed if the maximum stack space (2 GB) is exhausted.
    • -parallel: turns on auto-parallelization. Please refer to section 6.1 on page 76 for information about auto-parallelizing serial code.
    • -vec-report: turns on feedback messages from the vectorizer. If you instruct the compiler to vectorize your code, e.g. by using -axCORE-AVX2,CORE-AVX-I, you can make it print out information about which loops have successfully been vectorized with this flag. Usually, exploiting vector hardware to its fullest requires some code restructuring, which may be guided by proper compiler feedback. To get the most extensive feedback from the vectorizer, please use the option -vec-report3. As the compiler output may become a bit overwhelming in this case, you can instruct the compiler to only report failed attempts to vectorize, and the reasons for the failure, by using -vec-report5.
    • -convert b…
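    A compile line combining these options might look like this (a sketch; the source file names are placeholders, and -heap-arrays applies to the Fortran compiler):

        # optimize, generate AVX2/AVX/SSE code paths and report on vectorized loops
        $CC $FLAGS_FAST -vec-report3 jacobi.c -o jacobi.exe
        # Fortran variant: enable OpenMP and put automatic arrays on the heap
        $FC $FLAGS_FAST $FLAGS_OPENMP -heap-arrays jacobi.f90 -o jacobi_omp.exe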
14. […] 2008. The Intel C/C++ and FORTRAN compilers are integrated into Visual Studio and can be used as well. If you have an existing Visual Studio project and want to use the Intel compiler, the project has to be converted to an Intel project. This can be done by right-clicking the project and selecting the lowest context menu item "Use Intel C++...". To change the project options, e.g. compiler or linker options, open the Project Settings window by right-clicking on the project and selecting the context menu item "Properties". To add additional compiler options, select the compile menu (FORTRAN or C/C++) and add the options under "Command Line"; here all used compiler options are listed. The most common options can also be selected in the rest of the menu. OpenMP support can be enabled in the Project Settings window in the language options tab ("OpenMP Support: Yes (/openmp)"). (Screenshot: the C/C++ Language property page of a Visual Studio 2008 project.) Please note that when using Visual Studio 2008 with your existing projects, these will automatically be converted and cannot be used with Visual Studio 2005 anymore. We str…
15. B.3 Compilation, Modules and Testing. Before you start compiling, you need to make sure that the environment is set up properly. Because of different and even contradicting needs regarding software, we offer the modules system to easily adapt the environment. All the installed software packages are available as modules that can be loaded and unloaded. The modules themselves are put into different categories to help you find the one you are looking for; refer to chapter 4.4.2 on page 34 for more detailed information. Directly after login, some modules are already loaded by default. You can list them with module list. The output of this command looks like this:

        $ module list
        Currently Loaded Modulefiles:
          1) DEVELOP   2) intel/13.1   3) openmpi/1.6.4

    The default modules are in the category DEVELOP, which contains compilers, debuggers, MPI libraries etc. At the moment the Intel FORTRAN/C/C++ compiler and Open MPI are loaded by default. The list of available modules can be printed with module avail. In this case the command prints out the list of available modules in the DEVELOP category (because this category is loaded) and the list of all other available categories. Let's assume that for some reason you'd like to use the GNU compiler instead of the Intel compiler for our C++ OpenMP example. All available GCC versions can be listed with module avail gcc. To use GCC version 4.8, do the following: module switch intel gcc/4.8
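    Putting the steps together, a build-and-run session with GCC could look roughly like this (a sketch; jacobi.c stands for your own source file and the thread count is arbitrary):

        # check what is loaded, then replace the Intel compiler by GCC 4.8
        module list
        module switch intel gcc/4.8
        # the environment variables now point to the GNU compiler
        $CC $FLAGS_FAST $FLAGS_OPENMP jacobi.c -o jacobi.exe
        # run with four OpenMP threads
        export OMP_NUM_THREADS=4
        ./jacobi.exe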
16.     #BSUB -a "bcs openmp"

    To minimize the influence of several jobs on the same node, your job will be bound to the needed number of boards (32 cores each). The binding script will tell you on which boards your job will run; e.g. "Binding BCS job: 0 2" means that your job will run on boards 0 and 2, so that you can use up to 64 threads.
    • For MPI jobs you have to specify

        #BSUB -a "bcs openmpi"
      or
        #BSUB -a "bcs intelmpi"
        module switch openmpi intelmpi

      depending on the MPI you want to use.
    • For hybrid jobs you additionally have to specify the ptile, which tells LSF how many processes you want to start per host. Depending on the MPI you want to use, specify

        #BSUB -a "bcs openmpi openmp"
        #BSUB -n 64
        #BSUB -R "span[ptile=16]"
      or
        #BSUB -a "bcs intelmpi openmp"
        #BSUB -n 64
        #BSUB -R "span[ptile=16]"
        module switch openmpi intelmpi

      This will start a job with 64 MPI processes, 16 processes on each node; the job will thus use 64/16 = 4 BCS nodes in sum. The OMP_NUM_THREADS variable will be set to 128/16 = 8 automatically.

    Note: this way to define hybrid jobs is available on the Big SMP (BCS) systems only; on other nodes use the general procedure (see page 39). Table 4.12 on page 43 gives a brief overview of the BCS nodes. "Max Mem" means the recommended maximum memory per process if you want to use all slots of a machine. It is not possible to use more memory per slot because the operating system and LSF need app…
17. Listing 10: $PSRC/pis/LSF/hybrid_job.sh

        #!/usr/bin/env zsh
        ### Job name
        #BSUB -J Hybrid64_6
        ### File / path where output will be written, the %J is the job id
        #BSUB -o Hybrid64_6.%J
        ### (OFF) Different file for STDERR if not to be merged with STDOUT
        # #BSUB -e Hybrid64_6.e%J
        ### Request the time you need for execution in minutes.
        ### The format for the parameter is: [hour:]minute;
        ### that means for 80 minutes you could also use this: 1:20
        #BSUB -W 1:42
        ### Request virtual memory you need for your job in MB
        #BSUB -M 1024
        ### (OFF) Specify your mail address
        # #BSUB -u user@rwth-aachen.de
        ### Send a mail when job is done
        #BSUB -N
        ### Hybrid job with N MPI processes in groups of M processes per node
        #BSUB -n 64
        #BSUB -R "span[ptile=2]"
        ### Request a certain node type
        #BSUB -m mpi-s
        ### Use nodes exclusively
        #BSUB -x
        ### Each MPI process with T threads
        export OMP_NUM_THREADS=6
        ### Choose an MPI: either Open MPI or Intel MPI
        ### Use esub for Open MPI
        #BSUB -a openmpi
        ### (OFF) Use esub for Intel MPI
        # #BSUB -a intelmpi
        ### Export an environment var
        export A_ENV_VAR=10
        ### Change to the work directory
        cd /home/user/workdirectory
        ### Execute your application
        $MPIEXEC $FLAGS_MPI_BATCH a.out
18. For convenient switching between compilers we added environment variables for the most important compiler flags. These variables can be used to write a generic makefile that compiles with any loadable compiler. The offered variables are listed below; values for the different compilers are listed in tables 5.16 on page 59 and 6.20 on page 77.
    • $FC / $CC / $CXX: a variable containing the appropriate compiler name.
    • $FLAGS_DEBUG: enables debug information.
    • $FLAGS_FAST: includes the options which usually offer good performance. For many compilers this will be the -fast option. But beware of possible incompatibility of binaries, especially with older hardware.
    • $FLAGS_FAST_NO_FPOPT: equal to FAST, but disallows any floating point optimizations which would have an impact on rounding errors.
    • $FLAGS_ARCH32 / $FLAGS_ARCH64: builds 32 or 64 bit executables or libraries.
    • $FLAGS_AUTOPAR: enables auto-parallelization, if supported by the compiler.
    • $FLAGS_OPENMP: enables OpenMP support, if supported by the compiler.
    • $FLAGS_RPATH: contains a set of directories (depending on the loaded modules) to add to the runtime library search path of the binary, with a compiler-specific syntax according to the last loaded compiler, in order to pass these paths to the linker. In order to be a…
    (Footnote: GCC, the GNU Compiler Collection, http://gcc.gnu.org; see chapter 4.4.2 on page 34.)
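    A generic compile-and-link line based on the variables above could look like this (a sketch; solver.c is a placeholder), and it keeps working unchanged after e.g. "module switch intel gcc":

        # compile with good optimization and debug information
        $CC $FLAGS_FAST $FLAGS_DEBUG -c solver.c -o solver.o
        # link, embedding the module-dependent runtime library search path
        $CC $FLAGS_FAST $FLAGS_RPATH solver.o -o solver.exe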
19. Intel: KMP_AFFINITY; Oracle: SUNW_MP_PROCBIND; GNU: GOMP_CPU_AFFINITY; PGI: MP_BLIST. (Table 4.13: Pinning — vendor-specific environment variables)

    In case of the Intel compiler this could look like this: export KMP_AFFINITY=scatter. For bug questions please contact the service desk: servicedesk@rz.rwth-aachen.de.

    ScaleMP system. The ScaleMP machine (see chapter 2.3.8 on page 16) is not running in normal production mode; it belongs to the innovative-computer-architectures part of the cluster. This means that we cannot guarantee full stability and service quality. Of course we do our best to provide a stable system, but longer maintenance slots might be necessary or job failures might occur. To get access to this system your account needs to be activated. If you are interested in using this machine, please write a mail to servicedesk@rz.rwth-aachen.de with your user ID and let us know that you want to use the ScaleMP system. To submit shared memory jobs to the ScaleMP machine use

        #BSUB -a "scalemp openmp"

    MPI jobs are not supported on this system. To minimize interference between different jobs running simultaneously, we bind jobs to a subset of the 16 available boards. A job asking for 96 cores, for example, will be bound to three boards, and no other job will run on these boards. This minimizes the interference of simultaneous jobs, but it does not completely…
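    Analogous pinning settings for the other compilers listed in table 4.13 might look like this (a sketch; the core lists are examples only and must match the slots your job actually owns):

        # Intel: spread the threads over the machine
        export KMP_AFFINITY=scatter
        # Oracle: bind threads to an explicit list of logical processors
        export SUNW_MP_PROCBIND="0 2 4 6"
        # GNU: pin threads to an explicit list of logical CPUs
        export GOMP_CPU_AFFINITY="0 2 4 6"
        # PGI: bind threads according to MP_BLIST
        export MP_BLIST=0,2,4,6
        ./a.out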
20. Listing 11: $PSRC/pis/LSF/non-mpi_job.sh

        #!/usr/bin/env zsh
        ### Job name
        #BSUB -J Non-MPI6
        ### Request the time you need for execution in minutes.
        ### The format for the parameter is: [hour:]minute;
        ### that means for 80 minutes you could also use this: 1:20
        #BSUB -W 1:42
        ### Request virtual memory you need for your job in MB
        #BSUB -M 1024
        ### (OFF) Specify your mail address
        # #BSUB -u user@rwth-aachen.de
        ### Send a mail when job is done
        #BSUB -N
        ### Request the number of compute slots you want to use,
        ### here distributed in chunks (1*2 threads for the master process,
        ### 2*4 threads for two slaves)
        #BSUB -n 6
        #BSUB -R "span[ptile=2]"
        ### echo the envvars containing info on how the slots are distributed
        echo "### LSB_HOSTS #########################"
        echo $LSB_HOSTS
        echo "### LSB_MCPU_HOSTS ####################"
        echo $LSB_MCPU_HOSTS
        echo "### LSB_DJOB_HOSTFILE #################"
        echo $LSB_DJOB_HOSTFILE
        cat $LSB_DJOB_HOSTFILE
        echo "### LSB_DJOB_NUMPROC ##################"
        echo $LSB_DJOB_NUMPROC
        echo $R_DELIMITER
        ### script at your own risk
        ### get hostnames of master node and slave nodes from above variables
        master=$(hostname)    # strip the do…
21. […] MPI applications, e.g. mprun or mpiexec.
    • $MPIFC, $MPICC, $MPICXX: compiler driver for the last loaded compiler module, which automatically sets the include path and also links the MPI library automatically.
    • $FLAGS_MPI_BATCH: options necessary for executing in batch mode.

    This example shows how to use the variables ($PSRC/pex/620):

        $MPICXX -I$PSRC/cpmp $PSRC/cpmp/pi.cpp -o a.out
        $MPIEXEC -np 2 a.out

    6.2.1 Interactive mpiexec wrapper (Lin). On Linux we offer dedicated machines for interactive MPI tests. These machines will be used automatically by our interactive mpiexec and mpirun wrapper. The goal is to avoid overloading the frontend machines with MPI tests and to enable larger MPI tests with more processes. The interactive wrapper works transparently, so you can start your MPI programs with the usual MPI options. In order to make sure that MPI programs do not hinder each other, the wrapper checks the load on the available machines and chooses the least loaded ones. The chosen machines will get one MPI process per available processor. However, this default setting may not work for jobs that need more memory per process than is available per core; such jobs have to be spread over more machines. Therefore we added the -m <processes per node> option, which determines how many processes should be started per node. You can get a list of the mpiexec wrapper options with mpiexec -help, which will print the list o…
22. On Windows the Intel compilers can be used either in the Visual Studio environment or on the Cygwin command line.

    5.5.1 Frequently Used Compiler Options. Compute-intensive programs should be compiled and linked with the optimization options which are contained in the environment variable $FLAGS_FAST. For the Intel compiler, $FLAGS_FAST currently evaluates to

        $ echo $FLAGS_FAST
        -O3 -ip -axCORE-AVX2,CORE-AVX-I -fp-model fast=2

    These flags have the following meaning:
    • -O3: turns on aggressive general compiler optimization techniques. Compared to the less aggressive variants -O2 and -O1, this option may result in longer compilation times but generally faster execution. It is especially recommended for code that processes large amounts of data and does a lot of floating point calculations.
    • -ip: enables additional interprocedural optimizations for single file compilation.
    • -axCORE-AVX2,CORE-AVX-I: turns on the automatic vectorizer of the compiler and enables code generation for processors which employ the vector operations contained in the AVX2, AVX, SSE4.2, SSE4.1, SSE3, SSE2, SSE, SSSE3 and RDRND instruction set extensions. Compared to the similar option -xCORE-AVX2, this variant also generates machine code which does not use the vector instruction set extensions, so that the executable can also be run on processors without these enhancements.
23. […]
        ### Now specify the type of Phi job: leo -> OFFLOAD job
        ###   leo=<a>:<b>
        ###   a: number of MICs
        ###   b: number of threads on the MICs
        ###   example: leo=1:120
        #BSUB -Jd "leo=1:120"
        ### Export an environment var
        export A_ENV_VAR=10
        ### Change to the work directory
        cd /home/user/workdirectory
        ### Execute your offload application
        …

    (Footnotes: "consists of all host threads/processes without those on the MIC";
     https://wiki2.rz.rwth-aachen.de/download/attachments/3801235/phi_native.sh.txt;
     https://wiki2.rz.rwth-aachen.de/download/attachments/3801235/phi_mpi.sh.txt)

    Listing 2: $PSRC/pis/LSF/phi_native.sh

        #!/usr/bin/env zsh
        ### Job name
        #BSUB -J PHI_NATIVE_JOB
        ### File / path where STDOUT will be written, the %J is the job id
        #BSUB -o PHI_NATIVE_JOB.%J
        ### (OFF) Different file for STDERR if not to be merged with STDOUT
        # #BSUB -e PHI_NATIVE_JOB.e%J
        ### Request the time you need for execution in minutes.
        ### The format for the parameter is: [hour:]minute;
        ### that means for 80 minutes you could also use this: 1:20
        #BSUB -W 80
        ### Request virtual memory you need for your job in MB
        #BSUB -M 1024
        ### (OFF) Specify your mail address
        # #BSUB -u user@rwth-aachen.de
        ### Send a mail when job is done
        #BSUB -N
        ### Request the number of com…
24. […]RWTH-Aachen.DE and start VTune Amplifier from the Start menu or Desktop. To analyze your code, click File -> New -> Project. Then choose a project name, a directory to store the results, and specify your application and its parameters. After creating the project you can use the run button: select an analysis type and press "Start" to collect experiment data. For details on how to use VTune Amplifier please contact the HPC Group or attend one of our regular workshops.

    8.2.2 Intel Trace Analyzer and Collector (ITAC). The Intel Trace Collector (ITC) is primarily designed to investigate MPI applications. The Intel Trace Analyzer (ITA) is a graphical tool that analyzes and displays the trace files generated by the ITC. Both ITC and ITA are quite similar to Vampir (see 8.3 on page 99). The tools help to understand the behavior of the application and to detect inefficient communication and performance problems. Please note that these tools are designed to be used with the Intel or GNU compilers in conjunction with Intel MPI. On Linux, initialize the environment with module load intelitac.

    (Footnotes: do not forget to activate X forwarding, see chapter 4.1.1 on page 27; see chapter 4.1.3 on page 28.)

    Profiling of dynamically linked binaries without recompilation. This mode is applicable to programs which use Intel MPI. In this mode only MPI calls will be traced, which often is sufficient for g…
25. […] SMP systems actually consist of four separate boards connected together using the proprietary Bull Coherent Switch (BCS) technology (see chapter 2.3.7 on page 16). Because these systems are special, you have to request them explicitly, and you are not allowed to run serial or small OpenMP jobs there. We decided to schedule jobs only in the granularity of a board (32 cores) as the smallest unit. This means that you should only submit jobs with a size of 32, 64, 96 or 128 threads. For MPI jobs the nodes will always be reserved exclusively, so you should use a multiple of 128 MPI processes (e.g. 128, 256, 384) to avoid wasting resources. Please note that the binding of MPI processes and threads is very important for performance. For easy, vendor-independent MPI binding you can use our mpi_bind script (see chapter 4.5.1 on page 43). In order to submit a job to the BCS queue you have to specify #BSUB -a bcs in your batch script, in addition to the -n parameter for the number of threads or processes.

    (Footnotes: http://www1.rz.rwth-aachen.de/manuals/LSF/8.0/lsf_admin/ — job array creation and job dependency; https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/PlatformLSF — Chain Jobs.)
26. They can add colleagues and co-workers that already have an account on the RWTH Compute Cluster via

        member -g jara<num> add <user>

    where <user> stands for the username of the person to be added. Please note: it may take up to six hours for all changes to propagate in the system. Directories named /home/jara<num>, /work/jara<num> and /hpcwork/jara<num> have been created for your project, and every member of the group has full read and write access to them. In order to submit to your JARA-HPC contingent you have to supply the -P jara<num> option. We advise that you use batch scripts, in which you can use the #BSUB sentinel to specify job requirements, and in particular #BSUB -P jara<num> to select your contingent. Software which should be available to the project group members should be installed in the home directory of the project, with privileges set accordingly for the group.

    4.6.2 Resources (core-hour quota)

    4.6.2.1 What is a core hour? Usage of the RWTH Compute Cluster's resources is measured in core hours. One core hour equals one CPU core being used for the duration of one hour of execution time. The latter is always measured by the wall clock from the job start to the job finish time, not by the actual CPU time. Also note that jobs in the JARA-HPC queue use compute nodes exclusively; hence usage is always equal to the number of CPU cores on the node times the execution time, regardless of the actual…
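    As an illustration of the submission side, a minimal JARA-HPC job header might look like this (a sketch; jara0042, the user ID ab123456 and the resource limits are placeholders):

        #!/usr/bin/env zsh
        #BSUB -J jara_test
        #BSUB -P jara0042      # charge the job to the JARA-HPC contingent
        #BSUB -W 2:00
        #BSUB -M 1024
        ### add a colleague to the project group beforehand (run once, interactively):
        ###   member -g jara0042 add ab123456
        a.out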
27.     Unloading openmpi 1.6.4
        Unloading Intel Suite 13.1.1.163
        Loading gcc 4.8.0
        Loading openmpi 1.6.4 for gcc compiler

    Please observe how Open MPI is first unloaded, then loaded again. In fact, the loaded version of Open MPI is different from the unloaded one: the loaded version is suitable for being used together with the GNU compiler, whereas the unloaded one is built to be used with the Intel compiler. The module system takes care of such dependencies. Of course you can also load an additional module instead of replacing an already loaded one; for example, if you want to use a debugger, you can do a module load totalview. In order to make the usage of different compilers easier, and to be able to compile with the same command, several environment variables are set; you can look up the list of variables in chapter 5.2 on page 58. Usually, though, we recommend using the Intel, PGI or Oracle compilers for production because they offer better performance in most cases.

    Often more than one step is needed to build a program. The make tool offers a nice way to define these steps in a Makefile. We offer such Makefiles for the examples, and they use the environment variables; therefore, when starting gmake, the example will be built and executed according to the specified rules. Have a look at the Makefile if you are interested i…
28. […] or, with the analyzer GUI, select "Collect Experiment" in the File menu. By default, profile data will be gathered every 10 milliseconds and written to the experiment file test.1.er; the file name number is automatically incremented on subsequent experiments. In fact, the experiment file is an entire directory with a lot of information. You can manipulate these with the regular Linux commands, but it is recommended to use the er_mv, er_rm and er_cp utilities to move, remove or copy these directories; this ensures, for example, that time stamps are preserved. The -g experiment_group.erg option bundles experiments into an experiment group. The result of an experiment group can be displayed with the Analyzer (see below): analyzer experiment_group. By selecting the options of the collect command, many different kinds of performance data can be gathered. Just invoking collect -h will print a complete list, including the available hardware counters. The most important collect options are listed in table 8.25 on page 94. Various hardware counter event types can be chosen for collecting. The maximum number of theoretically simultaneously usable counters on the available hardware platforms ranges between 4 (AMD Barcelona) and 7 (Intel Nehalem). However, it is hardly possible to use more than 4 counters in the same measurement, because some counters use the same resources and thus conflict with each other.
29. […] and selecting "Dive in New Window".

    A.2.2.2 Debugging of large jobs. Each MPI process consumes a TotalView license token. Since RWTH has only 50 licenses, the number of debuggable processes is limited to this number. The best way to debug an MPI application is to use a small number of processes, ideally only one or two: the debug session is neat, the communication pattern is simple, and you save license tokens. If debugging with a small number of processes is impossible, e.g. because the error you are searching for occurs only in a large job, you can attach to a subset of the whole job. In File -> Preferences, Parallel pane, set "When a job goes parallel or calls exec()" to "Ask what to do" instead of "Attach to all". (Screenshot: the TotalView Preferences dialog, Parallel tab, with the choices "Attach to all", "Attach Subset", "Ask what to do" and "Stop the group" / "Run the group".) The next time a parallel job is started, an "Attach Subset" dialog box turns up; choose a subset of processes in the menu. The program will start with the requested number of processes, whereas the TotalView debugger con…
30. […] cache and TLB misses. This especially affects multi-dimensional arrays and structures; in particular, note the difference between FORTRAN and C/C++ in the arrangement of arrays. Tools like Intel VTune Amplifier (chapter 8.2.1 on page 98) or the Oracle Sampling Collector and Performance Analyzer (chapters 8.1.1 on page 93 and 8.1.3 on page 96) may help to identify such problems easily.
    • Use a profiling tool (see chapter 8 on page 93) like the Oracle/Sun Collector and Analyzer, Intel VTune Amplifier or gprof to find the computationally intensive or time-consuming parts of your program, because these are the parts where you want to start optimization.
    • Use optimized libraries, e.g. the Intel MKL, the Oracle/Sun Performance Library or the ACML library (see chapter 9 on page 105).
    • Consider parallelization to reduce the runtime of your program.

    5.4 Endianness. In contrast to e.g. the UltraSPARC architecture, the x86 (AMD and Intel) processors store the least significant bytes of a native data type first (little endian). Therefore care has to be taken if binary data has to be exchanged between machines using big endian (like the UltraSPARC-based machines) and the x86-based machines. Typically, FORTRAN compilers offer options or runtime parameters to write and read files in a different byte ordering. For programming languages other than FORTRAN, the programmer has to take care of s…
31. […] dive on its name in the Stack Trace Pane first. Select Action Points -> At Location and enter the function's name.

    A.1.5 Starting, Stopping and Restarting your Program. Start your program by selecting "Go" on the icon bar and stop it by selecting "Halt". Set a breakpoint and select "Go" to run the program until it reaches the line containing the breakpoint. Select a program line and click on "Run To" on the icon bar. Step through a program line by line with the "Step" and "Next" commands; "Step" steps into and "Next" jumps over function calls. Leave the current function with the "Out" command. To restart a program, select "Restart".

    A.1.6 Printing a Variable.
    • The values of simple actual variables are displayed in the Stack Frame Pane of the Process Window. You may use the View -> Lookup Variable command.
    • When you dive (middle-click) on a variable, a separate Variable Window is opened.
    • You can change the variable type in the Variable Window (type casting).
    • If you are displaying an array, the "Slice" and "Filter" fields let you select which subset of the array will be shown (examples: Slice (3:5,1:10:2), Filter > 30).
    • One- and two-dimensional arrays or array slices can be graphically displayed by selecting Tools -> Visualize in the Variable Window.
32. […] eliminate interference. So if you do benchmarking on this machine, you should always reserve the complete machine.

    Example Scripts. Below you can find some general example scripts for LSF. Some application-specific (e.g. Gaussian) examples can be found in the Wiki (https://wiki2.rz.rwth-aachen.de/display/bedoku/Installed+Software). Note: we do not recommend copying the scripts from this PDF file by Ctrl-C/Ctrl-V; instead, use the scripts from the $PSRC/pis/LSF directory or download them from the Wiki.
    • Serial job: listing 5 on page 45, or https://wiki2.rz.rwth-aachen.de/download/attachments/458782/serial_job.sh.txt
    • Array job: listing 6 on page 46, or https://wiki2.rz.rwth-aachen.de/download/attachments/458782/array_job.sh.txt
    • Shared memory (OpenMP-parallelized) job: listing 7 on page 47, or https://wiki2.rz.rwth-aachen.de/download/attachments/458782/omp_job.sh.txt
    • MPI jobs:
      - Open MPI example: listing 8 on page 48, or https://wiki2.rz.rwth-aachen.de/download/attachments/458782/openmpi_job.sh.txt
      - Intel MPI example: listing 9 on page 49, or https://wiki2.rz.rwth-aachen.de/download/attachments/458782/intelmpi_job.sh.txt
      - Hybrid example: listing 10 on page 50, or https://wiki2.rz.rwth-aachen.de/download/attachments/458782/hybrid_job.sh.txt
    • Non-MPI job over multiple nodes: listing 11 on page 51, or in the Wiki.
33. […] for FORTRAN, C and C++, and consists of compiler directives (resp. pragmas), runtime routines and environment variables. In the parallel regions of a program several threads are started. They execute the contained program segment redundantly until they hit a worksharing construct. Within this construct, the contained work (usually do or for loops, or task constructs since OpenMP v3.0) is distributed among the threads. Under normal conditions all threads have access to all data (shared data). But pay attention: if data which is accessed by several threads is modified, then the access to this data must be protected with critical regions or OpenMP locks. Besides, private data areas can be used where the individual threads hold their local data. Such private data (in OpenMP terminology) is only visible to the thread owning it; other threads will not be able to read or write private data.

    Hint: in a loop that is to be parallelized, the results must not depend on the order of the loop iterations. Try to run the loop backwards in serial mode; the results should be the same. This is a necessary, though not sufficient, condition to parallelize a loop correctly.

    Note: for cases in which the stack area for the worker threads has to be increased, OpenMP 3.0 introduced the OMP_STACKSIZE environment variable. Appending a lower-case "m" denotes the size to be interpreted in MB. The shell builtins ulimit -s xxx (zsh shell, specification in kilobytes) or limit -s xxx (C she…
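    For example, enlarging the thread stacks before a run could be done like this (a sketch; the sizes and thread count are arbitrary examples):

        # give each OpenMP worker thread a 256 MB stack (OpenMP 3.0)
        export OMP_STACKSIZE=256m
        # the initial thread uses the shell stack limit instead (zsh: value in KB)
        ulimit -s 1048576
        export OMP_NUM_THREADS=8
        ./a.out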
34. 8.5 Runtime Analysis with gprof (Lin) . . . 103
    9 Application Software and Program Libraries . . . 105
    9.1 Application Software . . . 105
    9.2 BLAS, LAPACK, BLACS, ScaLAPACK, FFT and other libraries . . . 105
    9.3 MKL — Intel Math Kernel Library . . . 105
    9.3.1 Intel MKL … . . . 106
    9.3.3 Intel MKL (Win) . . . 106
    9.4 The Oracle/Sun Performance Library (Lin) . . . 106
    9.5 ACML — AMD Core Math Library (Lin) . . . 107
    9.6 NAG Numerical Libraries (Lin) . . . 107
    9.7 TBB — Intel Threading Building Blocks (Lin, Win) . . . 108
    9.8 … . . . 109
    9.8.1 … . . . 109
    9.8.2 Processor Binding . . . 109
    9.8.3 Memory Migration . . . 110
    9.8.4 Other Functions . . . 110
    9.9 … . . . 110
    9.10 … . . . 110
    10 Miscellaneous . . . 112
    10.1 Useful Commands (Lin) . . . 112
    10.2 Useful Commands (Win) . . . 112
    A Debugging with TotalView — Quick Reference Guide (Lin) . . . 113
    A.1 Debugging Serial Programs . . . 113
    A.1.1 Some General Hints for Using TotalView . . . 113
    A.1.2 Compiling and Linking . . . 113
    A.1.3 Starting TotalView . . . 113
    A.1.4 Setting a Breakpoint . . . 114
    A.1.5 Sta…
35. libfcollector has to be linked. If this program is started by collect -S off a.out, performance data is only collected between the collector_resume and the collector_terminate_expt calls. No periodic sampling is done, but single samples are recorded whenever collector_sample is called. When the experiment file is evaluated, the filter mechanism can be used to restrict the displayed data to the interesting program parts. The timeline display includes the names of the samples for better orientation. Please refer to the libcollector manual page for further information.

    Listing 18: f90 $PSRC/pis/collector.f90; a.out

        program testCollector
          double precision x
          call collector_pause()
          call PreProc(x)
          call collector_resume()
          call collector_sample("Work1")
          call Work1(x)
          call collector_sample("Work2")
          call Work2(x)
          call collector_terminate_expt()
          call PostProc(x)
        end program testCollector

    8.2 Intel Performance Analyze Tools (Lin, Win). The Intel Corporation offers a variety of goods in the software branch, including many very useful tools, compilers and libraries. However, due to an agile marketing division, you can never be sure what the name of a particular product is today and what it will be the day after tomorrow. We try to keep up with this evolution, but don't panic if you see some outdated and/or shortened names. The Intel Studio pr…
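    Going back to the libcollector example above, building and running it under collect might look like this (a sketch; the explicit -lfcollector link step follows the description above and the experiment name is the default):

        # build the Fortran example and link the collector API library
        $FC -g $PSRC/pis/collector.f90 -lfcollector
        # record samples only at the collector_sample() calls, no periodic sampling
        collect -S off ./a.out
        analyzer test.1.er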
36. […] my_Host if line 10 in mpihelloworld.f90 is hit ($PSRC/pex/a17):

        $MPIFC -g $PSRC/psr/mpihelloworld.f90
        tvscript -mpi "Open MPI" -np 2 -starter_args "$FLAGS_MPI_BATCH" \
                 -create_actionpoint "mpihelloworld.f90#10 => print my_MPI_Rank, print my_Host" a.out

    A.2 Debugging Parallel Programs

    A.2.1 Some General Hints for Parallel Debugging.
    • Get familiar with using TotalView by debugging a serial toy program first.
    • If possible, make sure that your serial program runs fine first.
    • Debugging a parallel program is not always easy. Use as few MPI processes / OpenMP threads as possible. Can you reproduce your problem with only one or two processes/threads?
    • Many typical multithreaded errors may not (or not comfortably) be found with a debugger, for example race conditions. Use threading tools instead; refer to chapter 7.4 on page 91.

    A.2.2 Debugging MPI Programs. More hints on debugging MPI programs can be found in the TotalView "Setting Up MPI Programs" guide. The presentation of Ed Hinkel at the ScicomP 14 meeting is interesting in the context of large jobs.

    A.2.2.1 Starting TotalView. There are two ways to start debugging MPI programs: New Launch and Classic Launch. The New Launch is the easy and intuitive way to start a debugging session; its disadvantage is the inability to detach from and reattach to running processes. Start TotalView as for serial debugging and use the Parallel pane in the Startup Parameters w…
37. […] number of node slots allocated to the job. For jobs submitted to the BCS partition this would amount to 128 core hours per one hour of run time for each BCS node used by the job.

    4.6.2.2 Usage model. Accounting is implemented as a three-month-wide sliding window. Each month your project is granted a monthly quota of MQ core hours. Unused quota from the previous month, up to your monthly allowance, is transferred automatically to the current one. Because of the limit on the amount of quota transferred, it is not possible to save compute time and accumulate it for later usage. It is also possible to borrow compute time from the next month's allowance, which results in a negative quota allowance being transferred to the next month. Transfer and borrow occur only if the respective month is within the accounting period. The core-hour quota available in the current month is computed as follows:
    1. The monthly allowances for the previous, the current and the next month are added.
    2. The consumed core hours for the previous and for the current month are added.
    3. The difference between both values is the amount of core hours available in the current month.
    Once the quota has been fully consumed, all new and pending jobs will only get dispatched if there are no jobs from other projects with unused CPU quota pending (a low-priority mode). Jobs that run in low-priority mode are still count…
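    As an illustration with made-up numbers: assume a monthly allowance of MQ = 100,000 core-h, 60,000 core-h consumed last month and 30,000 core-h consumed so far this month. Following the three steps above,

        available = (100,000 + 100,000 + 100,000) - (60,000 + 30,000) = 210,000 core-h

    so roughly 210,000 core-h could still be dispatched at normal priority during the current month.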
38. […] order to try to improve the load balance among all CPUs of a single node. The higher the system load, the higher the probability of processes or threads moving around. In an optimal case this should not happen, because according to our batch job scheduling strategy the batch job scheduler takes care not to overload the nodes. Nevertheless, operating systems sometimes do not schedule processors in an optimal manner for HPC applications. This may decrease performance considerably, because cache contents may be lost and pages may reside on a remote memory location where they have first been touched. This is particularly disadvantageous on NUMA systems, because it is very likely that after several movements many of the data accesses will be remote, thus incurring higher latency.

    Processor binding means that a user explicitly forces processes or threads to run on certain processor cores, thus preventing the OS scheduler from moving them around. On Linux you can restrict the set of processors on which the operating system scheduler may run a certain process; in other words, the process is bound to those processors. This property is called the CPU affinity of a process. The command taskset allows you to specify the CPU affinity of a process prior to its launch, and also to change the CPU affinity of a running process. You can get the list of available processors on a system by entering cat /proc/cpuinfo. The following examples show the usage of taskset.
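    As a sketch (the core numbers and the PID 4711 are placeholders), typical taskset calls look like this:

        # start a program bound to logical CPUs 0 and 2
        taskset -c 0,2 ./a.out
        # show the current CPU affinity mask of a running process
        taskset -p 4711
        # change the affinity of the running process to CPUs 4-7
        taskset -p -c 4-7 4711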
39. […] other. Favorite choices are given in table 8.26 on page 95 for Barcelona CPUs, in table 8.27 on page 95 for Harpertown, Tigerton and Dunnington CPUs, and in table 8.28 on page 96 for Nehalem and Westmere CPUs.

    Table 8.25: Collect options
    -p on|off|hi|lo : clock profiling ("hi" needs to be supported on the system)
    -H on|off : heap tracing
    -m on|off : MPI tracing
    -h counter0,on[,...] : hardware counters
    -j on|off : Java profiling
    -S on|off|<seconds> : periodic sampling (default interval: 1 sec)
    -o experimentfile : output file
    -d directory : output directory
    -g experimentgroup : output file group
    -L size : output file size limit (MB)
    -F on|off : follow descendant processes
    -C comment : puts comments in the notes file for the experiment

    This example counts the floating point operations on different units, in addition to clock profiling, on Nehalem processors ($PSRC/pex/811):

        collect -p on -h cycles,on,fp_comp_ops_exe.x87,on,fp_comp_ops_exe.mmx,on,fp_comp_ops_exe.sse_fp a.out

    8.1.2 Sampling of MPI Programs. Sampling of MPI programs is something for toughies because of the additional complexity dimension. Nevertheless it is possible with collect, in at least two ways.

    Wrap the MPI binary. Use collect to measure each MPI process individually:

        $MPIEXEC <opt> collect <opt> a.out <opt>

    This technique is no longer supported to coll…
40. … . . . 87
    7 Debugging . . . 88
    7.1 Static Program Analysis . . . 88
    7.2 Dynamic Program Analysis . . . 89
    7.3 Debuggers . . . 90
    7.3.1 TotalView . . . 90
    7.3.2 Oracle Solaris Studio (Lin) . . . 90
    7.3.3 … (Win) . . . 91
    7.3.4 … (Lin) . . . 91
    7.3.5 … (Lin) . . . 91
    7.4 Runtime Analysis of OpenMP Programs . . . 91
    7.4.1 Oracle's Thread Analyzer (Lin) . . . 91
    7.4.2 Intel Inspector (Lin, Win) . . . 92
    8 Performance / Runtime Analysis Tools . . . 93
    8.1 Oracle Sampling Collector and Performance Analyzer (Lin) . . . 93
    8.1.1 The Oracle Sampling Collector . . . 93
    8.1.2 Sampling of MPI Programs . . . 94
    8.1.3 The Oracle Performance Analyzer . . . 96
    8.1.4 The Performance Tools Collector Library API . . . 96
    8.2 Intel Performance Analyze Tools (Lin, Win) . . . 97
    8.2.1 Intel VTune Amplifier . . . 98
    8.2.2 Intel Trace Analyzer and Collector (ITAC) . . . 98
    8.3 Vampir . . . 99
    8.4 Scalasca (Lin) . . . 102
    8.5 Runtime Analysis with…
41. […] scripted in a file, say simplejob.sh, in which the options of bsub are saved with the magic cookie #BSUB:

        #!/usr/bin/env zsh
        #BSUB -J TEST
        #BSUB -o output.txt
        #BSUB -n 2
        #BSUB -R "span[hosts=1]"
        #BSUB -W 15
        #BSUB -M 700
        #BSUB -a openmp
        #BSUB -u <your_email_address>
        #BSUB -N
        module switch intel gcc/4.6
        jacobi.exe < input

    To submit a job use: bsub < simplejob.sh
    Please note the "<" in the command line. It is very important to pipe the script into the bsub executable, because otherwise none of the options specified with the magic cookie will be interpreted. You can also mix both ways to define options; the options set on the command line are preferred. (Footnote: this is not the recommended way to submit jobs; however, you do not need a job script here.) You can find several example scripts in chapter 4.5 on page 35; the used options are explained there as well.

    Index (excerpt): analyzer 96; bash 34; batch system 35; boost 110; c89 65; cache 13, 15; CC 58, 65; cc 65; collect 93; CPI 95; csh 33, 34; CXX 58; data race 91; DTLB 95; endian 61; example 9; export 33; f90 65; f95 65; FC 58; flags 58 (arch32 58; arch64 58; autopar 59, 77; debug 58; fast 58; fast_no_fpopt 58; mpi_batch 83; openmp 59, 77); FLOPS 95, 96; g++ 69; g77 69; gcc 69; gdb 91; gfortran 69; gprof 103; guided 79; hardware overview 13; HDF…
42. […] see to which options a macro expands, use the -v option of the Oracle (formerly Sun) compilers. (Footnote: currently, on Linux, the environment variables $FLAGS_FAST and $FLAGS_FAST_NO_FPOPT contain flags which optimize for the Intel Nehalem CPUs. On older chips there may be errors with such optimized binaries due to the lack of SSE4 units. Please read the compiler man page carefully to find out the best optimization flag for the chips you want your application to run on.) On our Nehalem machines this looks like:

        CC -v -fast $PSRC/cpsp/pi.cpp -c
        ### command line files and options (expanded):
        ### -v -xO5 -xarch=sse4_2 -xcache=32/64/8:256/64/8:8192/64/16 -xchip=nehalem
        ### -xdepend=yes -fsimple=2 -fns=yes -ftrap=%none -xlibmil -xlibmopt -xbuiltin=%all
        ### -D__MATHERR_ERRNO_DONTCARE -nofstore -xregs=frameptr -Qoption CC -iropt
        ### -Qoption CC -xcallee64 /rwthfs/rz/SW/HPC/examples/cpsp/pi.cpp -c -Qoption ube -xcallee=yes

    The compilers on x86 do not use automatic prefetching by default; turning prefetching on with the -xprefetch option might offer better performance. Some options you might want to read up on are -xalias_level, -xvector, -xspfconst and -xprefetch. These options only offer better performance in some cases and are therefore not included in the -fast macro. Note: high optimization can have an influence on floating point results due to different rounding errors. To keep the order of the arithmetic operations…
43. selecting Set Barrier in the pull down menu A 2 3 5 Starting Stopping and Restarting your Program You can perform stop start step and examine single threads or the whole process group Choose Group default or Process or Thread in the first pull down menu of the toolbar A 2 3 6 Printing a Variable You can examine the values of variables of all threads by selecting View Show Across Threads in a variable window or alternatively by right clicking on a variable and selecting Across Threads The values of the variable will be shown in the array form and can be graphically visualized One dimensional arrays or array slices can be also shown across threads The thread ID is interpreted as an additional dimension 120 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 B Beginner s Introduction to the Linux HPC Cluster This chapter contains a short tutorial for new users about how to use the RWTH Aachen Linux HPC Cluster It will be explained how to set up the environment correctly in order to build a simple example program Hopefully this can easily be adapted to your own code In order to get more information on the steps performed you need to read the referenced chapters The first step you need to perform is to log in to the HPC Cluster B 1 Login You have to use the secure shell protocol ssh to log in Therefore it might be necessary to install an ssh client on your local machine If you are running Windows pleas
44. so that data races can be detected at runtime The Thread Analyzer also supports nested OpenMP programs Make sure you have the version 12 or higher of the studio module loaded to set up the environment Add the option xinstrument datarace to your compiler command line Since additional functionality for thread checking is added the executable will run slower and need more memory Run the program under the control of the collect command SPSRC pex 740 CC FLAGS_OPENMP xinstrument datarace PSRC C omp pi pi c 82 more details are given in the analyzer section 8 1 on page 93 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 91 1m FLAGS_DEBUG PSRC pex 740 collect r on a out You have to use more than one thread while executing since only occurring data races are reported The results can be viewed with tha which contains a subset of the analyzer functionality or the analyzer PSRC pex 740 tha tha 1 er 7 4 2 Intel Inspector Lin Win The Intel Inspector tool is an easy to use thread and memory debugger for serial and parallel applications and is able to verify the correctness of multithreaded programs It is bundled into the Intel Parallel Studio and provides a graphical and also command line interfaces GUI and CLI for Linux and Windows On Linux you can run it by module load intelixe inspxe gui To get a touch of how to use the command line interface type inspxe cl help On Windows
45. specify exactly which processors to attach your process to For example if you have a quad socket dual core system 8 CPUs you can set the blist so that the processes are interleaved across the 4 sockets MP_BLIST 2 4 6 0 1 3 5 7 or bound to a particular MP_ BLIST 6 7 6 1 6 2 Autoparallelization Just like the Intel and Oracle compilers the PGI compilers are able to parallelize certain loops automatically This feature can be turned on with the option Mconcur option option which must be supplied at compile and link time Some options of the Mconcur parameter are e bind Binds threads to cores or processors e levels n Parallelizes loops nested at most n levels deep the default is 3 e numa nonuma Uses doesn t use thread processor affinity for NUMA architectures Mconcur numa will link in a numa library and objects to prevent the operating system from migrating threads from one processor to another Compiler feedback about autoparallelization is enabled with Minfo The number of threads started at runtime may be specified via OMP_ NUM_ THREADS or NCPUS When the option Minline is supplied the compiler tries to inline functions so even loops with function calls may be successfully parallelized automatically 6 2 Message Passing with MPI MPI Message Passing Interface is the de facto standard for parallelization on distributed memory parallel systems Multiple processes explicitly exchange data and coordinate thei
46. the load balancing The value for each host defines the number of processes on this host NOT the compute slots 16 processes on the host and 10 processes spanning both coprocessors MPIEXEC H cluster phi 16 cluster phi mic0 10 cluster phi mic1 10 lt exec gt 2 5 3 4 Batch Mode For job submission you can use the bsub command bsub options command arguments We advise to use a batch script within you can use the magic cookie BSUB to specify the job requirements bsub lt jobscript sh Please note that the coprocessor s will be rebooted for every batch job so that it can take some time until your application will start and you can see some output using bpeek For general details on job submission refer to chapter 4 5 1 on page 35 To submit a job for the Intel Xeon Phis you have to add BSUB a phi to your submission script Furthermore you have to specify a special job description to deter mine the job type offload LEO native or MPI job e For Language Extension for Offload LEO set BSUB Jd leo a b where ais the number of MICS b is the number of threads on the MICs e For native job use BSUB Jd native e For MPI specify BSUB Jd hosts a b mics c d where a is the number of hosts b is a comma separated list of MPI processes on the hosts cis the number of MICs d is a comma separated list of MPI processes on the MICs 2 5 3 5 Example Scripts Below you can find some genera
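The example scripts themselves are kept in the wiki referenced above; purely as an illustration of the options just described, a minimal offload (LEO) job script could look like the following sketch. Job name, limits, the working directory and the values in the -Jd string are placeholders, and the exact format of the -Jd description should be taken from the explanation above or from the wiki page.

#!/usr/bin/env zsh
#BSUB -J PHI_LEO_TEST          # job name
#BSUB -o phi_leo.%J.txt        # job output file
#BSUB -W 1:00                  # run time limit (hour:minute)
#BSUB -M 1024                  # memory limit per process in MB
#BSUB -a phi                   # submit to the Intel Xeon Phi nodes
#BSUB -Jd "leo 1 120"          # offload (LEO) job: 1 MIC, 120 threads on the MIC

cd $HOME/phi_workdir           # placeholder working directory
./a.out                        # offload-enabled executable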
47. the other half with the wrong options If the program then fails you will know which part causes the problem Likewise if the program runs fine afterwards repeat the process for the part of the program causing the failure 7 2 Dynamic Program Analysis Many compilers offer options to perform runtime checks of the generated program e g array bound checks or checks for uninitialized variables Please study the compiler documentation and look for compiler options which enable additional runtime checks Please note that such checks usually cause a slowdown of your application so do not use them for production runs The Intel FORTRAN compiler allows you to turn on various runtime checks with the check flag You may also enable only certain conditions to be checked e g check bounds please consult the compiler manual for available options The Oracle FORTRAN compiler does array bound checking with the option C and global program analysis with the option Xlist Compiling with xcheck init local initializes local variables to a value that is likely to cause an arithmetic exception if it is used before it is assigned by the program Memory allocated by the ALLOCATE statement will also be initialized in this manner SAVE variables module variables and variables in COMMON blocks are not initialized Floating point errors like division by zero overflows and underflows are reported with the option ftrap all The Oracle compilers also offer th
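To make the options above concrete, a few illustrative compile lines follow. They are a sketch only; the exact spelling of some options (for example of the floating-point trap option) should be verified in the respective compiler man page.

# Intel Fortran: enable all runtime checks, or only array bound checking
ifort -check all    prog.f90
ifort -check bounds prog.f90

# Oracle Fortran: array bound checking (-C), global program analysis (-Xlist),
# poisoning of uninitialized local variables, and floating-point traps
f90 -C -Xlist -xcheck=init_local -ftrap=%all prog.f90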
48. using uninitialized memory will produce errors Furthermore you can detect memory leaks e Enable the memory debugging tool before you start your program by selecting the Debug entry from the tools menu and click the Enable memory debugging button e Set a breakpoint at any line and run your program into it e Open the Memory Debugging Window select Debug gt Open MemoryScape e Select the Memory Reports Leak Detection tab and choose Source report or Backtrace report You will then be presented with a list of Memory blocks that are leaking Memory debugging of MPI programs is also possible The Heap Interposition Agent HIA interposes itself between the user program and the system library containing malloc realloc and free This has to be done at program start up and sometimes it does not work in MPI cases We recommend to use the newest MPI and TotalViev versions the Classic Launch cf chapter A 2 2 1 on page 117 and to link the program against the debugging libraries to make sure that it captured properly Example MPICC g o mpiprog mpiprog c L TVLIB ltvheap_64 Wl rpath TVLIB http www roguewave com Portals 0 products totalview family totalview docs 8 10 wwhelp wwhimpl js html wwhelp htm href User_ Guides LinkingYourApplicationWithAgent28 html The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 115 A 1 9 ReplayEngine TotalView provides the possibility of reversely debugging your code by record
49. which you can use the magic cookie #BSUB to specify the job requirements:

bsub < jobscript.sh

Attention: Please note the left arrow (<). If you do not use it, the job will be submitted, but all resource requests will be ignored, because the #BSUB lines are not interpreted by the workload management. Example scripts can be found in chapter 4.5.1 on page 44.

Job Output (stdout, stderr)
The job output (stdout) is written into a file during the runtime of a job. The job error output (stderr) is merged into this file if no extra option for a stderr file is given. If the user does not set a name for the output file(s), the LSF system will set it during submission to output_%J_%I.txt, located in the working directory of the job, where %J and %I are the batch job and the array IDs. Please do not specify the same output file for the stdout and stderr files; just omit the definition of the stderr file if you want the output merged with stdout. The output file(s) are available only after the job is finished. Nevertheless, using the command bpeek the output of a running job can be displayed as well.

Parameter     Function
-J <name>     Job name
-o <path>     Standard out (and error, if no option -e <path> is used)
-e <path>     Standard error
Table 4.5: Job output options

Mail Dispatching
Mail dispatching needs to be explicitly requested via the options shown in table 4.6 on page 36.
Parameter Fun
50. will be used to connect to cluster win Connection WIn HPC abx00000 eeceee Domain WIN HPC Computer cluster win rz rwth aachen de User name None specified You will be asked for credentials when you connect Connect Cancel Help Options gt gt I Remember my credentials s ing the program you need to enter the client name e g cluster win rz rwth aachen de into the appearing window and then enter your username and password Note Make sure to connect to the WIN HPC domain because a local login will not work You can export local drives or printers to the remote desktop session Choose Options and then the Local Resources tab on the login window of the remote desktop connection and check the local devices you want to export If the Remote Desktop Connection program is not installed it can be downloaded from the Microsoft homepage Note Administrator privileges are required for installation 4 2 2 rdesktop the Linux Client To log into a Windows system from a Linux client the rdesktop program is used By calling rdesktop cluster win rz rwth aachen de you will get a graphical login screen Frequently used rdesktop options are listed in table 4 4 on page 30 Note Make sure to connect to the WIN HPC domain because a local login will not work If called without parameters rdesktop will give information about further options The following line gives
51. HDF5 110; home 30; hpcwork 30; icc 61; icl 61; icpc 61; ifort 61; interactive 8; JARA 54; KMP_AFFINITY 25; ksh 33
latency 16; library: collector 96, efence 89; Linux 24; login 8, 27; LSF 35; memalign 74; memory 16: bandwidth 16; memusage 73; MIPS 95; module 34
MPICC 83; MPICXX 83; mpiexec 83; MPIFC 83; NAG Numerical Libraries 107; nested 80; network 16; OMP_NUM_THREADS 76; OMP_STACKSIZE 76; Opteron 13
pgCC 70; pgcc 70; pgf77 70; pgf90 70; processor 12: chip 12, core 12, logical 12, socket 12; quota 31; r_lib 109; rdesktop 29; rounding precision 66
scalasca 102, 125; screen 27; ssh 27; sunc89 65; sunCC 65; suncc 65; sunf90 65; sunf95 65; tcsh 34; thread: hardware 12, inspector 92; tmp 31; totalview 90, 113
ulimit 90; uname 24; uptime 72; vampir 99; Visual Studio 71; work 30; Workload Management 35; Xeon 13; zsh 33; zshenv 33; zshrc 33
52. AGS MPI BATCH variable is intentionally left empty To specify the Open MPI use BSUB a openmpi Intel MPI In order to get access to Intel MPI you need to specify it and to switch the MPI module BSUB a intelmpi module switch openmpi intelmpi Hybrid Parallelization Hybrid jobs are those with more than one thread per MPI process The Platform LSF built in mechanism for starting such jobs supports only one single MPI process per node which is mostly insufficient because the sweet spot often is to start an MPI process per socket A feature request for support of general hybrid jobs is open Nevertheless you can start hybrid jobs by the following procedure e Request a certain node type see table 4 8 on page 37 e Request the nodes for exclusive use with x e Set the number of MPI processes as usually with n e Define the grouping of the MPI processes over the nodes with R span ptile e Manually set the OMP_ NUM_ THREADS environment variable to the desired number of threads per process with export OMP_NUM_THREADS Note For correct function of such jobs the LSF affinity capabilities see page 41 must be disabled If the LSF s built in binding is active all threads will be pinned to the single slot reserved for the MPI process which is probably not what you want Note For hybrid jobs the MPI library must provide threading support See chapter 6 3 on page 86 for details Note The described procedure to start of
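Putting the bulleted steps above together, a hybrid job script might look like the following sketch; the node type request, process and thread counts, and limits are placeholders only.

#!/usr/bin/env zsh
#BSUB -J HYBRID_TEST           # job name
#BSUB -o hybrid.%J.txt         # job output
#BSUB -W 1:00                  # run time limit
#BSUB -M 1024                  # memory limit per process in MB
#BSUB -n 8                     # 8 MPI processes in total
#BSUB -R "span[ptile=2]"       # 2 MPI processes per node (e.g. one per socket)
#BSUB -x                       # exclusive use of the nodes
#BSUB -a openmpi               # use Open MPI
# plus a resource request for a certain node type, see table 4.8 on page 37

export OMP_NUM_THREADS=6       # threads per MPI process
$MPIEXEC -np 8 a.out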
53. AGS_BOOST_INCLUDE PSRC psr example cpp c SPSRC pex 992 CXX example o o example SPSRC pex 992 echo 1 2 3 example However these Boost libraries are built separately and must be linked explicitly atomic chrono context date time exception filesystem graph graph_ parallel iostreams locale math mpi program_ options python random regex serialization signals system test thread timer wave E g in order to link say the Boost MPI library you have to add the lboost__mpi flag to the link line and so forth Example SPSRC pex 994 MPICXX FLAGS_BOOST_INCLUDE PSRC psr pointer_test cpp c PSRC pex 994 MPICXX FLAGS_BOOST_LINKER pointer_test o lboost_mpi PSRC pex 994 MPIEXEC np 2 a out The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 111 10 Miscellaneous 10 1 Useful Commands Lin csplit Splits C programs fsplit Splits FORTRAN programs nm Prints the name list of object programs ldd Prints the dynamic dependencies of executable programs ld Runtime linker for dynamic objects readelf Displays information about ELF format object files vmstat Status of the virtual memory organization iostat I O statistics sar Collects reports or saves system activity information mpstat Reports processor related statistics lint 9 More accurate syntax examination of C programs dumpstabs Analysis of an object program included in Oracle Studio pstack A
54. Center for Computing and Communication During the nighttime and on weekends they are available for GPU compute batch jobs The four remaining nodes enable on the one hand GPU batch computing all day and on the other hand interactive access to GPU hardware to prepare the GPU compute batch jobs and to test and debug GPU applications The software environment on the GPU cluster is now as similar as possible to the one on the RWTH Compute Cluster Linux part GPU related software like NVIDIA s CUDA Toolkit PGI s Accelerator Model or a CUDA debugger is additionally provided In the future the software stack including Linux version may drift apart due to experimental status of the GPGPU cluster Furthermore there is also the possibility to use a couple of high end GPUs under Windows 2 5 Innovative Computer Architectures Intel Xeon Phi Cluster Note All information in this chapter may be subject to change For latest info take a look at this wiki https wiki2 rz rwth aachen de display bedoku Intel Xeon Phi Cluster The Intel Xeon Phi Cluster comprises 9 nodes each with two Intel Xeon Phi coprocessors MIC One of these nodes is used as frontend and the other 8 nodes run in batch mode In detail each node consists of two MICs with 60 cores running at 1 05 GHz with 8 GB of memory and two Intel Xeon E5 2650 codename Sandy Bridge CPUs with 8 cores running at 2 0 GHz with 32 GB of memory 2 5 1 Access To get access to this system
55. GS NAG_ LINKER Example PSRC pex 970 FC FLAGS_MATH_INCLUDE FLAGS_MATH_LINKER PSRC psr usenag f Note All above mentioned libraries are installed as 64bit versions Note For FORTRAN FORTRAN 90 and c libraries both FLAGS MATH_ or FLAGS NAG _ environment variables can be used Note The FORTRAN 90 libraries are available for Intel and Oracle Studio compilers only Note The smp library needs an implementation of a BLAS LAPACK library If using Intel compiler the enclosed implementation of Intel MKL will be used automati cally if you use the FLAGS MATH INCLUDE and FLAGS MATH _LINKER flags The FLAGS NAG INCLUDE and FLAGS NAG _LINKER variables provide a possibility of us ing NAG smp with other compilers and BLAS LAPACK implementations Note The parallel library needs an implementation of a BLACS ScaLAPACK and those need a MPI library If using the Intel compiler the enclosed implementation of Intel MKL will be used automatically to provide BLACS ScaLAPACK if you use the FLAGS MATH INCLUDE and FLAGS MATH LINKER flags However the MKL imple mentation of BLACS ScaLAPACK is known to run with Intel MPI only so you have to switch your MPI by typing module switch openmpi intelmpi before loading the NAG parallel library The usage of any another compiler and or BLACS ScaLAPACK library with the NAG parallel library is in principle possible but not supported through the modules now Would You Like To Know More http www nag co uk num
56. HPC Cluster consists of Intel Xeon based 8 to 128 way SMP nodes The nodes are either running Linux or Windows a complete overview is given in table 2 3 on page 14 Thus the cluster provides two different platforms Linux denoted as Lin and Windows denoted as Accordingly we offer different frontends into which you can log in for interactive access Besides the frontends for general use there are frontends with special features access to specific hardware Harpertown Gainestown Barcelona graphical login X Win32 and NX Sofware servers or for performing big data transfers See table 1 1 on page 9 To improve the cluster s operating stability the frontend nodes are rebooted weekly typi cally on Monday early in the morning All the other machines are running in non interactive mode and can be used by means of batch jobs see chapter 4 5 on page 35 1 2 Development Software Overview A variety of different development tools as well as other ISV software is available However this primer focuses on describing the available software development tools Recommended tools are highlighted in bold blue An overview of the available compilers is given below All compilers support serial pro gramming as well as shared memory parallelization autoparallelization and OpenMP e Intel F95 C C 2nm Win 3see appendix B on page 121 for a quick introduction to the Linux cluster Independent Software Vendor See a list of installed prod
57. MP program built with the Intel compilers starts as many threads as there are processors available The worker threads stack size may be set using the environment variable KMP_STACKSIZE e g KMP_STACKSIZE megabytesM Dynamic adjustment of the number of threads and support for nested parallelism is turned off by default when running an executable built with the Intel compilers Please use the environ ment variables OMP_ DYNAMIC and OMP_ NESTED respectively to enable those features 6 1 3 1 Thread binding Intel compilers provide an easy way for thread binding Just set the environment variable KMP_ AFFINITY to compact or scatter e g export KMP_AFFINITY scatter Setting it to compact binds the threads as closely as possible e g two threads on different cores of one processor chip Setting it to scatter binds the threads as far away as possible e g two threads each on one core on different processor sockets Explicitly assigning OpenMP threads to a list of OS proc IDs is also possible with the explicit keyword For details please refer to the compiler documentation on the Intel website The default behavior is to not bind the threads to any particular thread contexts however if the operating system supports affinity the compiler still uses the OpenMP thread affinity interface to determine machine topology To get a machine topology map specify export KMP_AFFINITY verbose none 6 1 3 2 Autoparallelization The autoparallelization f
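Summarizing the thread-binding and stack-size settings of section 6.1.3.1, an illustrative environment setup for a run of an Intel-compiled OpenMP executable is shown below; the values are examples only.

export KMP_STACKSIZE=512M      # stack size of the OpenMP worker threads
export KMP_AFFINITY=scatter    # bind threads as far apart as possible
export OMP_NUM_THREADS=8       # number of OpenMP threads
./a.out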
58. Microsoft completed its Windows product portfolio for HPC applications It includes an mpich 2 based MPI environment Microsoft MPI and a batch system with a graphical user interface for job submission and management The batch system has two important restrictions Your program can not accept any user input if so it must read it from a file nor can it use any elements of the Graphical User Interface GUI system A user s guide is available via Start All Programs Microsoft HPC Pack HPC Job Manager Help To submit a job you have to start the Cluster Job Manager For this purpose choose Start All Programs Microsoft HPC Pack HPC Job Manager from the Start menu To submit a new job click on New Job detade Actions A D Job The next step is to enter a job name Him T and to select whether you want to use just a ed pa Printy Normal core a socket or a whole node You should amp ew Single Task 30b TAN A E D New Parametric Sweep Job Job raroucee E mii also add a limitation how long your job may D a fom Selo he S thi run A job must consist of at least one task POTE Ein which is the actual execution of a user pro E Enter the tasks for this Task Task Name Comr gram or script Click on Task List to add the tasks you want to submit to the cluster In the new window enter your commands into the command line to add a new line
59. NCLUDE c prog f90 ifort prog o o prog exe L MPI_LIBDIR lmpi mpiexec np 4 prog exe A 6 2 4 Microsoft MPI Win Microsoft MPI is based on mpich2 To use Microsoft MPI you have to prepare your build environment for compilation and linking You have to provide C Program Files Microsoft HPC Pack 2008 SDK Include as an include directory during compile time These are the directories for the headers mpi h for C C programs and mpif h for FORTRAN programs Additionally there is a FORTRAN 90 module available with mpi f90 You also have to provide C Program Files Microsoft HPC Pack 2008 SDK Lib i386 AMD64 as an additional library directory To create 32bit programs you have to choose the subdirectory i386 for 64bit programs you have to choose AMD64 The required library is msmpi lib which you have to link To add the paths and files to Visual Studio open your project properties Project Prop erties and navigate to C C or Fortran General for the include directory Linker General for the library directory and Linker Input for the libraries The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 85 6 3 Hybrid Parallelization The combination of MPI and OpenMP and or autoparallelization is called hybrid paralleliza tion Each MPI process may be multi threaded In order to use hybrid parallelization the MPI library has to support it There are four stages of possible support
60. Nehalem family Nehalem and Wesmere of cores The Sandy Bridge CPUs are produced in 32 nm process The unique feature of the Sandy Bridge CPUs is the availability of the Advanced Vector Extensions AVX vectors units with 256 bit instruction set 2 3 1 The Xeon X5570 Gainestown Nehalem EP Processor The Intel Xeon X5570 processors codename Gainestown formerly also Nehalem EP are quadcore processors where each core can run two hardware threads hyperthreading Each core has a L1 and a L2 cache and all cores share one L3 cache Processor Thread Binding means explicitly enforcing processes or threads to run on certain processor cores thus preventing the OS scheduler from moving them around The Center for Computing and Communication offers institutes of the RWTH Aachen University to in tegrate their computers into the HPC Cluster where they will be maintained as part of the cluster The computers will be installed in the center s computer room where cooling and power is provided Some institutes choose to share compute resources with others thus being able to use more machines when the demand is high and giving unused compute cycles to others Further Information can be found at http www rz rwth aachen de go id pgo Mhttp software intel com en us intel isa extensions http en wikipedia org wiki Advanced_ Vector_ Extensions The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 13
61. RC/pis/addressingModes.c; a.out — and 8 twice in the 64-bit mode:

CC $FLAGS_ARCH64 $PSRC/pis/addressingModes.c; a.out

taskset 0xFFFFFFFF a.out

If the bitmask is invalid, the program will not be executed. An invalid bitmask is e.g. 0x00000010 on a 4-way machine.

Note the environment variables $FLAGS_ARCH64 and $FLAGS_ARCH32, which are set for the compilers by the module system, see chapter 5.2 on page 58.

Listing 4: Show length of pointers and long integer variables

#include <stdio.h>
int main(int argc, char* argv[])
{
    int* p;
    long int li;
    printf("%lu %lu\n", (unsigned long int)sizeof(p), (unsigned long int)sizeof(li));
    return 0;
}

4 The RWTH Environment

4.1 Login to Linux

4.1.1 Command line Login
The secure shell ssh is used to log into the Linux systems. Usually ssh is installed by default on Linux and Unix systems. Therefore you can log into the cluster from a local Unix or Linux machine using the command

ssh -l username cluster.rz.rwth-aachen.de

For data transfers use the scp command. A list of frontend nodes you can log into is given in table 1.1 on page 9. To log into the Linux cluster from a Windows machine you need to have an SSH client installed. Such a client is provided for example by the cygwin (http://www.cygwin.com) environment
62. The RWTH HPC Cluster User s Guide Version 8 2 6 Release August 2013 Build August 15 2013 Dieter an Mey Christian Terboven Paul Kapinos Dirk Schmidl Sandra Wienke Tim Cramer Michael Wirtz Rechen und Kommunikationszentrum der RWTH Aachen Center for Computing and Communication RWTH Aachen University anmey terboven kapinos schmidl wienke cramer wirtz QOrz rwth aachen de The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 What s New These topics are added or changed significantly compared to the prior minor release 8 2 5 of this primer e As some older nodes reached the EOL end of live timeline the chapters 2 4 The older Xeon based Machines 2 5 IBM eServer LS42 has been removed e As the idb debugger is deprecated by Intel chapter 7 3 3 Intel idb Lin has been removed e As the Intel Thread Checker and Profiler tools are superseded by Intel Inspector and VTune tools chapters 7 4 2 Intel Thread Checker Lin Win 8 2 2 Intel Thread Profiler has been removed e As the Acumem software won t be updated chapter 8 3 Acumem ThreadSpotter Lin has been removed e As our Open MPI now do not support XRC eXtended Reliable Connection the how to activate XRC war removed from chapter 6 2 2 on page 84 e The description of the X Win32 software added cf chapter 4 1 2 on page 27 e An additional RZ Cluster frontend dedicated to big data transfer operations cluster
63. The number of available files is rather small by contrast with the home and work filesystems Furthermore the tmp directory is available for session related temporary scratch data Use the TMP environment variable on the Linux or TMP on Windows command line The directory will be automatically created before and deleted after a terminal session or batch job Each terminal session and each computer has its own tmp directory so data sharing is not possible this way Usually the tmp file system is mapped onto a local hard disk which provides fast storage Especially the number of file operations may be many times higher than on network mounted work and home file systems However the size of the tmp file system is rather small and depends on the hardware platform Some computers have a network mounted tmp file system because they do not have sufficient local disk space We also offer an archive service to store large long term data e g simulation result files for future use A description how to use the archive service can be found at http www rz rwth aachen de li k qgy 4 3 1 Transferring Files to the Cluster To transfer files to the Linux cluster the secure copy command scp on Unix or Linux or the Secure File Transfer Client on Windows can be used Usually the latter is located in Start Programs SSH Secure Shell Secure File Transfer Client if installed To connect to a system use t
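As an illustration, copying data from a local Linux or Unix machine to the cluster with scp could look like this; the frontend name is the same one used for ssh logins, and the file and directory names are placeholders.

scp results.tar.gz username@cluster.rz.rwth-aachen.de:
scp -r myproject/  username@cluster.rz.rwth-aachen.de: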
64. 5.7 GNU Compilers (Lin) 69
5.7.1 Frequently Used Compiler Options 69
5.7.2 … 70
5.8 PGI Compilers (Lin) 70
5.9 Microsoft Visual Studio (Win) 71
5.10 Time measurements 72
5.11 Memory Usage 73
5.12 Memory Alignment 74
5.13 Hardware Performance Counters 74
5.13.1 … 74
5.13.2 … 75
6 Parallelization 76
6.1 Shared Memory Programming 76
6.1.1 Automatic Shared Memory Parallelization of Loops (Autoparallelization) 77
6.1.2 Memory access pattern and NUMA 78
6.1.3 Intel Compilers (Lin/Win) 78
6.1.4 Oracle Compilers (Lin) 79
6.1.5 GNU Compilers (Lin) 81
6.1.6 PGI Compilers (Lin) 81
6.2 Message Passing with MPI 82
6.2.1 Interactive mpiexec wrapper (Lin) 83
6.2.2 Open MPI (Lin) 83
6.2.3 Intel's MPI Implementation (Lin) 84
6.2.4 Microsoft MPI (Win) 85
6.3 Hybrid Parallelization 86
6.3.1 Open MPI (Lin) 86
6.3.2 Intel MPI (Lin) 86
6.3.3 Microsoft MPI (Win)
65. U cluster in July 2011 Because of its innovative character this cluster does not yet run in real production mode nevertheless it will be tried to keep it as stable and reliable as possible 13On Bull s advise the Hyperthreading is OFF on all BCS systems https sharepoint campus rwth aachen de units rz HPC public Lists Bull Cluster Configuration Phase 2 October 2011 AllItems aspx https sharepoint campus rwth aachen de units rz HPC public Shared Documents RWTH PPCES 2012 pdf IS http www scalemp com 16 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Acess to the GPU cluster is open to all cluster users but need additional registration If you are interested in using GPUs make a request to servicedesk rz rwth aachen de We will grant access to the GPU cluster or the Windows GPU machines and to the GPGPU Wiki which contains detailed documentation about the systems and how to program them The GPU cluster comprises 28 nodes each with two GPUs and one head node with one GPU In detail there are 57 NVIDIA Quadro 6000 GPUs i e NVIDIA s Fermi architecture Furthermore each node is a two socket Intel Xeon Westmere EP X5650 server which con tains a total of twelve cores running at 2 7 GHz and 24GB DDR3 memory All nodes are conntected by QDR InfiniBand The head node and 24 of the double GPU nodes are used on weekdays at daytime for interactive visualizations by the Virtual Reality Group of the
66. We use the more convenient option c to set the affinity with a CPU list e g 0 5 7 9 11 instead of the old style bitmasks 23 The CPUs on which a process is allowed to run are specified with a bitmask in which the lowest order bit corresponds to the first CPU and the highest order bit to the last one Running the binary a out on only the first processor taskset 0x00000001 a out Run on processors 0 and 2 SPSRC pex 320 taskset 0100000005 a out Run on all processors 24 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 SPSRC pex 321 taskset c 0 3 a out You can also retrieve the CPU affinity of an existing task taskset c p pid Or set it for a running program taskset c p list pid Note that the Linux scheduler also supports natural CPU affinity the scheduler attempts to keep processes on the same CPU as long as this seems beneficial for system performance Therefore enforcing a specific CPU affinity is useful only in certain situations If using the Intel compilers with OpenMP programs processor binding of the threads can also be done with the KMP_ AFFINITY environment variable see chapter 6 1 3 on page 78 Similar environment variables for the Oracle compiler are described in section 6 1 4 on page 79 and for the GCC compiler in section 6 1 5 on page 81 The MPI vendors also offer binding functionality in their MPI implementations please refer to the documentation Furthermore we offer th
67. a e A faster but less detailed profile mode is selected by scan p default which gathers statistical data of your application like function visits and percentage of total runtime After execution there will be a directory called epik_ lt YourApplicationName gt in your working directory containing the results of the analysis run e The second mode scan t will trigger the more detailed tracing mode which will gather very detailed information This will almost certainly increase your execution time by a substantial amount up to a factor of 500 for function call intensive and template codes In this tracing mode Scalasca automatically performs a parallel analysis after your application s execution As with profiling there will be a new directory containing the data with the name of epik_ lt YourApplicationName gt _ lt NumberOfProcesses gt _trace scan t MPIEXEC np 4 a out will start the executable a out with four processes and will trace its behavior generating a data directory epik_a_4_trace There are several environment options to control the behavior of the measurement facility within the binary Note Existing measurement directories will not be overwritten and will block program execution Visualization To start analysis of your trace data call square scalascaDataDirectory where scalascaDataDirectory is the directory created during your program execution This will bring up the cube3 GUI and display performance data about your a
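In summary, a typical measurement cycle with the commands described above might look like this sketch; the executable a.out and the process count are placeholders.

# profile (summary) measurement
scan $MPIEXEC -np 4 a.out        # results go to an epik_... directory
# detailed trace measurement with automatic parallel analysis
scan -t $MPIEXEC -np 4 a.out     # creates e.g. epik_a_4_trace
# inspect the results in the cube3 GUI
square epik_a_4_trace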
68. a large selection of views like global timeline process timeline counter display summary chart summary timeline message statistics collective communication statistics counter timeline I O event display and call tree compare figure 8 1 on page 101 Setup Before you start using Vampir the appropriate environment has to be set up All Vampir modules only become accessible after loading the UNITE module module load UNITE To do some tracing you have to load the vampirtrace module module load vampirtrace Later once you have traced data that you want to analyze use module load vampir to load the visualization package vampir Alternatively you have the choice to load vampir next generation module load vampirserver 100 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Vampir lt linuxncO01 rz RWTH Aachen DE gt File Global Displays Process Displays Filters Pre s EPI Gpplication BT_API 212 jacobimod jacobi_ 212 X X 2 jacobimod jacobi Ne Process 2 24M 2 jacobimod jacobi_ 0 1 30 038 ms Vampir Call Tree lt OlinuxncO001 rz RWTH Aachen D E jacobi otf Global Times incl Sorted poe 5 gt checkerror_ aeons ms 19 705 ms L gt MPI_Reduce 3 0 249 ms 0 586 ms gt finish_ 26 673 ms 33 814 m3 L gt HPI_Finalize N REI Boast MPI_Comm_rank 1 988 s 2 26 ps 2MP1_Comm_size 1 088 ps 1 268 ps 0 608 5 0 672 5 1 lt 1 1lreduce gt jacobimod e
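As a rough sketch of the workflow: the VampirTrace compiler wrappers shown here (vtcc, vtf90) come from the VampirTrace documentation and are not listed in this guide, so please check the module help for the wrapper names actually installed on the cluster.

module load UNITE vampirtrace
vtcc $FLAGS_FAST jacobi.c -o jacobi.exe   # instrumented build (wrapper name assumed)
$MPIEXEC -np 4 jacobi.exe                 # the run writes an OTF trace, e.g. jacobi.otf

module load UNITE vampir
vampir jacobi.otf                         # visualize the trace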
69. a nominal clock speed of 2 00 GHz 2 33 The Xeon X5675 Westmere EP Processor The Westmere formerly Nehalem C CPUs are produced in 32 nm process instead of 45 nm process used for older Nehalems This die shrink of Nehalem offers lower energy consumption and a bigger number of cores Each processor has six cores With Intel s Hyperthreading technology each core is able to execute two hardware threads The cache hierarchy is the same as for the other Nehalem processors beside the fact that the L3 cache is 12MB in size and the nominal clock speed is 3 00 GHz e Level 1 on chip 32 KB data cache 32 KB instruction cache 8 way associative e Level 2 on chip 256 KB cache for data and instructions 8 way associative e Level 3 on chip 12 MB cache for data and instructions shared between all cores 16 way associative 2 3 4 The Xeon E5 2650 Sandy Bridge Processor Xeon E5 2650 is one of early available Sandy Bridge server CPUs Each processor has eight cores With Intel s Hyperthreading technology each core is able to execute two hardware threads The nominal clock speed is 2 00 GHz The cache hierarchy is the same as for the Nehalem processors beside the fact that the L3 cache is 20MB in size e Level 1 on chip 32 KB data cache 32 KB instruction cache 8 way associative e Level 2 on chip 256 KB cache for data and instructions 8 way associative e Level 3 on chip 20 MB cache for data and instr
70. active machines are not meant for time consuming jobs Please keep in mind that there are other users on the system which are affected if the system gets overloaded B 2 The Example Collection As a first step we show you how to compile an example program from our Example Collection chapter 1 3 on page 9 The Example Collection is located at rwthfs rz SW HPC examples This path is stored in the environment variable PSRC To list the contents of the examples directory use the command s with the content of that environment variable as the argument ls PSRC The examples differ in the parallelization paradigm used and the programming language which they are written in Please refer to chapter 1 3 on page 9 or the README file for more information less PSRC README txt The examples need to be copied into your home directory because the global directory is read only This is can be done using Makefiles contained in the example directories Let s 8If you do not yet have an account for our cluster system you can create one in Tivoli Identity Manager TIM http www rz rwth aachen de tim The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 121 assume you want to run the example of a jacobi solver written in C and parallelized with OpenMP Just do the following cd PSRC C omp jacobi gmake cp The example is copied into a subdirectory of your home directory and a new shell is started in that new subdirectory B
71. alyzer test 1 er Intel MPI example same as above PSRC pex 815 OMP_NUM_THREADS 2 collect h cycles on insts on M INTEL mpiexec np 2 H hostname a out analyzer test 1 er When collect is run with a large number of MPI processes the amount of experiment data might become overwhelming Try to start your program with as few processes as possible 8 1 3 The Oracle Performance Analyzer Collected experiment data can be evaluated with the analyzer GUI PSRC pex 810 analyzer test 1 er A program call tree with performance information can be displayed with the locally developed utility er view PSRC pex 810 1 er_view test 1 er There is also a command line tool er_ print Invoking er_ print without options will print a command overview Example PSRC pex 810 2 er_print fsummary test 1 er less If no command or script arguments are given er_ print enters interactive mode to read commands from the input terminal Input from the input terminal is terminated with the quit command 8 1 4 The Performance Tools Collector Library API Sometimes it is convenient to group performance data in self defined samples and to collect performance data of a specific part of the program only For this purpose the libcollectorA PI library can easily be used In the example FORTRAN program in listing 18 on page 97 performance data of the sub routines work1 and work2 is collected The libcollectorAPI library or when using FORTRAN
72. am Analysis
An exact static analysis of the program is recommended for error detection. Today's compilers are quite smart and can detect many problems. Turn on a high verbosity level while compiling and watch for compiler warnings. Please refer to chapter 5 for various compiler options regarding warning levels. Furthermore, the tools listed in table 7.23 on page 88 can be used for static analysis.

lint      syntax check of C programs; distributed with the Oracle Studio compilers (module load studio)
cppcheck  syntax check of C++ programs; downloadable at http://sourceforge.net/projects/cppcheck
ftnchek   syntax check of FORTRAN 77 programs with some FORTRAN 90 features; directly available at our cluster
Forcheck  Fortran source code analyzer and programming aid (commercial); http://www.forcheck.nl
plusFORT  a multi-purpose suite of tools for analyzing and improving Fortran programs (commercial); http://www.polyhedron.com/pf-plusfort0html
Table 7.23: Static program analysis tools (Lin)

Sometimes program errors occur only with high or low compiler optimization. This can be a compiler error or a program error. If the program runs differently with and without compiler optimizations, the module causing the trouble can be found by systematic bisecting. With this technique you compile half of the application with the right options and
73. am libraries form several ISVs at https wiki2 rz rwth aachen de display bedoku Installed Software As for the compiler and MPI suites we also offer environment variables for the mathematical libraries to make usage and switching easier These are FLAGS MATH _ INCLUDE for the include options and FLAGS MATH _ LINKER for linking the libraries If loading more than one mathematical module the last loaded will overwrite and or modify these variables However almost each module sets extra variables that will not be overwritten 9 2 BLAS LAPACK BLACS ScaLAPACK FFT and other libraries If you want to use BLAS LAPACK BLACS ScaLAPACK or FFT you are encouraged to read the chapters about optimized libraries Intel MKL recommended see 9 3 on page 105 Oracle Sun Performance Library see 9 4 on page 106 ACML see 9 5 on page 107 The optimized libraries usually provide very good performance and do not only include the above mentioned but also some other libraries Alternatively you are free to use the native Netlib implementations just download the source and install the libraries in your home Note The self compiled versions from Netlib usually provide lower performance than the optimized versions 9 3 MKL Intel Math Kernel Library The Intel Math Kernel Library Intel MKL is a library of highly optimized extensively threaded math routines for science engineering and financial applications This library is optimized for Intel pro
74. an example of rdesktop usage rdesktop a 24 g 90 r sound local r disk tmp tmp k de d WIN HPC cluster win rz rwth aachen de 4 2 3 Apple Mac users Apple Mac users have two alternatives They can either use rdesktop as de scribed above or a native Remote Desktop Connection Client for Mac Please refer http www microsoft com mac products remote desktop default mspx for more information 3 some versions of rdesktop need the 4 option to work with our Windows frontend The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 29 Parameter Description u user Login as user d domain Use Windows domain domain for authentication g WxH Desktop geometry W width x H height in pixel g P Use P of you current screen resolution a depth Color depth depth 8 16 24 24 recommended 8 default f Full screen mode r device Enable specified device or directory redirection k value Keyboard layout e g de or us Table 4 4 rdesktop options overview 4 3 The RWTH User File Management Every user owns directories on shared file systems home work and hpework directories a scratch directory tmp and is also welcome to use the archive service Permanent long term data has to be stored in the home directory HOME home username or on Windows H drive Please do not use the home directory for significant amounts of short lived data because repeated writing and
75. and implemented using the performance simplicity and flexibility of FORTRAN 90 95 These are equivalent to well over 440 routines in the NAG FORTRAN Library 4 NAG SMP Library A numerical library containing over 220 routines that have been optimized or enhanced for use on Symmetric Multi Processor SMP computers The NAG SMP Library also includes the full functionality of the NAG FORTRAN Library It is easy to use and link due to identical interface to the NAG FORTRAN Library On his part the NAG SMP library uses routines from the BLAS LAPACK library 5 NAG Parallel Library A high performance computing library consisting of 180 routines that have been developed for distributed memory systems The interfaces have been The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 107 designed to be as close as possible to equivalent routines in the NAG FORTRAN Library The components of the NAG Parallel Library hide the message passing MPI details in underlying tiers BLACS ScaLAPACK To use the NAG components you have to load the LIBRARIES module environment first module load LIBRARIES To find out which versions of NAG libraries are available use module avail nag To set up your environment for the appropriate version use the module load command e g for the NAG FORTRAN library Mk22 module load nag fortran_mark22 This will set the environment variables FLAGS MATH INCLUDE FLAGS MATH_ LINKER and also FLAGS NAG INCLUDE FLA
76. ber and selecting Set Barrier in the pull down menu It is a good starting point to set and run into a barrier somewhere after the MPI initialization phase After initially calling MPI Comm _ rank the rank ID across the processes reveals whether the MPI startup went well This can be done by right clicking on the variable for the 118 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 rank in the source pane then selecting either Across Processes or Across Threads from the context menu A 2 2 4 Starting Stopping and Restarting your Program You can perform stop start step and examine single processes or groups of processes Choose Group default or Process in the first pull down menu of the toolbar A 2 2 5 Printing a Variable You can examine the values of variables of all MPI processes by selecting View Show Across Processes in a variable window or alternatively by right clicking on a variable and selecting Across Processes The values of the variable will be shown in the array form and can be graphically visualized One dimensional arrays or array slices can also be shown across processes The thread ID is interpreted as an additional dimension A 2 2 6 Message Queues You can look into outstanding message passing operations un expected messages pending sends and receives with the Tools Message Queue Use Tools Message Queue Graph for visualization you will see pending messages and communication patterns Find
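For example, debug builds along the lines described above could be produced like this; the lines are illustrative only, see the respective compiler documentation for details.

# Oracle Studio: OpenMP without optimization, debug information, local variables on the stack
f95 -g -xopenmp=noopt -stackvar prog.f90
# Intel: OpenMP with optimization disabled and debug information
ifort -g -openmp -O0 -automatic prog.f90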
77. ble to determine the scope of a variable the corresponding parallel region will be serialized However the compiler will report the result of the autoscoping process so that the programmer can easily check which variables could not be automatically scoped and add suitable explicit scoping clauses for just these variables to the OpenMP parallel directive Add the compiler option vpara to get warning messages and a list of variables for which autoscoping failed Add the compiler option g to get more details about the effect of autoscoping with the er_ src command SPSRC pex 610 90 g 03 xopenmp vpara c PSRC psr jacobi_autoscope f95 SPSRC pex 610 er_src jacobi_autoscope o Find more information about autoscoping in http download oracle com docs cd E19059 01 stud 9 817 6703 5_ autoscope html 6 1 4 3 Autoparallelization The option to turn on autoparallelization with the Oracle compilers is xautopar which includes depend 03 and in case of FORTRAN also stackvar In case you want to combine autoparallelization and OpenMP we strongly suggest using the xautopar xopenmp combination With the option xreduction automatic parallelization of reductions is also permitted e g accumulations dot products etc whereby the modification of the sequence of the arithmetic operation can cause different rounding error accumulations Compiling with the option xloopinfo makes the compiler emit information about the parallelization If the nu
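Combining the options just mentioned, an illustrative autoparallelizing compile line for the Oracle compiler is:

f90 -xautopar -xreduction -xloopinfo -O3 prog.f90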
78. ble to mix different compilers, all these variables (except FLAGS_RPATH) also exist with the compiler's name in the variable name, such as $GCC_CXX or $FLAGS_GCC_FAST. Example:

$PSRC/pex/520: $CXX $FLAGS_FAST $FLAGS_ARCH64 $FLAGS_OPENMP $PSRC/cpop/pi.cpp

The makefiles of the example programs also use these variables; see chapter 1.3 on page 9 for further advice on using these examples.

Flag (Compiler ->)      Oracle              Intel                                               GCC
$FLAGS_DEBUG            -g / -g0            -g                                                  -g
$FLAGS_FAST             -fast               -axCORE-AVX2,CORE-AVX-I -O3 -ip -fp-model fast=2    -O3 -ffast-math
$FLAGS_FAST_NO_FPOPT    -fast -fsimple=0    -axCORE-AVX2,CORE-AVX-I -O3 -ip -fp-model precise   -O3
$FLAGS_ARCH32 / 64      -m32 / -m64         -m32 / -m64                                         -m32 / -m64
Table 5.16: Compiler options overview

In general we strongly recommend using the same flags for both compiling and linking. Otherwise the program may not run correctly, or linking may fail. The order of the command line options while compiling and linking does matter: the rightmost compiler option in the command line takes precedence over the ones on the left, e.g. cc -O3 -O2. In this example the optimization flag -O3 is overwritten by -O2. Special care has to be taken if macros like -fast are used, because they may overwrite other options unintentionally. Therefore it is advisable to enter macro options at the beginning of the command line. If you get unresolved symbols while linking, this may be caused by a wrong o
79. cessors void r_ processorbind int p binds current thread to a specific CPU void r_mpi_ processorbind void binds all MPI processes void r_omp_processorbind void binds all OpenMP threads The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 109 void r_ompi processorbind void binds all threads of all MPI processes Print out current bindings void r_mpi_processorprint int iflag void r_omp_processorprint int iflag void r_ompi_ processorprint int iflag 9 8 3 Memory Migration int r_movepages caddr_t addr size_t len Moves data to the processor where the calling process thread is running addr is the start address and en the length of the data to be moved in byte intr _madvise caddr_t addr size_t len int advice If the advise equals 7 the specified data is moved to the thread that uses it next 9 8 4 Other Functions char r_ getenv char envnam Gets the value of an environment variable int r_gethostname char hostname int len Returns the hostname int r_getcpuid void Returns processor ID void r_system char cmd Executes a shell command Details are described in the manual page man r_ lib If you are interested in the r_lib sources please contact us 9 9 HDF5 Lin HDF5 is a data model library and file format for storing and managing data It supports an unlimited variety of datatypes and is designed for flexible and efficient I O and for high volume and complex data More info
80. cessors but it works on AMD Opteron machines as well Intel MKL contains an implementation of BLAS BLACS LAPACK and ScaLAPACK Fast Fourier Transforms FFT complete with FFTW interfaces Sparse Solvers Direct PARDISO Iterative FGMRES and Conjugate Gradient Solvers Vector Math Library and Vector Random Number Generators The Intel MKL contains a couple of OpenMP parallelized routines and up to version 10 0 3 020 runs in parallel by default if it is called from a non threaded program Be aware of this behavior and disable parallelism of the MKL if needed The number of threads the MKL uses is set by the environment variable OMP NUM_ THREADS or MKL_NUM_ THREADS There are two possibilties for calling the MKL routines from C C 1 Using BLAS You can use the Fortran style routines directly Please follow the Fortran style calling conventions call by reference column major order of data Example S PSRC pex 950 CC FLAGS_MATH_INCLUDE c PSRC psr useblas c PSRC pex 950 FC FLAGS_MATH_LINKER PSRC psr useblas o 2 Using CBLAS Using the BLAS routines with the C style interface is the preferred way because you don t need to know the exact differences between C and Fortran and the compiler is able to report errors before runtime Example PSRC pex 950 1 CC FLAGS_MATH_INCLUDE c PSRC psr usecblas c PSRC pex 950 1 CC FLAGS_MATH_LINKER PSRC psr usecblas o 87http www fitw org The RWTH HPC Cluster Us
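The example files useblas.c and usecblas.c are part of the example collection; purely as a sketch, and assuming the MKL header mkl.h (which provides the CBLAS prototypes), a minimal CBLAS call looks like this:

#include <stdio.h>
#include <mkl.h>     /* assumption: MKL's main header, declares cblas_ddot() */

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};
    double d = cblas_ddot(3, x, 1, y, 1);   /* dot product via the C-style interface */
    printf("dot = %f\n", d);                /* prints dot = 32.000000 */
    return 0;
}

Such a file would be compiled and linked analogously to the example lines above, i.e. with $CC $FLAGS_MATH_INCLUDE and $FLAGS_MATH_LINKER.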
81. copy2 rz RWTH Aachen DE has been added cf chapter 1 1 on page 8 and table 1 1 on page 9 e New book recommendations cf chapter 5 3 on page 59 e The chapter 4 6 on page 54 JARA HPC Partition has been updated e We installed a 9 node cluster equipped with 2 Intel Xeon Phi MIC Architecture coprocessors Information about this cluster can be found in section 2 5 on page 17 e The paragraph Compute Unints in chapter 4 5 1 on page 37 has been updated e Short description of Sandy Bridge CPUs added cf chapter 2 3 4 on page 15 The last changes are marked with a change bar on the border of the page http www open mpi org faq category openfabrics ib xrc The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 3 Table of Contents 1 Introduction 8 Ll The HPCACIOSOT o ne a da AE BR Oe Ok oO ee Eke 8 1 2 Development Software Overview 0 0 00002 eee eee 8 Lo Examples oca 04484 odres bee hee ALY ad eh es 9 LA Further Iniormaton o cs ba ed Rae bE a RES Ee ee eS 11 2 Hardware 12 2 1 Terms and Definitions 45 6 6 44 8804 GAR Eve eee Re RES 12 2 1 1 Non Uniform Memory Architecture 2 00 4 12 22 CLonteuration or HPC tlister es a avp eae ee Re EB ee A 13 2 3 The Intel Xeon based Machines o su oaa e iy a 13 2 3 1 The Xeon X5570 Gainestown Nehalem EP Processor 13 2 3 2 The Xeon X7550 Beckton Nehalem EX Processor 15 2 3 3 The Xeon X5675
82. ction 6 3 3 Microsoft MPI Win Microsoft MPI currently supports up to MPI serialized The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 87 7 Debugging If your program is having strange problems there s no need for immediate despair try leaning back and thinking hard first 7 1 First Which were the latest changes that you made A source code revision system e g SVN CVS or RCS might help Reduce the optimization level of your compilation Choose a smaller data set Try to build a specific test case for your problem Look for compiler messages and warnings Use tools for a static program analysis see chapter 7 1 on page 88 Try a dynamic analysis with appropriate compiler options see chapter 7 2 on page 89 Reduce the number of CPUs in a parallel program try a serial program run if possible Use a debugger like TotalView see chapter 7 3 1 on page 90 Use the smallest case which shows the error In case of an OpenMP program use a thread checking tool like the Oracle Thread An alyzer see chapter 7 4 1 on page 91 or the Intel Inspector see chapter 7 4 2 on page 92 If it is an OpenMP program try to compile without optimization e g with g 00 xopenmp noopt for the Oracle compilers In case of an MPI program use a parallel debugger like TotalView Try another MPI implementation version and or release Try a different compiler Maybe you have run into a compiler bug Static Progr
83. ction B Send mail when when job is dispatched starts running N Send mail when job is done u lt mailaddress gt Recepient of mails Table 4 6 Mail dispatching options If no mail address is given the Email is redirected to the mail account defined for the user in the user administration system TIM The Email size is restricted to a size of 1024kB Job Limits Resources If your job needs more resources or higher job limits than the preconfigured defaults you need to specify these Please note that your application will be killed if it consumes more resources than specified To get an idea how much memory your application needs you can use memusage see chapter 5 11 on page 73 Note that there is less memory per slot available than the naive calculation memory size number of slots may suggest A part of memory 0 5 2 0 GB is not accessible at all due to addressing restriction The operating system also need some 36 Tivoli Identity Manager TIM http www rz rwth aachen de tim 36 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Parameter Function Default Set the runtime limit in format hour minute W lt runlimit gt After the expiration of this time the job will be killed 00 15 Note No seconds can be specified M lt memlimit gt Set the per process memory limit in MB 512 Set a per process stack size limit in MB S lt stacklimit gt Try to increase this limit if your applica
84. d consider optimizing it for parallel file systems You can of course use the known POSIX APIs fopen fwrite fseek but MPI as of version 2 0 offers high level I O APIs that allow you to describe whole data structures matrices records and I O operations across several processes An MPI implementation may choose to use this high level information to reorder and combine I O requests across processes to increase performance The biggest benefit of MPI s parallel I O APIs is their convenience for the programmer Recommended reading e Using MPI 2 Gropp Lusk and Thakus MIT Press Explains in understandable terms the APIs how they should be used and why e MPI A Message Passing Interface Standard Version 2 0 and later Message Passing Interface Forum The reference document Also contains rationales and advice for the user 32 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 4 3 2 4 Tweaks The lfs utility controls the operation of Lustre You will be interested in lfs setstripe since this command can be used to change the stripe size and stripe count A directory s parameters are used as defaults whenever you create a new file in it When used on a file name an empty file is created with the given parameters You can safely change these parameters your data will remain intact Please do use sensible values though Stripe sizes should be multiples of 1 MiB due to characteristics of the underlying
85. deadlocks by selecting Options -> Cycle Detection in an opened Message Queue Graph window.

A.2.3 Debugging OpenMP Programs

A.2.3.1 Some General Hints for Debugging OpenMP Programs

Before debugging an OpenMP program, the corresponding serial program should run correctly. The typical OpenMP parallelization errors are data races, which are hard to detect in a debugging session because the timing behavior of the program is heavily influenced by the debugging itself. You may want to use a thread checking tool first (see chapter 7.4 on page 91).

Many compilers turn on optimization when using OpenMP by default. This default should be overridden; use e.g. the -xopenmp=noopt suboption for the Oracle compilers or the -openmp -O0 flags for the Intel compiler.

For the interpretation of the OpenMP directives the original source program is transformed: the parallel regions are outlined into separate subroutines, shared variables are passed as call parameters, and private variables are defined locally. A parallel region cannot be entered stepwise, but only by running into a breakpoint.

If you are using FORTRAN, check that the serial program runs correctly when compiled with
- the -automatic option (Intel ifort compiler), or
- the -stackvar option (Oracle Studio f95 compiler), or
- the -frecursive option (GCC gfortran compiler), or
- the -Mrecursive option (PGI pgf90 compiler).

A.2.3.2 Compiling

Some options, e.g. the ones for OpenMP support, cause certain compilers to turn on optimization
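To make the data race mentioned above concrete, here is a minimal sketch (my own example, not one of the guide's listings) of the kind of bug a thread checking tool reports: the shared variable sum is updated by all threads without synchronization.

    /* data_race.c - compile with OpenMP enabled, e.g. -fopenmp or -xopenmp */
    #include <stdio.h>

    int main(void) {
        int i, sum = 0;
    #pragma omp parallel for        /* race: every thread writes sum */
        for (i = 0; i < 1000; i++) {
            sum += i;               /* fix: add reduction(+:sum)     */
        }
        printf("sum = %d (expected 499500)\n", sum);
        return 0;
    }

Because the wrong result only appears for some timings, such an error tends to vanish as soon as you single-step through the region in a debugger, which is exactly why a thread checking tool should be tried first.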
86. default, please append the following lines at THE END of the .zshrc file in your home directory:

    if [[ -o login ]]; then
        bash
        exit
    fi

4.4.1 Z Shell (zsh) Configuration Files

This section describes how to configure the zsh to your needs. The user configuration files for the zsh are .zshenv and .zshrc, which are sourced in this order during login. The file .zshenv is sourced on every execution of a zsh. If you want to initialize something, e.g. in scripts that use the zsh to execute, put it in .zshenv. Please be aware that this file is sourced during login, too.

Note: Never use a command which calls a zsh in the .zshenv, as this will cause an endless recursion and you will not be able to log in anymore.

Note: Do not write to standard output in .zshenv, or you will run into problems using scp.

In login mode the file .zshrc is also sourced; therefore .zshrc is suited for interactive zsh configuration like setting aliases or setting the look of the prompt. If you want more information, like the actual path, in your prompt, export a format string in the environment variable PS1, for example:

    export PS1='%n@%m %~$ '

This will look like this: user@cluster ~/directory$

You can find an example .zshrc in $PSRC/psr/zshrc. You can find further information (in German) about zsh configuration here: http://www.rz.rwth-aachen.de/go/id/owu

4.4.2 The Module Package

The Modu
87. e option -dalign.

Listing 14: f90 -dalign $PSRC/pis/badDalignFortran.f90; a.out

    Program verybad
      call sub1
      call sub2
    end Program

    subroutine sub1
      integer a, b, c, d
      common /very_bad/ a, b, c, d
      d = 1
    end subroutine sub1

    subroutine sub2
      integer a, d
      real*8 x
      common /very_bad/ a, x, d
      print *, d
    end subroutine sub2

Note: The option -dalign is actually required for FORTRAN MPI programs and for programs linked to other libraries like the Oracle (Sun) Performance Library and the NAG libraries.

Inlining of routines from the same source file: -xinline=routine1,routine2. However, please remember that in this case automatic inlining is disabled. It can be restored through the %auto option; we therefore recommend the following: -xinline=%auto,routine_list. With optimization level -xO4 and above, inlining of functions/subroutines within the same source file is automatically attempted. If you want the compiler to perform inlining across various source files at linking time, the option -xipo can be used. This is a compile and link option which activates interprocedural optimization in the compiler. Since the 7.0 release -xipo=2 is also supported; this adds memory-related optimizations to the interprocedural analysis.

In C and C++ programs the use of pointers frequently limits the compiler's optimization capability. Through the compiler options -xrestrict and -xalias_level
88. - -ta=nvidia : enables PGI accelerator code generation for a GPU.
- -ta=nvidia,cc20 : enables PGI accelerator code generation for a GPU supporting Compute Capability 2.0 or higher.
- -Mcuda : enables CUDA FORTRAN for a GPU supporting Compute Capability 1.3 or higher.
- -Mcuda=cc20 : enables CUDA FORTRAN for a GPU supporting Compute Capability 2.0 or higher.

If you need more information on our GPU cluster, please refer to chapter 2.4 on page 16.

In order to read or write big-endian binary data in FORTRAN programs you can use the compiler option -Mbyteswapio. You can use the option -Ktrap when compiling the main function/program in order to enable error trapping. For information about shared memory parallelization with the PGI compilers refer to chapter 6.1.6 on page 81.

The PGI compiler offers several options to help you find problems with your code, e.g.:
- -g : puts debugging information into the object code. This option is necessary if you want to debug the executable with a debugger at the source code level (cf. chapter 7 on page 88).
- -O0 : disables any optimization. This option speeds up the compilations during the development/debugging stages.
- -w : disables warning messages.

5.9 Microsoft Visual Studio (Win)

Visual Studio offers a set of development tools, including an IDE (Integrated Development Environment) and support for the programming languages C, C++, Visual Basic and Java. The current release version of Visual Studio is Visual Studio
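Returning to the GPU-related PGI options listed above, a hedged build sketch could look as follows (the source file names are placeholders, and -Minfo=accel is assumed to be available to report what the compiler did with the accelerator regions; check the PGI documentation for your release):

    # C with PGI accelerator directives, targeting Compute Capability 2.0
    pgcc  -fast -ta=nvidia,cc20 -Minfo=accel saxpy.c -o saxpy

    # CUDA FORTRAN source file
    pgf90 -fast -Mcuda=cc20 saxpy.cuf -o saxpy_cuf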
89. e R_Lib library. It contains portable functions to bind processes and threads; see chapter 9.8 on page 109 for detailed information.

3.2 Windows

The nodes of the Windows part of the HPC Cluster run Windows Server 2008 HPC Edition. All interactive services are disabled on the compute nodes in order not to interfere with compute jobs. We decided to put some parts of the Windows-related cluster documentation online, since this text book is not well suited for descriptions with many images; for these we refer to http://www.rz.rwth-aachen.de/hpc/win. However, the most important facts and tasks are described in this document as well.

3.3 Addressing Modes

All operating systems on our machines (Linux and Windows) support 64-bit addressing. Programs can be compiled and linked either in 32-bit mode or in 64-bit mode. This affects memory addressing (the usage of 32- or 64-bit pointers), but has no influence on the capacity or precision of floating point numbers (4 or 8 byte real numbers). Programs requiring more than 4 GB of memory have to use the 64-bit addressing mode. You have to specify the addressing mode at compile and link time; the default mode is 32-bit on Windows and 64-bit on Linux.

Note: long int data and pointers in C/C++ programs are stored with 8 bytes when using the 64-bit addressing mode, thus being able to hold larger numbers. The example program shown below in listing 4 on page 26 prints out 4 twice in the 32-bit mode: $CC $FLAGS_ARCH32 $PS
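Listing 4 itself is not reproduced at this point, so as a stand-in here is a minimal sketch (my own, not the guide's listing) of what it demonstrates: built with $FLAGS_ARCH32 it prints 4 twice, built in 64-bit mode on Linux it prints 8 twice.

    /* addrmode.c - pointer and long sizes depend on the addressing mode */
    #include <stdio.h>

    int main(void) {
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }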
90. e count indicates that software pipelining has been applied, which in general results in better performance. A person knowledgeable of the chip architecture will be able to judge by the additional information whether further optimizations are possible. With a combination of er_src and grep, successful subroutine inlining can also be easily verified ($PSRC/pex/541):

    er_src *.o | grep inline

5.6.3 Interval Arithmetic (Lin)

The Oracle FORTRAN and C++ compilers support interval arithmetic. In FORTRAN this is implemented by means of an intrinsic INTERVAL data type, whereas C++ uses a special class library. The use of interval arithmetic requires the use of appropriate numerical algorithms. For more information refer to the web pages at http://download.oracle.com/docs/cd/E19422-01/819-3695

5.7 GNU Compilers (Lin)

On Linux a version of the GNU compilers is always available, because it is shipped with the operating system, although this system default version may be heavily outdated. Please use the module command to switch to a non-default GNU compiler version. The GNU FORTRAN/C/C++ compilers can be accessed via the environment variables $CC, $CXX, $FC (if the gcc module is the last loaded module) or directly by the commands gcc, g++, g77, gfortran. The corresponding manual pages are available for further information. The FORTRAN 77 compiler understands some FORTRAN 90 enhanc
91. e does not change during runtime of the program. With Profile Guided Optimization the compiler can additionally gather information during program runs (dynamic information). You can instrument your code for Profile Guided Optimization with the -prof-gen flag. When the instrumented executable is run, a profile data file with the .dyn suffix is produced. If you then compile the source code with the -prof-use flag, all the profile data files are used to build an optimized executable.

5.5.3 Debugging

The Intel compiler offers several options to help you find problems with your code, e.g.:
- -g : puts debugging information into the object code. This option is necessary if you want to debug the executable with a debugger at the source code level (cf. chapter 7 on page 88). Equivalent options are -debug, -debug full and -debug all.
- -warn (FORTRAN only) : turns on all warning messages of the compiler.
- -O0 : disables any optimization. This option accelerates the compilations during the development/debugging stages.
- -gen-interfaces (FORTRAN only) : creates an interface block (a binary .mod file and the corresponding source file) for each subroutine and function.
- -check (FORTRAN only) : turns on runtime checks (cf. chapter 7.2 on page 89).
- -traceback : tells the compiler to generate extra information in the object file to provide source file traceback information when a severe error occurs at run time.
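A hedged sketch of the resulting profile-guided build cycle with the Intel compiler (file and input names are placeholders; on Windows the corresponding /Qprof-gen and /Qprof-use forms are used):

    icc -prof-gen prog.c -o prog        # 1. instrumented build
    ./prog < typical_input              # 2. training run, writes *.dyn files
    icc -prof-use -O3 prog.c -o prog    # 3. optimized build using the profiles

The training input should be representative of production runs, otherwise the compiler optimizes for the wrong code paths.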
92. - em64t or ia32, for 64-bit or 32-bit programs
- vc8 (Visual Studio 2005) or vc9 (Visual Studio 2008)

9.8 R_Lib (Lin)

The r_lib is a library that provides useful functions for time measurement, processor binding and memory migration, among other things. It can be used under Linux; an r_lib version for Windows is under development.

Example ($PSRC/pex/960):

    $CC -L/usr/local_rwth/lib64 -L/usr/local_rwth/lib -lr_lib -I/usr/local_rwth/include $PSRC/psr/rlib.c

The following sections describe the available functions for C/C++ and FORTRAN.

9.8.1 Timing

- double r_ctime(void) : returns the user and system CPU time of the running process and its children, in seconds.
- double r_rtime(void) : returns the elapsed wall clock time in seconds.
- char *r_time(void) : returns the current time in the format hh:mm:ss.
- char *r_date(void) : returns the current date in the format yy/mm/dd.

Example in C:

    #include "r_lib.h"
    /* real and CPU time in seconds, as double */
    double realtime, cputime;
    realtime = r_rtime();
    cputime  = r_ctime();

and in FORTRAN:

    ! real and CPU time in seconds
    REAL*8 realtime, cputime, r_rtime, r_ctime
    realtime = r_rtime()
    cputime  = r_ctime()

User CPU time measurements have a lower precision and are more time consuming. In case of parallel programs, real time measurements should be preferred anyway.

9.8.2 Processor Binding

The following calls automatically bind processes or threads to empty processors
93. e memory the right way see chapter 6 1 2 on page 78 and by launching the application Binding see chapter 3 1 1 on page 24 2 2 Configuration of HPC Cluster Table 2 3 on page 14 lists all the nodes of the HPC Cluster The node names reflect the operating system running The list contains only machines which are dedicated to general usage In the course of the proceeding implementation of our integrative hosting concept there are a number of hosted machines that sometimes might be used for batch production jobs These machines can not be found in the list The Center for Computing and Communication s part of the HPC Cluster has an accumu lated peak performance of about 325 TFlops The in 2011 new installed part of the cluster reached rank 32 in the June 2011 Top500 list http www top500 org list 2011 06 100 The hosted systems have an additional peak performance of about 40 TFlops 2 3 The Intel Xeon based Machines The Intel Xeon Nehalem and Westmere based Machines provide the main compute capacity in the cluster Nehalem and Westmere are generic names so different but related proces sors types are available These processors support a wide variety of x86 instruction extensions up to SSE4 2 nominal clock speed vary from 1 86 GHz to 3 6 GHz most types can run more than one thread per core hyperthreading Sandy Bridge is the codename for a microarchitecture developed by Intel to replace the
94. e option -xcheck=stkovf to detect stack overflows at runtime. In case of a stack overflow a core file will be written, which can then be analyzed by a debugger; the stack trace will contain a function name indicating the problem.

The GNU C/C++ compiler offers the option -fmudflap to trace memory accesses during runtime. If an illegal access is detected, the program will halt. With -fbounds-check the array bound checking can be activated.

To detect common errors with dynamic memory allocation you can use the library libefence (Electric Fence). It helps to detect two common programming bugs: software that overruns the boundaries of a malloc memory allocation, and software that touches a memory allocation that has been released by free. If an error is detected, the program stops with a segmentation fault and the error can easily be found with a debugger. To use the library, link with -lefence. You might have to reduce the memory consumption of your application to get a proper run. Furthermore, note that for MPI programs the Intel MPI is the only MPI that works with the efence library in our environment; using other MPIs will cause error messages. For more information see the manual page (man libefence).

Memory leaks can be detected using TotalView (see chapter A.1.8 on page 115), the sampling collector (collect -H, see chapter 8.1 on page 93), or the open source instrumentation framework Valgrind (please refer to http://valgrind.org). If a pr
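To illustrate the first bug class that Electric Fence catches, here is a small hypothetical example (not from the guide): in a normal build the one-element overrun is silent, but linked with -lefence the program receives a segmentation fault exactly at the faulty store, which a debugger then points to directly.

    /* overrun.c - build e.g. with: $CC -g overrun.c -lefence */
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(10 * sizeof(int));
        a[10] = 42;       /* writes one element past the allocation */
        free(a);
        return 0;
    }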
95. e output will be written to output_ J_ I tut HR the 4J is the job ID I is the array ID BSUB o ARRAYJOB 4I 41 Request the time you need for execution in minutes The format for the parameter is hour minute that means for 80 minutes you could also use this 1 20 BSUB W 1 42 Request vitual memory you need for your job in MB BSUB M 1024 which one array job is this echo LSB_JOBINDEX LSB_JOBINDEX HHH for 1 and 2 run a out with yet another parameters for all other values use it directly as input parameter case LSB_JOBINDEX in 1 a out first 2 a out second a out num LSB_JOBINDEX esac The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Listing 7 PSRC pis LSF omp_job sh k l usr bin env zsh HH Job name BSUB J OMP12J0B File path where STDOUT will be written the 4J is the job id BSUB o OMP12JOB 4J OFF Different file for STDERR if not to be merged with STDOUT BSUB e OMP12J0B efJ Request the time you need for execution in minutes The format for the parameter is hour minute that means for 80 minutes you could also use this 1 20 BSUB W 1 42 Request vitual memory you need for your job in MB BSUB M 1024 OFF Specify your mail address BSUB u user rwth aachen de Send a mail when job is done BSUB N Request the number of compute slots you want to use BSUB n 12
96. e refer to chapter 4.1 on page 27 to get such an ssh client. Depending on the client you use, there are different ways to enter the necessary information. The name of the host you need to connect to is cluster.rz.rwth-aachen.de (other frontend nodes can be found in table 1.1 on page 9) and your user name is usually your TIM ID.

On Unix or Linux systems ssh is usually installed or at least included in the distribution. If this is the case, you can open a terminal and enter the command

    ssh -Y <username>@cluster.rz.rwth-aachen.de

After entering the password you are logged in to the HPC Cluster and see a shell prompt like this:

    ab123456@cluster:~[1]$

The first word is your user name, in this case ab123456, separated by an @ from the machine name cluster. After the colon the current directory is prompted, in this case ~, which is an alias for /home/ab123456. This is your home directory; for more information on the available directories please refer to chapter 4.3 on page 30. Please note that your user name contained in the path is of course different from ab123456. The number in the brackets counts the entered commands. The prompt ends with the $ character. If you want to change your prompt, please take a look at chapter 4.4 on page 33.

You are now logged in to a Linux frontend machine. The cluster consists of interactively accessible machines and machines that are only accessible by batch jobs; refer to chapter 4.5 on page 35. The inter
97. eature of the Intel compilers can be turned on for an input file with the compiler option -parallel (-Qparallel on Windows), which must also be supplied as a linker option when an auto-parallelized executable is to be built. The number of threads to be used at runtime may be specified in the environment variable OMP_NUM_THREADS, just like for OpenMP. We recommend turning on serial optimization via -O2 or -O3 when using -parallel, to enable automatic inlining of function/subroutine calls within loops, which may help in automatic parallelization. You may use the option -par-report to make the compiler emit messages about loops which have been parallelized. If you want to exploit the autoparallelization feature of the Intel compilers, it is also very helpful to know which portions of your code the compiler tried to parallelize but failed. Via -par-report3 you can get a very detailed report about the activities of the automatic parallelizer during compilation. Please refer to the Intel compiler manuals about how to interpret the messages in such a report and how to subsequently re-structure your code to take advantage of automatic parallelization.

(Footnote: Intel has open-sourced its production OpenMP runtime under a BSD license to support tool developers and others: http://openmprtl.org)

6.1.4 Oracle Compilers (Lin)

The Oracle FORTRAN/C/C++ compilers support OpenMP via the compiler/linker option
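A hedged sketch of this autoparallelization workflow on Linux (the source file name is a placeholder):

    # compile and link with the auto-parallelizer and a parallelization report
    icc -O3 -parallel -par-report loops.c -o loops

    # choose the number of threads at run time, as with OpenMP
    OMP_NUM_THREADS=4 ./loops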
98. ect MPI trace data but it can still be used for all other types of data Each process write its own trace thouch resulting in multiple test er 83Tn our environment the hardware counters are again available only from the version studio 12 3 on In older versions of Oracle Studio collect use a kernel path which is not available now 94 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 h cycles on insts on Cycle count instruction count The quotient is the CPI rate clocks per instruction The MHz rate of the CPU multiplied with the instruction count divided by the cycle count gives the MIPS rate Alternatively the MIPS rate can be obtained as the quotient of instruction count and runtime in seconds h fpadd on fpmul on Floating point additions and multiplications The sum divided by the runtime in seconds gives the FLOPS rate h cycles on dtlbm on dtlbh on Cycle count data translation look aside buffer DTLB misses and hits A high rate of DTLB misses indicates an unpleasant memory access pattern of the program Large pages might help h der on dcm on 2dr on 12dm on L1 and L2 D cache refedences and misses A high rate of cache misses indicates an unpleasant memory access pattern of the program Table 8 26 Hardware counter available for profiling with collect on AMD Barcelona CPUs h cycles on insts on Same meaning as in table 8 26 on page 95 h fp_comp_ops_ exe on The co
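The counter lists above are passed to the collect command via its -h option. A hedged sketch of a complete invocation (the experiment name is arbitrary, and the available counter names depend on the CPU type and Studio version, as noted above):

    # profile a.out, recording cycle and instruction counts
    collect -o test.1.er -h cycles,on,insts,on ./a.out

    # inspect the recorded experiment in the GUI
    analyzer test.1.er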
99. ed. The above example could result in the order 1, 4, 2, 3. If the execution order is crucial, e.g. in case of different computation stages, you have to define the order explicitly:
- Submit the follow-up job(s) from within a batch job, after the computation. Submitting after the computation ensures the genuine sequence, but will prolong pending times.
- Make the follow-up(s) start dependent on the predecessor jobs' ending, using the job dependency feature with the bsub option -w <condition>. Besides being very flexible, job dependencies are complex and every single dependency has to be defined explicitly.

Example: the job "second" will not start until the job "first" is done:

    bsub -J first echo "I am FIRST"
    bsub -J second -w "done(first)" echo "I have to wait"

When submitting a lot of chain jobs, scripted production is a good idea in order to minimize typos. An example can be found on the pages of TU Dresden.

Parameter : Function
-P <projectname> : Assign the job to the specified project
-G <usergroup> : Associate the job with the specified group for fairshare scheduling

Table 4.11: Project options

Project Options

Project options, e.g. helpful for resource management, are given in table 4.11 on page 41.

Integrative Hosting

Users taking part in the integrative hosting who are members of a project group can submi
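Returning to chain jobs: when many dependent steps are submitted, a small loop (a hedged sketch; the script name is a placeholder) keeps the job names and dependency conditions consistent:

    # submit four stages; each stage starts only after the previous one is done
    bsub -J step1 ./stage.sh 1
    for i in 2 3 4; do
        bsub -J step$i -w "done(step$((i-1)))" ./stage.sh $i
    done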
100. ed as replacement for each other. Unfortunately, different vendors use the same terms with various meanings. The future development in computer architectures may lead to a rise of non-cache-coherent NUMA systems; as long as we only have ccNUMA computers, we use the terms ccNUMA and NUMA interchangeably.

(Footnote 6: A chip is one piece of silicon, often called a die; Intel calls this a processor.)
(Footnote 8: The term n-way is used in different ways; for us, n is the number of logical processors which the operating system sees.)

Each processor can thus directly access those memory banks that are attached to it (local memory), while accesses to memory banks attached to the other processors (remote memory) will be routed over the system interconnect. Therefore accesses to local memory are faster than those to remote memory, and the difference in speed may be significant. When a process allocates some memory and writes data into it, the default policy is to put the data into memory which is local to the processor first accessing it (first touch), as long as there is still such local memory available. To obtain the full computing performance, the application's data placement and memory access pattern are crucial; unfavorable access patterns may degrade the performance of an application considerably. On NUMA computers, arrangements regarding data placement must be done both by programming (accessing the
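A minimal sketch (my own, not one of the guide's listings) of the first-touch idiom that follows from this: the data is initialized with the same parallel loop structure and static schedule that the compute loops use later, so each thread's part of the array ends up in memory local to the processor it runs on.

    /* firsttouch.c - compile with OpenMP enabled */
    #include <stdlib.h>
    #define N 50000000

    int main(void) {
        double *x = malloc(N * sizeof(double));
    #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            x[i] = 0.0;                /* first touch places the pages    */
    #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            x[i] = 2.0 * x[i] + 1.0;   /* later accesses are mostly local */
        free(x);
        return 0;
    }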
101. ed by module avail. To find out in which category a module modulename is located, try module apropos modulename. If your environment seems to be insane, e.g. the environment variable LD_LIBRARY_PATH is not set properly, try out module reload. You can add a directory with your own module files with module use <path>.

By default only the DEVELOP software category module is loaded, to keep the available modules clearly arranged. For example, if you want to use a chemistry software, you need to load the CHEMISTRY category module. After doing that, the list of available modules is longer and you can load the software modules from that category. On Linux, the Intel compilers and the Open MPI implementation are loaded by default.

Note: If you loaded module files in order to compile a program and subsequently logged out and in again, you probably have to load the same module files before running that program; otherwise some necessary libraries may not be found at program startup time. The same situation arises when you build your program and then submit it as a batch job: you may need to put the appropriate module commands in the batch script.

Note: We strongly discourage users from loading any modules by default in their environment, e.g. by adding module commands to the .zshenv file. Such a modification of the standard environment may lead to unpredictable and hard-to-discover behaviour. Instead you can define a module loading script con
102. ed towards the project s core hour usage for the current month Note that according to this model usage in the current month of either transferred or borrowed time has a negative impact on the next month s allowance For example the current month is italicised January February March April Monthly allowance 50000 50000 50000 50000 Consumed core hrs 0 120000 up to 30000 In this scenario 50000 unused core hours from January were transferred to and consumed in February Also 20000 core hours were borrowed from March In March the project could only use up to 30000 core hours 3 x 50000 120000 The capacity to use the monthly allowance in its entirety will be restored again in April Therefore it is recommended that you try to spread your usage evenly throughout the compute period 4 6 2 3 Check utilization You can query the status of your core hours quota using the q_cpuquota command q_cpuquota jara4321 Group jara4321 Start of Accounting Period 01 01 2013 End of Accounting Period 30 06 2013 State of project active Quota monthly core h 1000 Remaining core h of prev month 200 Consumed core h act month 0 Consumable core h 120 Consumable core h 2200 In the example above 1000 hours per month are available In the previous month only 800 hours have been used leaving a total of 1200 core hours 120 for the current month Borrowing all next month s quota up to 2200 cores hours ca
103. efore the executable MPIEXEC FLAGS_MPI_BATCH memusage hostname The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 73 Used cluster is bulldc If you want another cluster read help linuxbdcO5 rz RWTH Aachen DE linuxbdcO5 rz RWTH Aachen DE rank 0 VmPeak 13748 kB rank 1 VmPeak 13748 kB 5 12 Memory alignment The standard memory allocator malloc allocates the memory not aligned to the beginning of the addres space and thus to any system boundary e g start of a memory page In some cases e g transferring data using InfiniBand on some machines the unaligned memory is being processed slower than memory aligned to some magic number usually a power of two Aligned memory can be allocated using memalign instead of malloc however this is tedious needs change of program code and recompilation C C and is not available at all in Fortran where system memory allocation is wrapped to calls of Fortran ALLOCATE by compiler s libraries Another way is to wrap the calls to malloc to memalign using a wrapper library This library is provided to the binary by LD_PRELOAD environment variable We provide the memalign32 script which implement this leading all allocated memory being aligned by 32 Example memalign32 sleep 1 For MPI programs you have to insert the wrapper just before the executable MPIEXEC FLAGS_MPI_BATCH memalign32 a out Note Especially if memory is allocated in very small c
104. el TBB Intel Cilk Plus 2 5 3 3 MPI An MPI program with ranks only on processors may employ offload to access the performance of the coprocessors An MPI program may run in a native mode with ranks on both processors and coprocessors So MPI can be used for reduction of parallel layers For compiling a MPI program on the host the MPI module must be switched module switch openmpi intelmpi 4 1mic The module defines the following variables I_MPI_ MIC enable I_MPI_MIC_ POSTFIX mic After that two different versions must be build One with the mmic switch and another without MPICC micproc c o micproc MPICC micproc c o micproc mic mmic In order to start MPI applications over multiple MICs the interactive MPIEXEC wrapper can be used The wrapper is only allowed to start processes on MICs when you are logged in on a MIC containg host e g cluster phi rz rwth aachen de The MPlexec wrapper can be used as normal with dynamic load balancing In order to distinguish between processes on the host and processes on the MICs there are 2 different command line parameters 18 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Start 2 processes on the host MPIEXEC nph 2 micproc Start 2 processes on the coprocessors MPIEXEC npm 2 micproc mic The parameters can be used simultaneously MPIEXEC nh 2 nm 30 micproc Additionally there is the possibility to start MPI application on coprocessors and hosts without
105. ements when called with the parameters -ff90 -ffree-form. Sometimes the option -fno-second-underscore helps in linking. The FORTRAN 95 compiler gfortran is available since version 4.

5.7.1 Frequently Used Compiler Options

Compute-intensive programs should be compiled and linked with the optimization options which are contained in the environment variable $FLAGS_FAST. For the GNU compiler 4.4, $FLAGS_FAST currently evaluates to

    echo $FLAGS_FAST
    -O3 -ffast-math -mtune=native

These flags have the following meaning:
- -O3 : The -Ox options control the number and intensity of the optimization techniques the compiler tries to apply to the code. Each of these techniques has individual flags to turn it on; the -Ox flags are just summary options. This means that -O, which is equal to -O1, turns some optimizations on, -O2 a few more, and -O3 even more than -O2.
- -ffast-math : With this flag the compiler tries to improve the performance of floating point calculations while relaxing some correctness rules; -ffast-math is a summary option for several flags concerning floating point arithmetic.
- -mtune=native : Makes the compiler tune the code for the machine on which it is running. You can supply this option with a specific target processor; please consult the GNU compiler manual for a list of available CPU types. If you use -march instead of -mtune, the generated code might not run on all cluster nodes anymore, because the compiler is free to use cer
106. eneral investigation of the communication behaviour Run the program under the control of ITC by using the trace command line argument of the Intel mpiexec A message from the Trace Collector should appear indicating where the collected information is saved in form of an stf file Use the ITA GUI to analyze this trace file On Linux start the Analyzer GUI with traceanalyzer lt somefile gt stf Example PSRC pex 890 MPIEXEC trace np 2 a out traceanalyzer a out stf There also exists a command line interface of the Trace Analyzer on Linux Please refer to the manual On Windows start the Analyzer GUI by Start Programs gt Intel Software Development Tools Intel Trace Analyzer and Collector Intel Trace Analyzer and open the trace file Trace files produced on Linux may be analyzed on Windows and vice versa Compiler driven Subroutine Instrumentation allows you to trace the whole program additionally to the MPI library In this mode the user defined non MPI functions are traced as well Function tracing can easily generate huge amounts of trace data especially for function call intensive and object oriented programs For the Intel compilers use the flag tcollect on Linux or Qtcollect on Windows to enable the collecting The switch accepts an optional argument to specify the collecting library to link For example for non MPI applications you can select libV Tes tcollect VTcs The default value is VT Use the f
107. ent functions are selected by the corresponding C preprocessor definitions. The time is measured in seconds, as a double precision floating point number. Alternatively, you can use the different time measurement functions directly. Linux example in C:

    #include <sys/time.h>
    struct timeval tv;
    double second;
    gettimeofday(&tv, (struct timezone *)0);
    second = (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;

In FORTRAN you can also use the gettimeofday Linux function, but it must be wrapped. An example is given in listings 15 on page 72 and 16 on page 73. After the C wrapper and the FORTRAN code are compiled, link them and let the example binary run:

    $FC rwthtime.o use_gettimeofday.o; ./a.out

Listing 15: $CC -c $PSRC/psr/rwthtime.c

    #include <sys/time.h>
    #include <stdio.h>
    /* This timer returns the current clock time in seconds */
    double rwthtime_() {
        struct timeval tv;
        int ierr;
        ierr = gettimeofday(&tv, NULL);
        if (ierr != 0) printf("gettimeofday ERR: ierr=%d\n", ierr);
        return (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
    }

(Footnote: You can use the uptime command on Linux to check the load.)

Listing 16: $FC -c $PSRC/psr/use_gettimeofday.f90

    PROGRAM t1
      IMPLICIT NONE
      REAL*8 rwthtime
      WRITE(*,*) 'Wrapped gettimeofday: ', rwthtime()
    END PROGRAM t1

The Orac
108. environment variables FLAGS ACML_ INCLUDE and FLAGS ACML_ LINKER for compiling and linking which are the same as the FLAGS MATH _ if the ACML module was loaded last Example PSRC pex 941 CC FLAGS_MATH_INCLUDE c PSRC psr useblas c PSRC pex 941 FC FLAGS_MATH_LINKER PSRC psr useblas o However given the current dominance of Intel based processors in the cluster we do not recommend using ACML and propose to use the Intel MKL instead 9 6 NAG Numerical Libraries Lin The NAG Numerical Libraries provide a broad range of reliable and robust numerical and sta tistical routines in areas such as optimization PDEs ODEs FFTs correlation and regression and multivariate methods to name just a few The following NAG Numerical Components are available 1 NAG C Library A collection of over 1 000 algorithms for mathematical and statistical computation for C C programmers Written in C these routines can be accessed from other languages including C and Java 2 NAG FORTRAN Library A collection of over 1 600 routines for mathematical and statis tical computation This library remains at the core of NAG s product portfolio Written in FORTRAN the algorithms are usable from a wide range of languages and packages including Java MATLAB NET C and many more 3 NAG FORTRAN 90 Library A collection of over 200 generic user callable procedures giving easy access to complex and highly sophisticated algorithms each designed
109. er of slots allocated to the job Table 4 14 LSF environment variables Job Monitoring You can use the bjobs command to display information about jobs bjobs options job_ID The output prints for example the state the submission time or the job ID JOBID USER STAT QUEUE FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME 3324 tc53084 RUN serial linuxtc02 ib_bull BURN_CPU_1 Jun 17 18 14 3325 tc53084 PEND serial linuxtc02 ib_bull BURN_CPU_1 Jun 17 18 14 3326 tc53084 RUN parallel linuxtc02 12 ib_bull RN_CPU_12 Jun 17 18 14 3327 tc53084 PEND parallel linuxtc02 12 ib_bull RN_CPU_12 Jun 17 18 14 Some useful options of the bjobs command are denoted in the table 4 15 on page 52 Please note especially the p option you may get a hint to the reason why your job is not starting Option Description 1 Long format displays detailed information for each job W Wide format displays job information without truncating fields r Displays running jobs p Displays pending job and the pending reasons s Displays suspended jobs and the suspending reason Table 4 15 Parameters of bjobs command Further Information More documentation on Platform LSF is available here http www1 rz rwth aachen de manuals LSF 8 0 index html Also there is a man page for each LSF command 52 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 4 5 2 Windows Batch System Win By introducing the Microsoft HPC Pack
110. er s Guide Version 8 2 6 August 2013 105 Please refer to Chapter Language specific Usage Options in the Intel MKL User s Guide for details with mixed language programming 9 3 1 Intel MKL Lin Starting with version 11 of Intel compiler a version of MKL is included in the compiler distri bution and the environment is initialized if the compiler is loaded We strongly recommend to use the included version of Intel MKL with the Intel compilers To use Intel MKL with another compiler load this compiler at last and then load the MKL environment To initialize the Intel MKL environment use module load LIBRARIES module load intelmkl This will set the environment variables FLAGS MKL INCLUDE and FLAGS MKL_ LINKER for compiling and linking which are the same as the FLAGS MATH _ if the MKL module was loaded last These variables let you use at least the BLAS and LAPACK routines of Intel MKL To use other capabilities of Intel MKL please refer to the Intel MKL documentation http software intel com en us articles intel math kernel library documentation The BLACS and ScaLAPACK routines use Intel MPI so you have to load the Intel MPI before compiling and running a program which uses BLASCS or ScaLAPACK 9 3 2 Intel MKL Win On Windows Intel MKL comes bundled with the Intel compilers Please refer to the Intel MKL Link Line Advisor at http software intel com en us articles intel mk1 link line advisor to learn how to use and
111. eric numerical_libraries asp 9 7 TBB Intel Threading Building Blocks Lin Win Intel Threading Building Blocks is a runtime based threaded parallel programming model for C code It consists of a template based runtime library to help you to use the performance of multicore processors More information can be found at http www threadingbuildingblocks org On Linux a release of TBB is included into Intel compiler releases and thus no additional module needs to be loaded Additionally there are alternative releases which may be initialized by loading the corresponding modules module load inteltbb Use the environment variables LIBRARY PATH and CPATH for compiling and linking To link TBB set the ltbb flag With ltbb debug you may link a version of TBB which provides some debug help Linux Example SPSRC pex 961 CXX 02 DNDEBUG I CPATH o ParSum ParallelSum cpp 1tbb SPSRC pex 961 ParSum Use the debug version of TBB 108 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 SPSRC pex 962 CXX 00 g DTBB_DO_ASSERT CXXFLAGS I CPATH o ParSum_debug ParallelSum cpp 1tbb_debug PSRC pex 962 ParSum_debug On Windows the approach is the same i e you have to link with the TBB library and set the library and include path The Intel TBB installation is located in C Program Files x86 Intel TBB lt VERSION gt Select the appropriate version of the library according to your environment
112. execute the example The script includes all necessary initializations Or you can do the initialization yourself and then run the command after the pipes in this case echo Hello World However most of the scripts are offered for Linux only The example programs demonstrating e g the usage of parallelization paradigms like OpenMP or MPI are available on a shared cluster file system The environment variable PSRC points to its base directory On our Windows systems the examples are located on drive P The code of the examples is usually available in the programming languages C C and FORTRAN F The directory name contains the programming language the parallelization paradigm and the name of the code e g the directory PSRC C omp pi contains the Pi example written in C and parallelized with OpenMP Available paradigms are The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 9 Tool Ser ShMem MPI Debugging TotalView Lin X X X Allinea DDT tin X X X MS Visual Studio Wi X X X Oracle Thread Analyzer X Intel Inspector Win x GNU gdb H X PGI pgdbg H x Analysis a Tuning Oracle Performance Analyzer M X X X GNU gprof Lin X Intel Thread Profiler Lin Win X Intel VTune Amplifier Win x Intel Trace Analyzer and Collector Lin Win X Vampir Pin X Scalasca H X Table 1 2 Development Software Overview Ser Serial Programming ShMem Shared memory parallelization A
113. f mpiexec wrapper options some of which are shown in table 6 21 on page 84 followed by help of native mpiexec of loaded MPI module Passing environment variables from the master where the MPI program is started to the other hosts is handled differently by the MPI implementations We recommend that if your program depends on environment variables you let the master MPI process read them and broadcast the value to all other MPI processes The following sections show how to use the different MPI implementations without those predefined module settings 6 2 2 Open MPI Lin Open MPI http www openmpi org is developed by several groups and vendors To set up the environment for the Open MPI use module load openmpi This will set environment variables for further usage The list of variables can be obtained with module help openmpi The compiler drivers are mpicc for C mpif77 and mpif90 for FORTRAN mpicxx and mpiCC for C To start MPI programs mpiexec is used 6 Currently a version of Open MPI is the standard MPI in the cluster environment so the corresponding module is loaded by default The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 83 help h prints this help and the help information of normal mpiexec show v prints out which machines are used d prints debugging information about the wrapper mpidebug prints debugging information of the MPI lib only Open MPI needs
114. fore Studio 10, where idle threads spin) can use SUNW_MP_THR_IDLE=spin to change the behavior. Please be aware that having threads spin will unnecessarily waste CPU cycles.

Note: The environment variable SUNW_MP_GUIDED_WEIGHT can be used to set the weighting value used by libmtsk for loops with the guided schedule. The libmtsk library uses the following formula to compute the chunk sizes for guided loops:

    chunk_size = num_unassigned_iterations / (weight * num_threads)

where num_unassigned_iterations is the number of iterations in the loop that have not yet been assigned to any thread, weight is a floating point constant (default 2.0), and num_threads is the number of threads used to execute the loop. The value specified for SUNW_MP_GUIDED_WEIGHT must be a positive, non-zero floating point constant.

We recommend setting SUNW_MP_WARN=TRUE while developing, in order to enable additional warning messages of the OpenMP runtime system. Do not, however, use this during production, because it has performance and scalability impacts. We also recommend the use of the option -vpara (FORTRAN) or -xvpara (C), which might allow the compiler to catch errors regarding incorrect explicit parallelization at compile time. Furthermore, the option -xcommonchk (FORTRAN) can be used to check the consistency of threadprivate declarations.

6.1.4.1 Thread binding

The SUNW_MP_PROCBIND environment variable can be used to bind threads in an OpenMP program to specific virtual processors
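A hedged usage sketch (the exact value syntax accepted by SUNW_MP_PROCBIND should be checked in the Studio documentation; a space-separated list of virtual processor IDs is assumed here, and a.out stands for any OpenMP binary built with the Oracle compilers):

    export OMP_NUM_THREADS=4
    export SUNW_MP_WARN=TRUE            # extra runtime diagnostics while testing
    export SUNW_MP_PROCBIND="0 1 2 3"   # bind the threads to virtual CPUs 0-3
    ./a.out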
115. ftrapuv Initializes stack local variables to an unusual value to aid error detection This helps to find uninitialized local variables 5 6 Oracle Compilers Lin On the Linux based nodes the Oracle Studio 12 3 development tools are now in production mode and available after loading the appropriate module with the module command refert to section 4 4 2 on page 34 They include the FORTRAN 95 C and C compilers If necessary you can use other versions of the compilers by modification of the search path through loading the appropriate module with the module command refer to section 4 4 2 on page 34 module switch studio studio 12 2 Accordingly you can use preproduction releases of the compiler if they are installed You can obtain the list of all available versions by module avail studio We recommend that you always recompile your code with the latest production version of the used compiler due to performance reasons and bug fixes Check the compiler version that you are currently using with the compiler option v The compilers are invoked with the commands cc c89 c99 90 95 CC and since Studio 12 additional Oracle specific names are available suncc sunc89 sunc99 sunf90 sunf95 sunCC You can get an overview of the available compiler flags with the option flags We strongly recommended using the same flags for both compiling and linking Since the Sun Studio 7 Compiler Collection release a separate FORTRAN 77 comp
116. ging stages e pedantic Is picky about the language standard and issues warnings about non standard constructs pedantic errors treats such problems as errors instead of warnings 5 8 PGI Compilers Lin Use the module command to load the compilers of The Portland Group into your environment The PGI C C FORTRAN 77 FORTRAN 90 compilers can be accessed by the commands pgcc pgCC pgf77 pgf90 Please refer to the corresponding manual pages for further information Extensible documentation is available on The Portland Group s website The following options provide a good starting point for producing well performing machine code with these compilers e fastsse Turns on high optimization including vectorization e Mconcur compiler and linker option Turns on auto parallelization e Minfo Makes the compiler emit informative messages including those about successful and failed attempts to vectorize and or auto parallelize code portions e mp compiler and linker option Turns on OpenMP Of those PGI compiler versions installed on our HPC Cluster the 11 x releases include support for Nvidia s CUDA architecture via the PGI Accelerator directives and CUDA FORTRAN The following options enable this support and must be supplied during compile and link steps The option Minfo described above is helpful for CUDA code generation too http www pgroup com 70 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013
117. gn spec where endian can be one of little big or native maxalign can be 1 2 4 8 or 16 specifying the maximum byte alignment for the target plat form and spec is a filename a FORTRAN IO unit number or all for all files The default is 64 Note this works only if the program is compiled in 32bit and does not use SSE2 instructions The man page of Oracle compiler does not say this clear 66 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 xfilebyteorder native all which differs depending on the compiler options and platform The different defaults are listed in table 5 19 on page 67 32 bit addressing 64 bit addressing architecture little4 all little16 all x86 big8 all big16 all UltraSPARC Table 5 19 Endianness options The default data type mappings of the FORTRAN compiler can be adjusted with the xtypemap option The usual setting is xtypemap real 32 double 64 integer 32 The REAL type for example can be mapped to 8 _ bytes with xtypemap real 64 double 64 integer 32 The option g writes debugging information into the generated code This is also useful for runtime analysis with the Oracle Sun Performance Analyzer that can use the debugging information to attribute time spent to particular lines of the source code Use of g does not substantially impact optimizations performed by the Oracle compilers On the other hand the correspondence between the binary program and the s
118. h txt 52 https wiki2 rz rwth aachen de download attachments 458782 non mpi_job sh txt 44 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Listing 5 PSRC pis LSF serial_job sh k l usr bin env zsh Job name BSUB J SERIALJOB File path where STDOUT will be written the 4J is the job id BSUB o SERIALJOB 4JI OFF Different file for STDERR if not to be merged with STDOUT BSUB e SERIALJOB ex I Request the time you need for execution in minutes The format for the parameter is hour minute that means for 80 minutes you could also use this 1 20 BSUB W 1 42 Request vitual memory you need for your job in MB BSUB M 1024 OFF Specify your mail address BSUB u user rwth aachen de Send a mail when job is done BSUB N Export an environment var export A_ENV_VAR 10 Change to the work directory cd home user workdirectory Execute your application a out The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 45 o 3D 0 A Ww N RA w 0 v Y Y wo won NN NY NN NN NN N N N No ee e A A A A A KF AA a oa Bb N e O O ODN DT FF U Nn O 0 ON DoT A O N eO 46 Listing 6 PSRC pis LSF array_job sh k l usr bin env zsh Job name and array definition run jobs with ID 1 2 3 5 Note all jobs may run parallely BSUB J myArray 1 3 5 OFF File path where STDOUT will be written by default th
119. he menu File Quick Connect Enter the host name and user name and select Connect You will get a split window The left half represents the local computer and the right half the remote system Files can be exchanged by drag and drop As an alternative to Secure File Transfer Client the PS FTP program can be used refer to http www psftp de If you log into the Windows cluster you can export your local drives or directories and access the files as usual See chapter 4 2 1 on page 29 or 4 2 2 on page 29 for more details Furthermore you can use the hot keys ctrl c and ctrl v to copy files to and from the remote host 32Currently only the Sun Blade X6275 computers see table 2 3 on page 14 have a network mounted tmp directory on a Lustre file system See 4 3 2 on page 32 33 Although the data transfer is possible over any HPC Clusterfrontend we recommend the usage of the dedicated cluster copy rz RWTH Aachen DE node The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 31 4 3 2 Lustre Parallel File System 4 3 2 1 Basics Lustre is a file system designed for high throughput when working with few large files Note When working with many small files e g source code the Lustre file system may be many times slower than the ordinary network file systems used for HOME To the user it is presented as an ordinary file system mounted on every node of the cluster as HPCWORK Note There is no backup of the Lustre fi
120. hich may be adjusted in order to get more performance for an actual job type on an actual platform We set some Open MPI tunables by default usually using OMPI environment variables 6 2 3 Intel s MPI Implementation Lin Intel provides a commercial MPI library based on MPICH2 from Argonne National Labs It may be used as an alternative to Open MPI On Linux Intel MPI can be initialized with the command module switch openmpi intelmpi This will set up several environment variables for further usage The list of these variables can be obtained with module help intelmpi In particular the compiler drivers mpiifort mpifc mpiicc mpicc mpiicpc and mpicxx as well as the MPI application startup scripts mpiexec and mpirun are included in the search path The compiler drivers mpiifort mpiicc and mpiicpc use the Intel Compilers whereas mpifc mpicc and mpicxx are the drivers for the GCC compilers The necessary include directory MPI_ INCLUDE and the library directory MPI_LIBDIR are selected automatically by these compiler drivers We strongly recommend using the environment variables MPIFC MPICC MPICXX and MPIEXEC set by the module system for building and running an MPI application 77Currently these are not directly accessible but obscured by the wrappers we provide 84 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Option Description n lt A gt Number of processes to start H
121. ht mouse button A 1 7 Action Points Breakpoints Evaluation Points Watchpoints e The program will stop when it hits a breakpoint e You can temporarily introduce some additional C or FORTRAN style program lines at an Evaluation Point After creating a breakpoint right click on the STOP sign and select Properties Evaluate to type in your new program lines Examples are shown in table A 29 on page 115 An additional print statement FORTRAN write is not printf x f n x 20 accepted Conditional breakpoint if i 20 Sstop Stop after every 20 executions count 20 Jump to program line 78 goto 78 Visualize an array visualize a Table A 29 Action point examples e A watchpoint monitors the value of a variable Whenever the content of this variable memory location changes the program stops To set a watchpoint dive on the variable to display its Variable Window and select the Tools gt Watchpoint command You can save reload your action points by selecting Action Point Save All resp Load All A 1 8 Memory Debugging TotalView offers different memory debugging features You can guard dynamically allocated memory so that the program stops if it violates the boundaries of an allocated block You can hoard the memory so that the program will keep running when you try to access an already freed memory block Painting the memory will cause errors more probably especially reading and
122. hunks the aligned allocation lead to memory waste and thus can lead to significant increase of the memory footprint Note We cannot give a guarantee that the application will still run correctly if using memalign32 script Use at your own risk 5 13 Hardware Performance Counters Hardware Performance Counters are used to measure how certain parts like floating point units or caches of a CPU or memory system are used They are very helpful in finding performance bottlenecks in programs The Opteron and Xeon processor core offers 4 programmable 48 bit performance counters 5 13 1 Linux At the moment we offer the following interfaces for accessing the counters e Intel VTune Amplifier see chapter 8 2 1 on page 98 e Oracle Sun Collector see chapter 8 1 on page 93 e Vampir ch 8 3 on page 99 and Scalasca ch 8 4 on page 102 over PAPI Library Note At present the kernel module for use with Intel VTune is available on a few specific machines 74 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 5 13 2 Windows At the moment we offer only Intel VTune Amplifier to access hardware counters on Windows please refer to chapter 8 2 1 on page 98 for more information The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 75 6 Parallelization Parallelization for computers with shared memory SM means the automatic distribution of loop iterations over several processors automatic parallelization the explicit d
123. hybrid jobs is general and can be used for all available node types For Big SMP BCS systems there is also an alternative way to start the hybrid jobs see page 42 Non MPI Jobs Over Multiple Nodes It is possible to run jobs using more than one node which do not use MPI for communication e g some client server application In this case the user has to start and terminate the partial processes on nodes advised by LSF manually The distribution of slots over machines can be found in environment variables set by LSF see table 4 14 on page 52 An example script can be found in listing 11 on page 51 Note that calls for SSH are wrapped in the LSF batch Array Jobs Array jobs are the solution for running jobs which only differ in terms of the input e g running different input files in the same program in the context of parameter study sensitivity analysis Essentially the same job will be run repeatedly only differing by an environment variable The LSF option for array jobs is J The following example would print out Job 1 Job 10 bsub J myArray 1 10 echo Job LSB_JOBINDEX The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 39 The variable LSB_ JOBINDEX contains the index value which can be used to choose input files from a numbered set or as input value directly See example in listing 6 on page 46 Another way would be to have parameter sets stored one per row in a file The index can be used to select a corre
124. ig endian Read or write big endian binary data in FORTRAN programs Table 5 17 on page 63 provides a concise overview of the Intel compiler options 56Intel says for the Intel Compiler vectorization is the unrolling of a loop combined with the generation of packed SIMD instructions 571f the compiler fails to vectorise a piece of code you can influence it using pragmas e g pragma ivdep indicate that there is no loop carried dependence in the loop or pragma vector always aligned unaligned compiler is instructed to always vectorize a loop and ignore internal heuristtics There are more compiler pragmas available For more information please refer to the compiler documentation In Fortran there are compiler directives instead of pragmas used with the very same meaning Note Using pragmas may lead to broken code e g if mocking no loop dependence in a loop which has a dependence For this option the syntax ObN is still available on Linux but is deprecated 61 Objects compiled with ipo are not portable so do not use for libraries 62 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Linux Windows Description C Je compile but do not link o filename Fo filename specify output file name 00 Od no optimization useful for debugging O1 O1 some speed optimization 02 02 default speed optimization the generated code can be significantly larger 03 03 highest optimizatio
125. iler is not available anymore f77 is a wrapper script used to pass the necessary compatibility options like f77 to the f95 compiler This option has several suboptions Using this option without any explicit suboption list expands to ftrap none f77 all which enables all compatibility features and also mimics FORTRAN 77 s behavior regarding arithmetic exception trapping We recommend adding f77 ftrap common in order to revert to f95 settings for error trapping which is considered to be safer When linking to old f77 object binaries you may want to add the option xlang f77 at the link step For information about shared memory parallelization refer to chapter 6 1 4 on page 79 5 6 1 Frequently Used Compiler Options Compute intensive programs should be compiled and linked with the optimization options which are contained in the environment variable FLAGS _FAST Since the Studio compiler may produce 64bit binaries as well as 32bit binaries and the default behavior is changing across compiler versions and platforms we recommend setting the bit width explicitly by using the FLAGS ARCH64 or SFLAGS_ARCH32 environment variables The often used option fast is a macro expanding to several individual options that are meant to give the best performance with one single compile and link option Note however that the expansion of the fast option might be different across the various compilers compiler releases or compilation platforms To
126. ily in version 11 1 now provides the default FOR TRAN C C compilers on our Linux machines Although the Intel compilers in general generate very efficient code it can be expected that AMD s processors are not the main focus of the Intel compiler team As alternatives the Oracle Studio compilers and PGI compilers are available on Linux too Depending on the code they may offer better performance than the Intel compilers The Intel compiler offers interesting features and tools for OpenMP programmers see chapter 6 1 3 on page 78 and 7 4 2 on page 92 The Oracle compiler offers comparable tools see chapter 7 4 1 on page 91 A word of caution As there is an almost unlimited number of possible combinations of compilers and libraries and also the two addressing modes 32 and 64 bit we expect that there will be problems with incompatibilities especially when mixing C compilers On Windows the Microsoft Visual Studio environment is installed supporting the Mi crosoft Visual C compiler as well as Intel FORTRAN 95 and C compilers 5 2 General Hints for Compiler and Linker Usage Lin To access non default compilers you have to load the appropriate module file You can then access the compilers by their original name e g g gcc gfortran or via the environment variables CXX CC or FC However when loading more than one compiler module you have to be aware that the environment variables point to the last compiler loaded
127. in a virtually error-free environment. Due to the fact that Lustre works over InfiniBand (IB), it is also affected any time IB is impacted. If your batch job uses the HPCWORK file system, you should set this parameter:

#BSUB -R "select[hpcwork]"

This will ensure that the job will run on machines with an up and running Lustre file system. On some machines (mainly the hardware from the pre-Bull installation and some machines from Integrative Hosting) the HPCWORK is connected via Ethernet instead of InfiniBand, providing no advantage in terms of speed in comparison to the HOME and WORK file systems. If your batch job does a lot of input/output in HPCWORK, you should set this parameter:

#BSUB -R "select[hpcwork_fast]"

This will ensure that the job will run on machines with a fast connection to the Lustre file system.

Parallel Jobs: If you want to run a job in parallel, you need to request more compute slots. To submit a parallel job with the specified number of processes, use the option -n <min_proc>[,<max_proc>].

Shared Memory Parallelization: Nowadays, shared memory parallelized jobs are usually OpenMP jobs. Nevertheless, you can use other shared memory parallelisation paradigms (like pthreads) in a very similar way. In order to start a shared memory parallelized job, use #BSUB -a openmp in your script, in addition to the -n parameter for the number of threads (a job-script sketch combining these options is given below). Note: This option will set -R "span[hosts=1]", which ensures that you get the requested compute
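A minimal sketch of such a shared-memory job that also asks for a machine with a fast HPCWORK connection; the job name, runtime, memory value, directory and program name are placeholders chosen for this example.

#!/usr/bin/env zsh
#BSUB -J omp_hpcwork
#BSUB -o omp_hpcwork.%J
#BSUB -W 1:00
#BSUB -M 2048
### 12 slots = 12 OpenMP threads; -a openmp also sets span[hosts=1]
#BSUB -n 12
#BSUB -a openmp
### only schedule on machines with a fast Lustre connection
#BSUB -R "select[hpcwork_fast]"

cd $HPCWORK/mydata
./a.out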
128. indow to enable startup of a parallel program run The relevant items to adjust are the Tasks item number of MPI processes to start and the Parallel System item The latter has to be set according to the MPI vendor used The Classic Launch helps to start a debug session from command line without any su perfluous clicks in the GUI It is possible to attach to a subset of processes and to detach reattach again The arguments that are to be added to the command line of mpiexec depend on the MPI vendor For Intel MPI and Open MPI use the flag tv to enable the Classic Launch SPSRC pex a20 MPIEXEC tv np 2 a out lt input When the GUI appears type g for go or click Go in the TotalView window TotalView may display a dialog box stating Process is a parallel job Do you want to stop the job now Click Yes to open the TotalView debugger window with the source window and leave all processes in a traced state or No to run the parallel application directly http www idris fr su Scalaire vargas tv MPI pdf Thttp www spscicomp org ScicomP14 talks hinkel tv pdf The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 117 You may switch to another MPI process by e Clicking on another process in the root window e Circulating through the attached processes with the P or P buttons in the process window Open another process window by clicking on one of the attached processes in the root window with your right mouse button
129. ing the execution history The ReplayEngine restores the whole program states which allows the developer to work back from a failure error or even a crash The ability of stepping forward and backward through your code can be very helpful and reduce the amount of time for debugging dramat ically because you do not need to restart your application if you want to explore a previous program state Furthermore the following replay items are supported e Heap memory usage e Process file and network I O Thread context switches e Multi threaded applications e MPI parallel applications Distributed applications e Network applications The following functionality is provided First you need to activate the ReplayEngine Debug Enable ReplayEngine GoBack runs the program backwards to a previous breakpoint e Prev jumps backwards to the previous line function call Unstep steps backwards to the previous instruction within the function Caller jumps backwards to the caller of the function A 1 10 Offline Debugging TVScript If interactive debugging is impossible e g because the program has to be run in the batch system due to problem size an interesting feature of the TotalView debugger called TVScript can be helpful Use the tvscript shell command to define points of interest in your program and corresponding actions for TotalView to take TVScript supports serial multithreaded and MPI programming models and has full access to
130. ing your program's source code,
- the Stack Trace Pane, displaying the call stack,
- the Stack Frame Pane, displaying all the variables associated with the selected stack routine,
- the Tabbed Pane, showing the threads of the current process (Threads subpane), the MPI processes (Processes subpane) and listing all breakpoints, action points and evaluation points (Action Points / Threads subpane),
- the Status Bar, displaying the status of the current process and thread,
- the Toolbar, containing the action buttons.

[Figure: screenshot of the TotalView process window, showing the menu bar (File, Edit, View, Group, Process, Thread, Action Point, Debug, Tools, Window), the Group Control toolbar (Go, Halt, Kill, ...), the Stack Trace, Stack Frame and Registers panes, the Source Pane with the function machs in bsp_PPCES_1.f90, and the Action Points / Processes / Threads tabs.]

A.1.4 Setting a Breakpoint
- If the right function is already displayed in the Source Pane, just click on a boxed line number of an executable statement once to set a breakpoint. Clicking again will delete the breakpoint.
- Search the function with the View -> Lookup Function command first.
- If the function is in the current call stack
131. instrument-functions flag with GNU compilers to compile the object files that contain functions to be traced. ITC is then able to obtain information about the functions in the executable. Run the compiled binary the usual way. After the program terminates, you get a message from the Trace Collector which says where the collected information is saved (an .stf file). This file can be analyzed with the ITA GUI in the usual way.

Linux example ($PSRC/pex/891):
$MPICC -tcollect pi.c
$MPIEXEC -trace -np 2 a.out
traceanalyzer a.out.stf

There are a lot of other features and operating modes, e.g. binary instrumentation with itcpin, tracing of non-correct programs (e.g. containing deadlocks), tracing of MPI file I/O, and more. More documentation on ITAC may be found in /opt/intel/itac/<VERSION>/doc and at http://www.intel.com/cd/software/products/asmo-na/eng/cluster/tanalyzer/index.htm

8.3 Vampir (Lin)

Vampir is a framework for the collection and visualization of event-based performance data. The collection of events is managed by a set of libraries that are activated at link time. It consists of two separate units: the instrumentation and measurement package (vampirtrace) and the visualization package (vampir or vampir next generation). This tool is currently deployed in collaboration with the VI-HPS group.

Measurement: Vampir is a tool suitable for the analysis of parallel and distributed applications and allows the tracing of MPI communication a
132. ion can be improved by integrating frequently called small subrou tines into the calling subroutines inlining This will not only eliminate the cost of a function call but also give the compiler more visibility into the nature of the operations performed thereby increasing the chances of generating more efficient code Consider the following general program tuning hints e Turn on high optimization while compiling The use of SFLAGS FAST options may be a good starting point However keep in mind that optimization may change rounding errors of floating point calculations You may want to use the variables supplied by the compiler modules An optimized program runs typically 3 to 10 times faster than the non optimized one e Try another compiler The ability of different compilers to generate efficient executables varies The runtime differences are often between 10 and 30 e Write efficient code that can be optimized by the compiler We offer a lot of materials videos presentations talks tutorials etc that are a good introduction into this topic please refer to https sharepoint campus rwth aachen de units rz HPC public Lists Presentations and Training Material Events aspx e Try to perform as little input and output as possible and bundle it into larger chunks e Try to allocate big chunks of memory instead of many small pieces e g use arrays instead of linked lists if possible e Access memory continuously in order to reduce
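The hint about contiguous memory access can be illustrated with a small C sketch (the array size is arbitrary): in C the rightmost index is contiguous in memory, so the inner loop should run over it.

#define N 2048
static double a[N][N], b[N][N];

void add_scaled(double s)
{
    /* stride-1 accesses: i over rows, j over the contiguous columns;
       swapping the two loops would jump N doubles on every access */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] += s * b[i][j];
}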
133. ionally leave the $FLAGS_AUTOPAR environment variable empty (see 6.1.5.2 on page 81).

6.1.2 Memory Access Pattern and NUMA

Today's modern computer systems have a NUMA architecture (see chapter 2.1.1 on page 12). The memory access pattern is crucial if a shared memory parallel application should not only run multithreaded but also perform well on NUMA computers. The data accessed by a thread should be located locally in order to avoid the performance penalties of remote memory access. A typical example of a bad memory access pattern is to initialize all data from one thread, i.e. in a serial program part, before using the data with many threads. Due to the standard first-touch memory allocation policy in current operating systems, all data initialized from one thread is placed in the local memory of the current processor node. All threads running on a different processor node have to access the data from that memory location over the slower link. Furthermore, this link may be overloaded with multiple simultaneous memory operations from multiple threads. You should initialize the in-memory data in the same pattern as it will be used during the computation (a short sketch follows below).

6.1.3 Intel Compilers (Lin / Win)

The Intel Fortran/C/C++ compilers support OpenMP via the compiler/linker option -openmp (/Qopenmp on Windows). This includes nested OpenMP and tasking, too. If OMP_NUM_THREADS is not set, an Open
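The first-touch advice above can be illustrated with a small OpenMP sketch in C (array size and loop body are arbitrary for the example): the data is initialized with the same parallel loop schedule that is later used for the computation, so every thread touches, and thereby places, its own part of the arrays.

#include <stdlib.h>
#include <omp.h>

#define N 100000000L

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));

    /* first touch in parallel: each page ends up close to the thread
       that will later work on it */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++) {
        a[i] = 0.0;
        b[i] = (double)i;
    }

    /* compute loop with the same (static) schedule: mostly local accesses */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] += 2.0 * b[i];

    free(a);
    free(b);
    return 0;
}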
134. is broken if using NX, e.g. the LD_LIBRARY_PATH environment variable is not set properly. To repair the environment, use the module reload command.

4.1.3 Kerberos

Kerberos is a computer network authentication protocol. It is not used extensively in the HPC Cluster yet, but is becoming more and more important. A Kerberos ticket is needed to get access to any services using Kerberos. It will be granted automatically if you are logged in using ssh, unless you are using a self-made ssh user key. This ticket has a limited lifetime, typically 24h.
Note: You can obtain a valid ticket by calling the command kinit. This utility will ask for your cluster password and will create a ticket valid for another 24 hours.
Note: With the klist utility you can check your Kerberos ticket.

4.1.4 cgroups

Control Groups (cgroups) provide a mechanism which can be used for partitioning resources between tasks and for resource tracking purposes on Linux. We have now activated the cgroups memory subsystem on a range of HPC Cluster frontends. This means that there are now limits on how much physical memory and swap space a single user can expend. Current usage and limits are shown by the command memquota. The cgroups CPU subsystem is also active on the frontends and ensures the availability of a minimal share of CPU time for all users.

4.2 Login to Windows

We use a load balancing system for the cluster-win.rz.rwth-aachen.de frontend that forwards any connection transparently to o
135. is supported using the standard OpenMP environment variables Note The support for OpenMP v3 0 nesting features is available as of version 4 4 of GCC compilers 6 1 6 PGI Compilers Lin To build an OpenMP program with the PGI compilers the option mp must be supplied during compile and link steps Explicit parallelization via OpenMP compiler directives may be combined with automatic parallelization cf 6 1 6 2 on page 82 although loops within parallel OpenMP regions will not be parallelized automatically The worker thread s stack size can be increased via the environment variable MPSTKZ megabytesM or via the OMP_STACKSIZE environment variable Threads at a barrier in a parallel region check a semaphore to determine if they can proceed If the semaphore is not free after a certain number of tries the thread gives up the processor for a while before checking again The MP_ SPIN variable defines the number of times a thread checks a semaphore before idling Setting MP_ SPIN to 1 tells the thread never to idle This can improve performance but can waste CPU cycles that could be used by a different process if the thread spends a significant amount of time before a barrier include user threads such as the main thread Setting SUNW_MP_MAX_POOL_ THREADS to 0 forces the thread pool to be empty and all parallel regions will be executed by one thread The value specified should be a non negative integer The default value is 1023 This environme
136. istribution of work over the processors by compiler directives OpenMP or function calls to threading libraries or a combination of those Parallelization for computers with distributed memory DM is done via the explicit dis tribution of work and data over the processors and their coordination with the exchange of messages Message Passing with MPI MPI programs run on shared memory computers as well whereas OpenMP programs usu ally do not run on computers with distributed memory As a consequence MPI programs can use virtually all available processors of the HPC Cluster whereas OpenMP programs can use up to 128 processors of a Bull SMP BCS node or up to 1024 hypercores of the Bull ScaleMP node For large applications the hybrid parallelization approach a combination of coarse grained parallelism with MPI and underlying fine grained parallelism with OpenMP might be attractive in order to efficiently use as many processors as possible Please note that long running computing jobs should not be started interac tively Please use the batch system see chapter 4 5 on page 35 which determines the distri bution of the tasks to the machines to a large extent We offer examples using the different parallelization paradigms Please refer to chapter 1 3 on page 9 for information how to use them 6 1 Shared Memory Programming OpenMPY is the de facto standard for shared memory parallel programming in the HPC realm The OpenMP API is defined
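A minimal sketch of the hybrid approach mentioned above: MPI between nodes, OpenMP threads within a node. Compile it with $MPICC plus the compiler's OpenMP option; the printed text is only for illustration.

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int rank, provided;
    /* only the main thread will call MPI, so FUNNELED support suffices */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("MPI rank %d, OpenMP thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}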
137. l example scripts for LSF.
Note: We do not recommend copying the scripts from this PDF file by Ctrl-C/Ctrl-V. Instead, use the scripts from the $PSRC/pis/LSF directory or download them from the Wiki:
- LEO Offload Job: listing 1 on page 20, or in the Wiki (https://wiki2.rz.rwth-aachen.de/download/attachments/3801235/phi_leo.sh.txt)
- Native Job: listing 2 on page 21, or in the Wiki
- MPI Job: listing 3 on page 22, or in the Wiki

Listing 1: $PSRC/pis/LSF/phi_leo.sh
#!/usr/bin/env zsh
### Job name
#BSUB -J PHI_LEO_JOB
### File path where STDOUT will be written, the %J is the job id
#BSUB -o PHI_LEO_JOB.%J.o
### Different file for STDERR, if not to be merged with STDOUT
#BSUB -e PHI_LEO_JOB.%J.e
### Request the time you need for execution in minutes.
### The format for the parameter is: [hour:]minute,
### that means for 80 minutes you could also use this: 1:20
#BSUB -W 80
### Request virtual memory you need for your job in MB
#BSUB -M 1024
### Specify your mail address
#BSUB -u user@rwth-aachen.de
### Send a mail when job is done
#BSUB -N
### Request the number of compute slots you want to use
#BSUB -n 16
### Use esub for Phi
#BSUB -a phi
### Now specify the type of
138. le Studio compiler has a built-in time measurement function, gethrtime.

Linux FORTRAN example with the Oracle Studio compiler:
INTEGER*8 gethrtime
REAL*8 second
second = 1.d-9 * gethrtime()

In FORTRAN there is an intrinsic time measurement function called SYSTEM_CLOCK. The time value returned by this function can overflow, so take care about it. The following code can be used on the Windows platform to get a high-precision, low-overhead real-time timer:

#include <Windows.h>
#define Li2Double(x) ((double)((x).HighPart) * 4.294967296E9 + (double)((x).LowPart))
double SECOND(void)
{
    LARGE_INTEGER time, freq;
    QueryPerformanceCounter(&time);
    QueryPerformanceFrequency(&freq);
    return Li2Double(time) / Li2Double(freq);
}

Please be aware that by including Windows.h some unexpected side effects might occur, such as the definition of the macros min and max, which can conflict with some functions of the C++ STL, for example.

5.11 Memory Usage

To get an idea how much memory your application needs, you can use the memusage command. Start your program on the frontend using memusage as a wrapper and stop it with Ctrl-C after some time. In most applications most of the memory is allocated at the beginning of the runtime. Now you can round up the virtual memory peak and use it as a parameter for the batch system. Example:

memusage sleep 1
VmPeak: 3856 kB

For MPI programs you have to insert the wrapper just b
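On the Linux side, a portable way to take wall-clock timings in parallel codes is the OpenMP routine omp_get_wtime (or MPI_Wtime in MPI programs). The following minimal C sketch, with an arbitrary dummy workload, shows the usual pattern; compile it with the OpenMP option of your compiler.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    double t0 = omp_get_wtime();          /* wall-clock time in seconds */

    double s = 0.0;                       /* dummy workload to be timed */
    for (long i = 0; i < 100000000L; i++)
        s += (double)i;

    double t1 = omp_get_wtime();
    printf("result %.3e, elapsed %.6f s\n", s, t1 - t0);
    return 0;
}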
139. le package provides the dynamic modification of the user s environment Initial ization scripts can be loaded and unloaded to alter or set shell environment variables such as PATH to choose for example a specific compiler version or use software packages The need to load modules will be described in the according software sections in this document The advantage of the module system is that environment changes can easily be undone by unloading a module Furthermore dependencies and conflicts between software packages can be easily controlled Color coded warning and error messages will be printed if conflicts are detected The module command is available for the zsh ksh and tcsh shells csh users should switch to tcsh because it is backward compatible to csh Note bash users have to add the line Jusr local_host etc bashrc into bashre to make the module function available The most important options are explained in the following To get help about the module command you can either read the manual page man module or type module help to get the list of available options To print a list of available initialization scripts use module avail This list can depend on the platform you are logged in to The modules are sorted in categories e g CHEMISTRY and DEVELOP The output may look like the following example but will usually be much longer Soria usr local_rwth modules modulefiles linux linux64 DEVELOP intel 11 1 openm
140. le system. Programs can perform I/O on the Lustre file system without modification. Nevertheless, if your programs are I/O intensive, you should consider optimizing them for parallel I/O. For details on this technology refer to
- http://www.whamcloud.com

4.3.2.2 Mental Model

A Lustre setup consists of one metadata server (MDS) and several object storage servers (OSS). The actual contents of a file are stored in chunks on one or more OSSs, while the MDS keeps track of file attributes (name, size, modification time, permissions) as well as which chunks of the file are stored on which OSS. Lustre achieves its throughput performance by striping the contents of a file across several OSSs, so I/O performance is not that of a single disk or RAID (hundreds of MB/s) but that of all OSSs combined (up to 5 GB/s sequential).

An example: You want to write a 300 MiB file with a stripe size of 16 MiB (19 chunks) across 7 OSSs. Lustre would pick a list of 7 out of all available OSSs. Then your program would send the chunks directly to each OSS, like this:

OSS:     1         2         3          4          5          6      7
Chunks:  1, 8, 15  2, 9, 16  3, 10, 17  4, 11, 18  5, 12, 19  6, 13  7, 14

So when your program writes this file, it can use the bandwidth of all requested OSSs, the write operation finishes sooner, and your program has more time left for computing.

4.3.2.3 Optimization

If your MPI application requires large amounts of disk I/O, you shoul
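How a file or directory is striped can be inspected and changed with the lfs client tool, assuming it is installed on the nodes; the directory name below is a placeholder. This is a sketch only; which stripe counts are sensible depends on your file sizes and on the number of storage targets actually available.

# show the current striping of a file or directory
lfs getstripe $HPCWORK/mydata

# files created below this directory from now on are striped over 8 targets
lfs setstripe -c 8 $HPCWORK/mydata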
141. link your program with the Intel MKL 9 4 The Oracle Sun Performance Library Lin The Oracle Sun Performance Library is part of the Oracle Studio software and contains highly optimized and parallelized versions of the well known standard public domain libraries available from Netlib http www netlib org LAPACK version 3 BLAS FFTPACK ver sion 4 and VFFTPACK version 2 1 from the field of linear algebra Fast Fourier trans forms and solution of sparse linear systems of equations Sparse Solver SuperLU see http crd Ibl gov xiaoye SuperLU The studio module sets the necessary environment variables To use the Oracle performance library link your program with the compiler option xlic_lib sunperf The performance of FORTRAN programs using the BLAS library and or intrinsic functions can be improved with the compiler option xknown _lib blas intrinsics The corresponding routines will be inlined if possible The Performance Library contains parallelized sparse BLAS routines for matrix matrix multiplication and a sparse triangular solver Linpack routines are no longer provided It is strongly recommended to use the corresponding LAPACK routines instead Many of the contained routines have been parallelized using the shared memory pro gramming model Compare the execution times To use multiple threads set the OMP NUM_ THREADS variable accordingly PSRC pex 920 export OMP_NUM_THREADS 4 88http software intel com sites prod
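A minimal sketch of building and running a Fortran program against the performance library as described above; the source file name is a placeholder.

module load studio
# -xlic_lib=sunperf pulls in the tuned BLAS/LAPACK/FFT routines
f95 $FLAGS_FAST $FLAGS_ARCH64 -xlic_lib=sunperf solver.f90 -o solver

# many of the contained routines are parallelized; choose the thread count
export OMP_NUM_THREADS=4
./solver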
142. ll in kilobytes only affect the initial master thread The number of threads to be started for each parallel region may be specified by the environment variable OMP_NUM_ THREADS which is set to 1 per default on our HPC Cluster The OpenMP standard does not specify the number of concurrent threads to be started if OMP_NUM_ THREADS is not set In this case the Oracle and PGI compilers start only a single thread whereas the Intel and GNU compilers start as many threads as there are processors available Please always set the OMP_NUM_ THREADS environment variable to a reasonable value We especially warn against setting it to a value greater than the number of processors available on the machine on which the program is to be run On a loaded system fewer threads http www openmp org http www compunity org 76 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 may be employed than specified by this environment variable because the dynamic mode may be used by default Use the environment variable OMP_ DYNAMIC to change this behavior If you want to use nested OpenMP the environment variable OMP_NESTED TRUE has to be set Beginning with the OpenMP v3 0 API the new runtime functions OMP_ THREAD_ LIMIT and OMP_MAX_ACTIVE_LEVELS are available that control nested be havior and obsolete all the old compiler specific extensions Note Not all compilers support nested OpenMP 6 1 1 Automatic Shared Memory Parallelization of Loops Au
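In practice the controls discussed above are set in the shell (or in the job script) before the program is started, for example (the values are arbitrary):

export OMP_NUM_THREADS=8          # threads per parallel region
export OMP_NESTED=TRUE            # allow nested parallel regions
export OMP_THREAD_LIMIT=16        # OpenMP 3.0: overall thread limit
export OMP_MAX_ACTIVE_LEVELS=2    # OpenMP 3.0: depth of active nesting
export OMP_DYNAMIC=FALSE          # do not let the runtime shrink the team
./a.out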
143. lly link against the OpenMP library You should therefore use the same compiler options for linking as you used for compiling Otherwise the compiler may not generate all needed linker options To link the objects to the program jacobi exe you have to use CXX FLAGS_DEBUG FLAGS_FAST FLAGS_OPENMP jacobi o main o o jacobi exe Now after having built the executable you can run it The example program is an iter ative solver algorithm with built in measurement of time and megaflops per second Via the environment variable 0MP__NUM_ THREADS you can specify the number of parallel threads with which the process is started Because the jacobi exe program needs input you have to supply an input file and start export OMP_NUM_THREADS 1 jacobi exe lt input After a few seconds you will get the output including the runtime and megaflop rate which depend on the load on the machine As you built a parallel OpenMP program it depends on the compiler with how many threads the program is executed if the environment variable O0MP_NUM_ THREADS is not explicitly set In the case of the GNU compiler the default is to use as many threads as processors are available As a next step you can double the number of threads and run again export OMP_NUM_THREADS 2 jacobi exe lt input Now the execution should have taken less time and the number of floating point operations per 100Tf you are not using one of our cluster systems the values of the enviro
144. lt num gt and using the quota command 4 6 3 Limitations Jobs submitted to the JARA HPC Partition are generally limited to a maximum run time of 72 hours Longer running computations have to be divided into smaller parts and can be submitted as chain jobs see chapter 4 5 1 on page 40 Currently we do not know about any limitations concerning the number of nodes per MPI job with Open MPI With Intel MPI it is currently not possible to use more than 64 128 compute nodes per MPI job This limitation is under technical investigation The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 57 5 Programming Serial Tuning 5 1 Introduction The basic tool in programming is the compiler which translates the program source to ex ecutable machine code However not every compiler is available for the provided operating systems On the Linux operating system the freely available GNU GCC compilers are the some what natural choice Code generated by these compilers usually performs acceptably on the cluster nodes Since version 4 2 the GCC compilers offer support for shared memory paral lelization with OpenMP Since version 4 of the GNU compiler suite a FORTRAN 95 compiler gfortran is available Code generated by the old g77 FORTRAN compiler typically does not perform well so gfortran is recommended To achieve the best possible performance on our HPC Cluster we recommend using the Intel compilers The Intel compiler fam
145. main name
slaves=$(uniq $LSB_DJOB_HOSTFILE | grep -v $master)
echo "Master: $master   Slaves: $slaves"

# start worker processes on the slave nodes using the ssh wrapper
for i in $(uniq $LSB_DJOB_HOSTFILE | grep -v $master); do
    # all nodes but not the master, in the background
    echo "starting on host $i"
    ssh $i "OMP_NUM_THREADS=$(grep $i $LSB_DJOB_HOSTFILE | wc -l) worker.exe" &
done

# start the server process on the master node
OMP_NUM_THREADS=$(grep $master $LSB_DJOB_HOSTFILE | wc -l) server.exe

# after it has finished, don't forget to terminate the worker processes

Delete a Job: For an already submitted job you can use the bkill command to remove it from the batch queue:

bkill <job_ID>

If you want to kill all your jobs, please use this:

bkill 0

LSF Environment Variables: There are several environment variables you might want to use in your submission script, see table 4.14 on page 52.
Note: These variables will not be interpreted in combination with the magic cookie #BSUB in the submission script.

Environment Variable : Description
LSB_JOBNAME : The name of the job
LSB_JOBID : The job ID assigned by LSF
LSB_JOBINDEX : The job array index
LSB_HOSTS : The list of hosts selected by LSF to run the job
LSB_MCPU_HOSTS : The list of the hosts and the number of CPUs used
LSB_DJOB_HOSTFILE : Path to the hostfile
LSB_DJOB_NUMPROC : The numb
146. mber of loop iterations is unknown during compile time code is produced which decides at runtime whether a parallel execution of the loop is more efficient or not alternate coding With automatic parallelization it is furthermore possible to specify the number of used threads by the environment variable OMP_ NUM_ THREADS 6 1 4 4 Nested Parallelization The Oracle compilers OpenMP support includes nested parallelism You have to set the environment variable OMP_NESTED TRUE or call the runtime routine omp_set_nested to enable nested parallelism Oracle Studio compilers support the OpenMP v3 0 as of version 12 so it is recommended to use the new functions OMP_THREAD_ LIMIT and OMP_MAX_ ACTIVE_ LEVELS to control the nesting behavior see the OpenMP API v3 0 specification The Oracle Sun specific MP pragmas have been deprecated and are no longer supported Thus the xparallel option is obsolete now Do not use this option 3 However the older Oracle Sun specific variables SUNW_MP_MAX_POOL THREADS and SUNW_MP_ MAX NESTED LEVELS are still supported e SUNW_MP_MAX_ POOL THREADS specifies the size maximum number of threads of the thread pool The thread pool contains only non user threads threads that the libmtsk library creates It does not 80 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 6 1 5 GNU Compilers Lin As of version 4 2 the GNU compiler collection supports OpenMP with the option fopen
147. mp The OpenMP v3 0 support is as of version 4 4 included The default thread stack size can be set with the variable GOMP__ STACKSIZE in kilobytes or via the OMP_ STACKSIZE environment variable For more information on GNU OpenMP project refer to web pages http gcec gnu org projects gomp http gcc gnu org onlinedocs libgomp 6 1 5 1 Thread binding CPU binding of the threads can be done with the GOMP_CPU_ AFFINITY environment variable The variable should contain a space or comma separated list of CPUs This list may contain different kind of entries either single CPU numbers in any order a range of CPUs M N or a range with some stride M N S CPU numbers are zero based For example GOMP_CPU_AFFINITY 0 3 1 2 4 15 2 will bind the initial thread to CPU 0 the second to CPU 3 the third to CPU 1 the fourth to CPU 2 the fifth to CPU 4 the sixth through tenth to CPUs 6 8 10 12 and 14 respectively and then start assigning back to the beginning of the list GOMP_ CPU _ AFFINITY 0 binds all threads to CPU 0 A defined CPU affinity on startup cannot be changed or disabled during the runtime of the application 6 1 5 2 Autoparallelization Since version 4 3 the GNU compilers are able to parallelize loops automatically with the option ftree parallelize loops lt threads gt However the number of threads to use has to be specified at compile time and cannot be changed at runtime 6 1 5 3 Nested Parallelization OpenMP nesting
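As a short sketch of the two GNU-specific features described above (compiler version, CPU numbers and file names are arbitrary examples):

# auto-parallelize suitable loops; the thread count is fixed at compile time
gcc -O2 -ftree-parallelize-loops=4 prog.c -o prog   # add -lgomp if GOMP_* symbols are reported missing

# bind the threads of a gcc-built OpenMP binary: thread 0 -> CPU 0, thread 1 -> CPU 4, ...
export OMP_NUM_THREADS=4
export GOMP_CPU_AFFINITY="0 4 8 12"
./prog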
148. n For example the Oracle specific compiler switches xopenmp and xautopar automatically invoke high optimization x03 Compile with g to prepare the program for debugging and do not use optimization if possible e Intel compiler use openmp O0 g switches e Oracle Studio compiler use xopenmp noopt g switches e GCC compiler use fopenmp 00 g switches e PGI compiler use mp Minfo mp 00 g switches The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 119 A 2 3 3 Starting TotalView Start debugging your OpenMP program after specifying the number of threads you want to use OMP_NUM_THREADS nthreads totalview a out The parallel regions of an OpenMP program are outlined into separate subroutines Shared variables are passed as call parameters to the outlined routine and private variables are defined locally A parallel region cannot be entered stepwise but only by running into a breakpoint You may switch to another thread by e clicking on another thread in the root window or e circulating through the threads with the T or T buttons in the process window A 2 3 4 Setting a Breakpoint By right clicking on a breakpoint symbol you can specify its properties A breakpoint will stop the whole process group by default or only the thread for which the breakpoint is defined In case you want to synchronize all processes at this location you have to change the breakpoint into a barrier by right clicking on a line number and
149. n; may result in longer compilation times
-fast / /fast : a simple but less portable way to get good performance. The -fast option turns on -O3, -ipo, -static and -no-prec-div. Note: a processor with SSE3 extensions is required; this option will not work on older Opterons. Note: -no-prec-div enables optimizations that give slightly less precise results than full IEEE division
-inline-level=N / /ObN : N=0: disable inlining (default if -O0 is specified); N=1: enable inlining (default); N=2: automatic inlining
-xC / /QxC : generate code optimized for processor extensions C (see compiler manual). The code will only run on this platform
-axC1,C2 / /QaxC1,C2 : like -x, but you can optimize for several platforms, and a baseline code path is also generated
-vec-report[X] / /Qvec-report[X] : emits level X diagnostic information from the vectorizer; if X is left out, level 1 is assumed
-ip / /Qip : enables additional interprocedural optimizations for single-file compilation
-ipo / /Qipo : enables interprocedural optimization between files; functions from different files may be inlined
-openmp / /Qopenmp : enables generation of parallel code based on OpenMP directives
-openmp-stubs / /Qopenmp-stubs : compiles OpenMP programs in sequential mode; the OpenMP directives are ignored and a sequential version of the OpenMP library is linked
-parallel / /Qparallel : generates multi-threaded code for loops that can be safely executed in
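As a sketch of how a few of these options are typically combined on the command line (the source file name is a placeholder and the chosen option set is only an example):

# Linux: aggressive optimization, multi-file IPO, OpenMP, vectorizer diagnostics
$CC -O3 -ipo -openmp -vec-report2 solver.c -o solver

# Windows: the same options in their /Q form with the Intel compiler driver
icl /O3 /Qipo /Qopenmp /Qvec-report2 solver.c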
150. n RZ SharePoint (slides from Thomas Warschko, Bull). For the performance of shared memory jobs it is important to notice that not only does the BCS interconnect impose a NUMA topology consisting of the four nodes, but every node in turn consists of four NUMA nodes connected via QPI; thus this system exhibits two different levels of NUMAness in one single system.

2.3.8 ScaleMP System

The company ScaleMP provides software called vSMP Foundation to couple several standard x86-based servers into a virtual shared memory system. The software works underneath the operating system, so that a standard Linux is presented to the user. Executables for x86-based machines can run on the ScaleMP machines without recompilation or relinking. Our installation couples 16 boards, each equipped with 4 Intel Xeon X7550 processors and 64 GB of main memory. So a user sees a Single System Image on this machine with 512 cores and 3.7 TB of main memory. (A part of the physically available memory is used for system purposes and thus is not available for computing.) For the performance of shared memory jobs it is very important to notice that the ScaleMP system exhibits two different levels of NUMAness, where the NUMA ratio between onboard and offboard memory transfers is very high.

2.4 Innovative Computer Architectures (GPU Cluster)

In order to explore innovative computer architectures for HPC, the Center for Computing and Communication has installed a GP
151. n be quite slow over a weak network connection, and in case of a temporary network failure your program will die and the session is lost. In order to prevent this, we offer special frontends capable of running the X-Win32 and the NX software, see table 1.1 on page 9. Both of these software packages allow you to run remote X11 sessions even across low-bandwidth network connections, as well as reconnecting to running sessions.

4.1.2.1 X-Win32

X-Win32 from StarNet Communications (http://www.starnet.com) is commercial software. However, we decided to give an X-Win32 client to all HPC Cluster users free to use. You can download X-Win32 from Asknet (https://rwth.asknet.de, search for X-Win32). Upon the first time X-Win32 is started, click on Assistant to set up the connection. If your firewall asks for any new rules, just click on Cancel. Specify an arbitrary connection name and

Footnotes:
- To login from outside of the RWTH network you will need VPN: http://www.rz.rwth-aachen.de/go/id/oif
- The screen command is known to lose the value of the LD_LIBRARY_PATH environment variable just after it has started. In order to fix it, we changed the global initialization file /etc/screenrc. Be aware of this if you are using your own screen initialization file ($HOME/.screenrc).
- http://en.wikipedia.org/wiki/X_Window_System
- Older versions of ssh have to use the -X option.
152. n be used The percentage value ranges from 200 no core hours were used during the previous and the current month to 101 the combined usage for the current and the previous month is more than the three months allowance with negative values indicating that quota from the following month is being borrowed If the percentage value drops below 100 the project enters low priority mode The storage quotas are all project specific It is important to note that you have to store all project relevant data in home jara lt num gt work jara lt num gt or hpcwork jara lt num gt depending on the file system you would like to use and also to note that the quota space is shared among all project participants Please note that the quota is separate from the one for the user accounts e g home ab123456 The data in home jara lt num gt and work jara lt num gt are stored on an NFS file system where only home jara lt num gt is backed up The data in hpework jara lt num gt is stored on the high performance Lustre parallel file system and should be used for large files and parallel IO Each user can check the utilization of the Lustre file system hpcwork jara lt num gt with the quota command Unfortunately at the moment there exists no convenient method to check 56 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 the quota usage on the NFS file systems Only the technical project lead can check it by logging in as user jara
153. n more details As the Makefile already does everything but explain the steps the following paragraph will explain it step by step You have to start with compiling the source files in this case main cpp and jacobi cpp with the C compiler CXX FLAGS_DEBUG FLAGS_FAST FLAGS_OPENMP DREAD_INPUT c jacobi cpp main cpp This command invokes the C compiler stored in the environment variable CXX in this case g as you are using the GNU compiler collection The compiler reads both source files and puts out two object files which contain machine code The variables FLAGS DEBUG FLAGS FAST and SFLAGS OPENMP contain compiler flags to respectively put debugging information into the object code to optimize the code for high performance and to enable OpenMP parallelization The D option specifies C preprocessor directives to allow conditional compilation of parts of the source code The command line above is equivalent to writing just the content of the variables g g 03 ffast math mtune native fopenmp DREAD_INPUT c jacobi cpp main cpp You can print the values of variables with the echo command which should print the line above echo CXX FLAGS_DEBUG FLAGS_FAST FLAGS_OPENMP DREAD_INPUT c jacobi cpp main cpp After compiling the object files you need to link them to an executable You can use the linker ld directly but it is recommended to let the compiler invoke the linker and add appropriate options e g to automatica
154. nalysis of the /proc directory
pmap
cat /proc/cpuinfo : processor information
free : shows how much memory is used
top : process list
strace : logs system calls
file : determines the file type
uname -a : prints the name of the current system
ulimit -a : sets/gets limitations on the system resources
which <command> : shows the full path of <command>
dos2unix, unix2dos : DOS-to-UNIX text file format converter, and vice versa
screen : full-screen window manager that multiplexes a physical terminal

10.2 Useful Commands (Win)

hostname : prints the name of the current system
quota : shows quota values for Home and Work
set : prints environment variables
where <cmmd> : shows the full path of the <cmmd> command
windiff : compares files/directories graphically

92 Note: The utilities fsplit, lint, dumpstabs are shipped with the Oracle Studio compilers, thus you have to load the studio module to use them: module load studio

A Debugging with TotalView - Quick Reference Guide (Lin)

This quick reference guide describes briefly how to debug serial and parallel (OpenMP and MPI) programs written in C, C++ or FORTRAN 90/95 using the TotalView debugger from TotalView Technologies on the RWTH Aachen HPC Cluster. For further information about TotalView refer to the User's Manual and the Reference Guide, which can be found here: ht
155. nd mpi l machines are combined into one chassis rack R lt number gt up to five chassis are combined into one rack mtype mpi s mpi l for different machine types like mpi s smp s Table 4 9 Compute Units Using Compute Units you can e g tell LSF that you want all processes of your job to run on one chassis This would be done by selecting BSUB R cultype chassis maxcus 1 Which means J want to run on a chassis type chassis and I want to run on maz one chassis maxcus 1 37 Note The hostgroups are subject to change check the actual stage before submitting 3 Max Mem means the recommended maximum memory per process if you want to use all slots of a machine It is not possible to use more memory per slot because the operating system and the LSF needs approximately 3 of the total amount of memory The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 37 You normally do not want to mix SMP Nodes and MPI Nodes in one job so if you do not use the 4BSUB m option we set for you BSUB R cultype mtype maxcus 1 If you want to know which machines are in one compute unit you can use the bhosts X lt compute unit name gt command HPCWORK Lustre availability The HPCWORK file system is based on the Lustre high performance technology This file system offers huge bandwidth but it is not famous for their stability The availability goal is 95 which means some 2 weeks per year of planned downtime
156. ne of several available nodes Some clients using older 22Kerberos RFC http tools ietf org html rfc4120 Kerberos on Wikipedia http en wikipedia org wiki Kerberos _ protocol 30h ttp www kernel org doc Documentation cgroups cgroups txt 28 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 versions of the RDP protocol e g all available Linux rdesktop versions and RDP clients of older Windows desktop OS versions do not get along with the load balancing system very well If you use such a client it might be that you have to repeat the process of entering the domain your user name and password in a second login screen That is caused by the transparent redirection of your connection request you may have noticed that the computer name on the login screen changed Please enter the domain your username and password again to login The need to enter the user name and password twice is a known problem but for the time being there is no technical solution to this issue 4 2 1 Remote Desktop Connection To log into a Windows system from a Windows environment the program Remote Desktop Con nection is used You will find it under Start Programs Accessories Remote Desktop Connection Start Programme Zubeh r Remotedesktopverbindung After start Windows Security xj Remote Desktop Connection Enter your credentials Remote Desktop These credentials
157. nment variables CXX, FLAGS_DEBUG et cetera are probably not set and you cannot use them. However, as every compiler has its own set of compiler flags, these variables make life a lot easier on our systems, because you don't have to remember or look up all the flags for all the compilers and MPIs.
second should be about twice as high as before.

B.4 Computation in Batch Mode

After compiling the example and making sure it runs fine, you want to compute. However, the interactive nodes are not suited for larger computations. Therefore you can submit the example to the batch queue (for detailed information see chapter 4.5 on page 35). It will be executed when a compute node is available. To submit a batch job you have to use the command bsub, which is part of the workload management system Platform LSF (refer to 4.5.1 on page 35). The bsub command needs several options in order to specify the required resources, e.g. the number of CPUs, the amount of memory to reserve or the runtime:

bsub -J TEST -o output.txt -n 2 -R "span[hosts=1]" -W 15 -M 700 -a openmp \
     -u <your_email_address> -N \
     "module switch intel gcc/4.6; export OMP_NUM_THREADS=2; jacobi.exe < input"

You will get an email when the job is finished if you enter your email address instead of <your_email_address>. The output of the job will be written to output.txt in the current directory. A job-script version of the same request is sketched below. The same job can be
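The same request can also be written as a job script and submitted with bsub reading the script from standard input. This is a sketch only; the file name jacobi_job.sh is arbitrary.

#!/usr/bin/env zsh
#BSUB -J TEST
#BSUB -o output.txt
#BSUB -n 2
#BSUB -R "span[hosts=1]"
#BSUB -W 15
#BSUB -M 700
#BSUB -a openmp
#BSUB -u <your_email_address>
#BSUB -N

module switch intel gcc/4.6
export OMP_NUM_THREADS=2
jacobi.exe < input

# submit with:
#   bsub < jacobi_job.sh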
158. nsferring Files to the Cluster .... 31
4.3.2 Lustre Parallel Filesystem .... 32
4.4 Defaults of the RWTH User Environment (Lin) .... 33
4.4.1 Z-Shell (zsh) Configuration Files .... 33
4.4.2 The Module Package .... 34
4.5 The RWTH Batch Job Administration .... 35
4.5.1 The Workload Management System LSF (Lin) .... 35
4.5.2 Windows Batch System (Win) .... 53
4.6 JARA-HPC Partition .... 54
4.6.1 Project application .... 54
4.6.2 Resources (Core-hour quota) .... 55
4.6.3 Limitations .... 57
5 Programming / Serial Tuning .... 58
5.1 Introduction .... 58
5.2 General Hints for Compiler and Linker Usage (Lin) .... 58
5.3 ... .... 59
5.4 ... .... 61
5.5 Intel Compilers (Lin / Win) .... 61
5.5.1 Frequently Used Compiler Options .... 61
5.5.2 Tuning Tips .... 64
5.5.3 ... .... 64
5.6 Oracle Compilers (Lin) .... 65
5.6.1 Frequently Used Compiler Options .... 65
5.6.2 Tuning Tips .... 67
5.6.3 Interval Arithmetic (Lin) .... 69
5.7 GN
159. nt variable can prevent a single process from creating too many threads That might happen e g for recursively nested parallel regions e SUNW_MP_MAX_NESTED_ LEVELS specifies the maximum depth of active parallel regions Any parallel region that has an active nested depth greater than SUNW_ MP _MAX_NESTED_ LEVELS will be executed by a single thread The value should be a positive integer The default is 4 The outermost parallel region has a depth level of 1 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 81 Note Nested parallelization is NOT supported Note The environment variables OMP_ DYNAMIC does not have any effec Note OpenMP v3 0 standard is supported including all the nesting related rou tines However due to lack of nesting support these routines are dummies only For more information refer to http www pgroup com resources openmp htm or http www pgroup com resources docs htm t 75 6 1 6 1 Thread binding The PGI compiler offers some support for NUMA architectures with the option mp numa Using NUMA can improve performance of some parallel appli cations by reducing memory latency Linking mp numa also allows to use the environment variables MP_ BIND MP_ BLIST and MP_ SPIN When MP _ BIND is set to yes parallel processes or threads are bound to a physical processor This ensures that the operating system will not move your process to a different CPU while it is running Using MP _ BLIST you can
160. nux users who are new to our HPC Cluster to get a quick start 10 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 1 4 Further Information Please check our web pages for more up to date information http www rz rwth aachen de hpc The latest version of this document is located here http www rz rwth aachen de hpc primer News like new software or maintenance announcements about the HPC Cluster is provided through the rzcluster mailing list Interested users are invited to join this mailing list at http mailman rwth aachen de mailman listinfo rzcluster The mailing list archive is accessible at http mailman rwth aachen de pipermail rzcluster Please feel free to send feedback questions or problem reports to servicedesk rz rwth aachen de Have fun using the RWTH Aachen HPC Cluster The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 11 2 Hardware This chapter describes the hardware architecture of the various machines which are available as part of the RWTH Aachen University s HPC Cluster 2 1 Terms and Definitions Since the concept of a processor has become increasingly unclear and confusing it is necessary to clarify and specify some terms Previously a processor socket was used to hold one processor chip and appeared to the operating system as one logical processor Today a processor socket can hold more than one processor chip Each chip usually has multiple cores Each core may s
161. oduct bundles provides an integrated development performance anal ysis and tuning environment with features like highly sophisticated compilers and powerful libraries monitoring the hardware performance counters checking the correctness of multi threaded programs The basic components are e Intel Composer ch 5 5 on page 61 including Intel MKL ch 9 3 on page 105 e Intel MPI Library see chapter 6 2 3 on page 84 e Intel Trace Analyzer and Collector see chapter 8 2 2 on page 98 e Intel Inspector ch 7 4 2 on page 92 e Intel VTune Amplifier see chapter 8 2 1 on page 98 formerly Intel VTune Performance Analyzer e Intel Parallel Advisor not yet described here All tools but Parallel Amplifier can be used with no restrictions All tools are designed to work with binaries built with the Intel compilers but in general other compilers can be used as well In order for the tools to show performance data in correlation to your programs source code you need to compile with debug information g 84 bearing a lot of names Parallel Studio Parallel Studio 2011 Parallel Studio XE Cluster Toolkit Cluster Studio Cluster Studio XE the area of collection is still open The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 97 8 2 1 Intel VTune Amplifier The Intel VTune Amplifier XE is a powerful threading and performance optimization tool for C C and Fortran developers It has its own GUI and pro
162. ognizes several VampirTrace environment variables For further information on the meaning of these variables see the VampirTrace 5 5 3 documentation Use the M option to set the version of MPI to be used selectable values are OMPT CT OPENMPI MPICH2 MVAPICH2 INTEL As clear from the names for Open MPI the OPENMPT value and for Intel MPI the INTEL value are to be used The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 95 h cycles on insts on Same meaning as in table 8 26 on page 95 h fp_comp_ops_ exe x87 on Floating point counters on different execution fp_comp_ops_ exe mmx on units The sum divided by the runtime in fp comp ops exe sse fp seconds gives the FLOPS rate h cycles on dtlb_ misses any on A high rate of DTLB misses indicates an unpleasant memory access pattern of the program Large pages might help h Ilc reference on llc misses on Last level L3 cache references and misses h 12_rqsts references on l2__rqsts miss on L2 cahce references and misses h 11i hits on 11i misses on L1 instruction cache hits and misses Table 8 28 Hardware counter available for profiling with collect on Intel Nehalem CPUs Also here all processes must run on localhost in order to get the profiled data Open MPI example as above but additionally collect the MPI trace data SPSRC pex 814 OMP_NUM_THREADS 2 collect h cycles on insts on M OPENMPI mpiexec np 2 H hostname a out an
163. ogram with optimization delivers other results than without floating point optimization may be responsible There is a possibility to test this by optimizing the program carefully Please note that the environment variables FLAGS FAST and FLAGS_ FAST NO _FPOPT containing different sets of optimization flags for the last loaded compiler module If you use FLAGS_ FAST NO_FPOPT flag instead of FLAGS_ FAST the sequence of the floating point operations is not changed by the opti mization perhaps increasing the runtime Besides you have to consider that on the x86 platform floating point calculations do not necessarily conform to IEEE standard by default so rounding effects may differ between plat forms The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 89 7 3 Debuggers A Debugger is a tool to control and look into a running program It allows a programmer to follow the program execution step by step and see e g values of variables It is a powerful tool for finding problems and errors in a program For debugging the program must be translated with the option g and optimization should be turned off to facilitate the debugging process If compiled with optimization some vari ables may not be visible while debugging and the mapping between the source code and the executable program may not be accurate A core dump can be analyzed with a debugger if the program was translated with g Do not forget to increase the core file si
164. ongly recommend making a backup of your old projects before you use Visual Studio 2008 for the first time The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 71 5 10 Time measurements For real time measurements a high resolution timer is available However the measurements can supply reliable reproducible results only on an almost empty machine Make sure you have enough free processors available on the node The number of processes which are ready to run plus the number of processors needed for the measurement has to be less or equal to the number of processors On ccNUMA CPU s like Nehalem or Opteron be aware about processor placement and binding refer to 3 1 1 on page 24 User CPU time measurements have a lower precision and are more time consuming In case of parallel programs real time measurements should be preferred anyway The r_ ib library offers two timing functions r_rtime and r_ctime They return the real time and the user CPU time as double precision floating point numbers For information on how to use r_ lib refer to 9 8 on page 109 Depending on the operating system programming language compiler or parallelization paradigm different functions are offered to measure the time To get a listing of the file you can use cat PSRC include realtime h If you are using OpenMP the omp_get_wtime function is used in background and for MPI the MPI_ Wtime function Otherwise some operating system depend
165. ort routine 5 5 2 2 Interprocedural Optimization Traditionally optimization techniques have been limited to single routines because these are the units of compilation in FORTRAN With inter procedural optimization the compiler extends the scope of applied optimizations to multiple routines potentially to the program as a whole With the flag ip interprocedural optimiza tion can be turned on for a single source file i e the possible optimizations cover all routines in that file When using the O2 or O3 flags some single file interprocedural optimizations are already included If you use ipo instead of ip you turn on multi file interprocedural optimization In this case the compiler does not produce the usual object files but mock object files which include information used for the optimization The ipo option may considerably increase the link time Also we often see compiler bugs with this option The performance gain when using ipo is usually moderate but may be dramatic in object oriented programs Do not use ipo for producing libraries because object files are not portable if ipo is on 5 5 2 3 Profile Guided Optimization PGO When trying to optimize a program dur ing compile link time a compiler can only use information contained in the source code itself or otherwise supplied to it by the developer Such information is called static because it is passed to the compiler before the program has been built and henc
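A sketch of the multi-file IPO and profile-guided optimization workflow described here, using the Intel compiler's usual switches (-prof-gen / -prof-use); the file and directory names are placeholders.

# step 1: instrumented build - the binary collects profile data while it runs
$CC -O2 -prof-gen -prof-dir ./profdata solver.c -o solver
./solver < typical_input        # run with a representative workload

# step 2: feedback build - recompile using the collected profile (plus IPO)
$CC -O3 -ipo -prof-use -prof-dir ./profdata solver.c -o solver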
166. ossible hosts are denoted in the table 1 1 on page 9 Enter the username and the password In the next step choose root installation Now you can open your X Win32 connection by clicking Start You may have to confirm that the host is a trusted machine Choose between Gnome or KDE session and start it by clicking on Launch 4 1 2 2 The NX Software You can download the NX client from http www nomachine com download php Upon the first time NX is started the NX Connection Wiz Session cluster x ard will help you to set up the connection All you need to get Insert server s name and port where you want to co started is to enter the session information By default you will Host cluster x r2 rwth aachen de be provided with a KDE desktop from which you can start ather programs Select type of your internet connection If your connection appears to be slow try out some con your co on appears to try out so figuration Especially enabling Configure Advanced gt Disable direct draw for screen rendering could make your Windows NX client faster If you are using the KDE graphical desktop environment you should disable toy features which produce useless updates of the screen Right click on the control bar choose Configure Panel or Kon trollleiste einrichten in German then Appearance Erscheinungsbild In the General Allgemein part disable both check boxes and save the configuration Sometimes the environment
167. ource code is weakened by optimization making debugging more difficult To use the Performance Analyzer with a C program you can use the option g0 in order not to prevent the compiler of inlining Otherwise performance might drop significantly 5 6 2 Tuning Tips The option xunroll n can be used to advise the compiler to unroll loops Conflicts caused by the mapping of storage addresses to cache addresses can be eased by the creation of buffer areas padding see compiler option pad With the option dalign the memory access on 64 bit data can be accelerated This alignment permits the compiler to use single 64 bit load and store instructions Otherwise the program has to use two memory access instructions If dalign is used every object file has to be compiled with this option With this option the compiler will assume that double precision data has been aligned on an 8 byte boundary If the application violates this rule the runtime behavior is undetermined but typically the program will crash On well behaved programs this should not be an issue but care should be taken for those applications that perform their own memory management switching the interpretation of a chunk of memory while the program executes A classical example can be found in some older FORTRAN programs in which variables of a COMMON block are not typed consistently The following code will break i e values other than 1 are printed when compiled with th
-parallel                                     auto-parallelization
-par-report X, -opt-report X /
-Qpar-report X, -Qopt-report X                emit diagnostic information from the auto-parallelizer or an optimization report
-g / -Zi                                      produces symbolic debug information in the object file
stack size                                    set the default stack size in bytes
-Xlinker <val> / -link <val>                  passes <val> directly to the linker for processing
-heap-arrays <size> / -heap-arrays:<size>     puts automatic arrays and temporary arrays on the heap instead of the stack

Table 5.17: Intel Compiler Options

5.5.2 Tuning Tips

5.5.2.1 The Optimization Report

To fully exploit the capabilities of an optimizing compiler, it is usually necessary to re-structure the program code. The Intel compiler can assist you in this process via various reporting functions. Besides the vectorization report (cf. Section 5.5.1 on page 61) and the parallelization report (cf. Section 6.1.3 on page 78), a general optimization report can be requested via the command line option -opt-report. You can control the level of detail in this report, e.g. -opt-report 3 provides the maximum amount of optimization messages. The amount of feedback generated by this compiler option can easily get overwhelming; therefore you can put the report into a file (-opt-report-file), or restrict the output to a certain compiler phase (-opt-report-phase) or source code routine (-opt-report-routine).
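A short sketch of how these reporting options are typically combined on the command line (source file and routine name are examples, and the option spellings follow the classic icc interface):

icc -O3 -opt-report 3 -c solver.c                            # maximum detail
icc -O3 -opt-report -opt-report-file=solver.rpt -c solver.c  # write the report to a file
icc -O3 -opt-report -opt-report-routine=daxpy -c solver.c    # restrict to one routine
icc -O3 -parallel -par-report 2 -c solver.c                  # feedback from the auto-parallelizer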
169. perated to serve the computational needs of researchers from the RWTH Aachen University and other universities in North Rhine Westphalia This means that every employee of one of these universities may use the cluster for research purposes Furthermore students of the RWTH Aachen University can get an account in order to become acquainted with parallel computers and learn how to program them This primer serves as a practical introduction to the HPC Cluster It describes the hard ware architecture as well as selected aspects of the operating system and the programming environment and also provides references for further information It gives you a quick start in using the HPC Cluster at the RWTH Aachen University including systems hosted for institutes which are integrated into the cluster If you are new to the HPC Cluster we provide a Beginner s Introduction in appendix B on page 121 which may be useful to do the first steps 1 1 The HPC Cluster The architecture of the cluster is heterogeneous The system as a whole contains a variety of hardware platforms and operating systems Our goal is to give users access to specific features of different parts of the cluster while offering an environment which is as homogeneous as possible The cluster keeps changing since parts of it get replaced by newer and faster machines possibly increasing the heterogeneity Therefore this document is updated regularly to keep up with the changes The
…pi/1.6.1mt    intel/12.1    openmpi/1.6.4 (default)    intel/13.1 (default)    openmpi/1.6.4mt

An available module can be loaded with

module load modulename

This will set all necessary environment variables for the use of the respective software. For example, you can either enter the full name like intel/11.1, or just intel, in which case the default (intel/13.1) will be loaded. A module that has been loaded before but is no longer needed can be removed by

module unload modulename

If you want to use another version of a software, e.g. another compiler, we strongly recommend switching between modules (loading another version by unloading and then loading may lead to a broken environment):

module switch oldmodule newmodule

This will unload all modules from bottom up to the oldmodule, unload the oldmodule, load the newmodule and then reload all previously unloaded modules. Due to this procedure the order of the loaded modules is not changed and dependencies will be rechecked. Furthermore, some modules adjust their environment variables to match previously loaded modules.

You will get a list of loaded modules with

module list

A short description of the software initialized by a module can be obtained by

module whatis modulename

and a detailed description by

module help modulename

The list of available categories inside of the GLOBAL category can be obtained…
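Putting these commands together, a typical interactive session might look like the following sketch (the module names and versions are only examples; check module avail for what is actually installed):

module list                              # currently loaded modules
module avail                             # everything available in the loaded categories
module load LIBRARIES                    # open an additional category
module load intel                        # loads the default version, e.g. intel/13.1
module switch intel intel/12.1           # change the compiler; dependencies are rechecked
module switch openmpi openmpi/1.6.4mt    # switch to the multi-threaded Open MPI
module whatis openmpi                    # one-line description
module unload acml                       # drop a module that is no longer needed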
…connects to the chosen processes only. It is possible to select a different subset of processes at any time during the debug session in the Group > Attach Subset dialog.

[Screenshot: the Attach Subset dialog, listing the MPI ranks of the MPI_FastTest processes on linuxnc002 with check boxes, the Attach All / Detach All buttons, and the filters Communicator, Array of Ranks, Talking to Rank, List of Ranks, Send / Receive / Unexpected, Apply Filters and Halt control group.]

A.2.2.3 Setting a Breakpoint

By right-clicking on a breakpoint symbol you can specify its properties. A breakpoint will stop the whole process group (all MPI processes, the default) or only one process. In case you want to synchronize all processes at this location, you have to change the breakpoint into a barrier by right-clicking on a line number…
…please consult the subsequent sections.

While autoparallelization tries to exploit multiple processors within a machine, automatic vectorization (cf. section 5.5 on page 61) makes use of instruction-level parallelism within a processor. Both features can be combined if the target machine consists of multiple processors equipped with vector units, as is the case on our HPC Cluster. This combination is especially useful if your code spends a significant amount of time in nested loops where the innermost loop can successfully be vectorized by the compiler while the outermost loop can be autoparallelized.

Common to autoparallelization and autovectorization is that both work on serial (i.e. not explicitly parallelized) code, which usually must be re-structured to take advantage of these compiler features.

Table 6.20 on page 77 summarizes the OpenMP compiler options. For the currently loaded compiler, the environment variables FLAGS_OPENMP and FLAGS_AUTOPAR are set to the corresponding flags for OpenMP parallelization and autoparallelization, respectively, as is explained in section 5.2 on page 58. A compile-line sketch using these variables follows below.

Compiler   FLAGS_OPENMP                 FLAGS_AUTOPAR
Oracle     -xopenmp                     -xautopar -xreduction
Intel      -openmp                      -parallel
GNU        -fopenmp (4.2 and above)     (empty)
PGI        -mp -Minfo=mp                -Mconcur -Minline

Table 6.20: Overview of OpenMP and autoparallelization compiler options

(Footnote 70: Although the GNU compiler has an autoparallelization option, FLAGS_AUTOPAR is intentionally left empty for it.)
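A minimal sketch of using the two flag variables (the program names are examples; the variables expand to the compiler-specific options of Table 6.20):

$CC $FLAGS_OPENMP -O3 omp_prog.c -o omp_prog            # explicitly parallelized OpenMP code
$FC $FLAGS_AUTOPAR -O3 serial_loops.f90 -o autopar      # let the compiler parallelize serial loops
OMP_NUM_THREADS=8 ./omp_prog                            # choose the thread count at run time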
…pplication. Please refer to the Scalasca (http://www.scalasca.org) software documentation for more details.

Example in C, summing up all three steps ($PSRC/pex/870):

skin $MPICC $FLAGS_DEBUG $FLAGS_FAST $FLAGS_ARCH64 $PSRC/cmj.c
scan $MPIEXEC -show -x EPK_TRACE -x EPK_TITLE -x EPK_LDIR -x EPK_GDIR -x ELG_BUFFER_SIZE -np 4 a.out
square epik_a_4_sum

Note: Instead of skin, scan and square you can also use scalasca -instrument, scalasca -analyse and scalasca -examine.

8.5 Runtime Analysis with gprof (Lin)

With gprof a runtime profile can be generated. The program must be compiled and linked with the option -pg. During the execution a file named gmon.out is generated, which can be analyzed with gprof <program>. With gprof it is easy to find out the number of calls of a program module, which is useful information for inlining (see the short sketch below).

Note: gprof assumes that all calls of a module are equally expensive, which is not always true. We recommend using the Callers-Callees info in the Oracle Performance Analyzer to gather this kind of information, as it is much more reliable. However, gprof is useful to get the exact function call counts.

9 Application Software and Program Libraries

9.1 Application Software

You can find a list of available application software and progr…
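A short gprof sketch (the program name is an example; note that -pg is needed both for compiling and for linking):

$CC -pg -O2 myprog.c -o myprog        # instrumented build
./myprog                              # normal run; writes gmon.out on exit
gprof myprog gmon.out > profile.txt   # flat profile plus call graph with exact call counts
less profile.txt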
174. processes on the ordered hosts HEE you can even specify a 0 for each host HEE c number of MICs HHH d comma separated list of MPI processes on the ordered MICs BSUB Jd hosts 1 0 mics 2 20 20 You need to reserve the hosts And each host needs at least one process otherwise the Job will not start 2 5 3 7 Module System There is no module system at the coprocessors Only one version of the Intel compiler loaded by default and one version of Intel MPI suffix mic are supported 2 5 3 8 Limitations The Intel Xeon Phi cluster is running in the context of innovative computation which means that we do not guarantee the availability At the moment we have the following limitations e Only one compiler version always the default Intel compiler and one MPI version in telmpi mic is supported e Intel MPI LSF does not terminate the job although your MPI application finished Please use a small run time limit 4BSUB W to save resources The job will terminate after reaching this limit e LEO is not supported within MPI jobs e Our mpi_ bind script see chapter 4 5 1 on page 43 is not working for jobs on Intel Xeon Phi Please refer to the Intel MPI manual for process binding 2 5 3 9 Further Information Introduction to the Intel Xeon Phi in the RWTH Compute Cluster Environment Slides 2013 08 07 Introduction to the Intel Xeon Phi in the RWTH Compute Cluster Environment Exercises 2013 08 07 https sha
175. programs It can be used for multithreaded OpenMP and MPI applications Furthermore since version 2 6 it can handle GPGPU programs written with NVIDIA Cuda For non GPU programs you should enable the check box Run without CUDA support The module is located in the DEVELOP category and can be loaded with module load ddt For full documentation please refer to http content allinea com downloads userguide pdf Note If DDT is running in the background e g using amp ddt amp then this process may get stuck some SSH versions cause this behaviour when asking for a password If this happens to you go to the terminal and use the fg or similar command to make DDT a foreground process or run DDT again without using amp 7 4 Runtime Analysis of OpenMP Programs If an OpenMP program runs fine using a single thread but not multiple threads there is probably a data sharing conflict or data race condition This is the case if e g a variable which should be private is shared or a shared variable is not protected by a lock The presented tools will detect data race conditions during runtime and point out the portions of code which are not thread safe Recommendation Never put an OpenMP code into production before having used a thread checking tool 7 4 1 Oracle s Thread Analyzer Lin Oracle Sun integrated the Thread Analyzer a data race detection tool into the Studio compiler suite The program can be instrumented while compiling
…provided on the home file system. We reserve the right to silently remove the snapshots in the work file system.

The hpcwork file system is accessible as $HPCWORK (/hpcwork/<username>) from the Linux part of the HPC Cluster and is currently not available from the Windows part. This high-performance Lustre file system (see chapter 4.3.2 on page 32) is intended for very large data consisting of not so many, but big and huge, files. You are welcome to use this file system instead of the WORK file system. There is no backup of the HPCWORK file system.

Note: The hpcwork file system is available from the old (legacy, non-Bull) part of the HPC Cluster, but with limited speed only, so do not run computations with a huge amount of input/output on the old machines.

Note: The constellation of the WORK and the HPCWORK Lustre file systems may be subject to change. Stay tuned.

Note: Every user has a limited space quota on the file systems. Use the quota command to figure out how much of your space is already used and how much is still available. Due to the number of HPC Cluster users, the quota in the home directory is rather small in order to reduce the total storage requirement. If you need more space or files, please contact us.

Note: In addition to the space, the number of files is also limited.

Note: The Lustre quotas on hpcwork are group quotas; this may have an impact on very old HPC Cluster accounts.
### Request the number of compute slots you want to use
### (this consists of all host threads/processes, without those on the MIC).
### You must specify -n 1, because otherwise the job will not start.
#BSUB -n 1

### Use esub for Phi
#BSUB -a phi

### Now specify the type of Phi job: native -> NATIVE job
#BSUB -Jd native

### Execute your native application
ssh_mic a.out

Listing 3: $PSRC/pis/LSF/phi_mpi.sh

#!/usr/bin/env zsh

### Job name
#BSUB -J PHI_MPI_JOB

### File / path where STDOUT will be written; the %J is the job id
#BSUB -o PHI_MPI_JOB.%J

### (off) Different file for STDERR, if not to be merged with STDOUT
# #BSUB -e PHI_MPI_JOB.e%J

### Request the time you need for execution in minutes.
### The format for the parameter is: [hour:]minute;
### that means for 80 minutes you could also use this: 1:20.
### IMPORTANT: At the moment your job will not automatically end when your
### program is finished; the job uses all the time you requested in your
### job script. Please be careful with the estimated duration.
#BSUB -W 80

### Request the virtual memory you need for your job in MB
#BSUB -M …
…r workflow. MPI specifies the interface, but not the implementation. Therefore there are plenty of implementations, for PCs as well as for supercomputers; free implementations are available as well as commercial ones which are particularly tuned for the target platform. MPI has a huge number of calls, although it is possible to write meaningful MPI applications employing just some 10 of these calls.

Like the compiler environment flags which are set by the compiler modules, we also offer MPI environment variables in order to make it easier to write platform-independent makefiles. However, these variables are only available on our Linux systems. Since the compiler wrappers and the MPI libraries relate to a specific compiler, a compiler module has to be loaded before the MPI module (a usage sketch follows below).

(Footnote: Refer to p. 170 / p. 190 in the PDF file http://www.pgroup.com/doc/pgifortref.pdf. All other shared-memory parallelization directives have to occur within the scope of a parallel region; nested PARALLEL / END PARALLEL directive pairs are not supported and are ignored. Refer to p. 182 / p. 202 in the PDF file ibidem.)

Some MPI libraries do not offer a C++ or a FORTRAN 90 interface for all compilers; e.g. the Intel MPI does not offer such interfaces for the Oracle compiler. If this is the case, an info message is printed while loading the MPI module.

• MPIEXEC: The MPI command used to start…
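As a sketch of the intended workflow with these wrapper and flag variables (the module versions and the program name are examples):

module switch intel intel/13.1            # compiler module first ...
module switch openmpi openmpi/1.6.4       # ... then the matching MPI module
$MPICC $FLAGS_FAST hello_mpi.c -o hello_mpi
$MPIEXEC -np 4 ./hello_mpi                # small interactive test run
$MPIEXEC $FLAGS_MPI_BATCH ./hello_mpi     # inside a batch job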
179. rces in the JARA HPC Partition you would first need to select between the two available node types of the RWTH Compute Cluster see chapter 2 2 on page 54 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 13 for more details on the available hardware and submit a project proposal electronically using one of the following forms e for Westmere MPI S L nodes https pound zam kfa juelich de jarabullw_ projekt e for Nehalem SMP S L nodes https pound zam kfa juelich de jarabulln_ projekt Applications for computing time on the JARA HPC partition can be submitted by any scientist of RWTH Aachen University Forschungszentrum J lich or German Research School for Simulation Sciences GRS qualified in his or her respective field of research Note In order to login to HPC Cluster the members of the Forschungszentrum J lich and GRS should go to https webapp rz rwth aachen de partner sso p fzj and follow the instructions there If your JARA HPC application is approved and granted compute time it would be assigned a JARA HPC four digit project number and an identifier similar to jara4321 A Unix group by the name of the identifier will be created This name has to be used for all job submissions as well as it must be provided to all tools for group management and accounting Lead of a project and the technical contact person if specified in the proposal have been granted the ability to administer the corresponding Unix group
…order of libraries: if a library xxx uses symbols from the library yyy, the library yyy has to be to the right of xxx on the command line, e.g. ld … -lxxx -lyyy. The search path for header files is extended with the -I<directory> option, and the library search path with the -L<directory> option. The environment variable LD_LIBRARY_PATH specifies the search path where the program loader looks for shared libraries. Some compile-time linkers, e.g. the Oracle linker, also use this variable while linking, but the GNU linker does not. A short sketch is given after the book list below.

Consider static linking of libraries. This will generate a larger executable, which is however a lot more portable. Especially on Linux, static linking may be a good idea, since every distribution has slightly different library versions which may not be compatible with each other.

(Footnote 55: If linked with this option, the binary knows at runtime where its libraries are located and is thus independent of which modules are loaded at runtime.)

5.3 Tuning Hints

There are some excellent books covering application tuning topics:

• G. Hager and G. Wellein: Introduction to High Performance Computing for Scientists and Engineers. CRC Computational Science Series, 2010, ISBN 978-1-4398-1192-4
• J. Hennessy and D. Patterson: Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers / Elsevier, 2011, ISBN 978-0123838728
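A minimal link-order sketch (library and path names are hypothetical):

$CC -I$HOME/sw/include -c prog.c
$CC prog.o -L$HOME/sw/lib -lxxx -lyyy -o prog             # libxxx needs libyyy, so -lyyy comes last
export LD_LIBRARY_PATH=$HOME/sw/lib:$LD_LIBRARY_PATH      # let the loader find the shared libraries
./prog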
181. rement does not include per function perturbation e Number of times a function region was executed e Aggregated counter values for each function region MPLrrelated metrics Total CPU allocation time e Time spent in pre instrumented MPI functions e Time spent in MPI communication calls subdivided into collective and point to point e Time spent in calls to MPI_ Barrier e Time spent in MPI I O functions e Time spent in MPI_ Init and MPI_ Finalize OpenMP related metrics Total CPU allocation time e Time spent for OpenMP related tasks e Time spent for synchronizing OpenMP threads e Time spent by master thread to create thread teams e Time spent in OpenMP flush directives e Time spent idle on CPUs reserved for slave threads 102 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Setup Use module load UNITE module load scalasca to load the current default version of scalasca Instrumentation To perform automatic instrumentation of serial or MPI codes simply put the command for the scalasca wrapper in front of your compiler and linker commands For OpenMP codes the additional flag pomp is necessary For example gcc skin gcc or ifort skin pomp ifort Execution To execute such an instrumented binary prepend scan to your normal launch line This will properly set up the measurement environment and analyze data during the program execution There are two possible modes that you can use with Scalasc
…removing creates load on the backup system. Please use the work or tmp file systems for short-lived files. The HOME data will be backed up at regular intervals.

We offer snapshots of the home directory, so that older versions of accidentally erased or modified files can be accessed without requesting a restore from the backup. The snapshots are located in each directory in the .snapshot/<snapshotname> subdirectory, where the name depends on the snapshot interval rule and is hourly, nightly or weekly, followed by a number; zero is the most recent snapshot, higher numbers are older ones. Alternatively, you can access the snapshot of your home directory with the environment variable $HOME_SNAPSHOT. The date of a snapshot is saved in the access time of these directories and can be shown, for example, with the command ls -ltru.

The work file system is accessible as $WORK (/work/<username>), or as W: on the Windows side, and is intended for medium-term data like intermediate compute results, and especially for sharing data with the Windows part of the cluster. As long as you do not depend on sharing data between Linux and Windows, you should use the hpcwork directory instead of the work directory.

Note: There is no backup of the $WORK file system. Do not store any non-reproducible or non-recomputable data, like source code or input data, on the work file system.

Note: As long as there is some free volume, we will offer snapshots on the work file system in the same way as they are provided on the home file system.
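A small sketch of looking at snapshots and quotas (the nightly.0 name is a hypothetical example of the naming scheme described above):

quota                                    # space and file-count usage on the file systems
ls $HOME_SNAPSHOT                        # snapshots of the home directory
ls -ltru .snapshot/                      # snapshot subdirectories, sorted by access time
cp .snapshot/nightly.0/deleted_file.c .  # restore an accidentally deleted file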
183. repoint campus rwth aachen de units rz HPC public Shared Documents 2013 08 07_mic_tutorial pdf https sharepoint campus rwth aachen de units rz HPC public Shared Documents 2013 08 07_ex_ phi tar gz The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 23 3 Operating Systems To accommodate our user s needs we are running two different operating systems on the ma chines of the HPC Cluster at the RWTH Aachen University Linux see chapter 3 1 on page 24 and Windows see chapter 3 2 on page 25 The differences between these operating systems are explained in this chapter 3 1 Linux Linux is a UNIX like operating system We are running the 64 bit version of Scientific Linux SL with support for 32 bit binaries on our systems Scientific Linux is a binary compatible clone of RedHat Enterprise Linux The Scientific Linux release is displayed by the command cat etc issue The Linux kernel version can be printed with the command uname r 3 1 1 Processor Binding Note The usage of user defined binding may destroy the performance of other jobs running on the same machine Thus the usage of user defined binding is only allowed in batch mode if cluster nodes are reserved exclusively Feel free to contact us if you need help with binding issues During the runtime of a program it could happen and it is most likely that the scheduler of the operating system decides to move a process or thread from one CPU to another in
184. rmation can be found at http www hdfgroup org HDF5 To initialize the environment use module load LIBRARIES module load hdf5 This will set the environment vari ables HDF5 ROOT FLAGS HDF5 INCLUDE and FLAGS HDF5_ LINKER for compil ing and linking and enhance the environment variables PATH LD LIBRARY PATH FLAGS MATH Example PSRC pex 990 MPIFC FLAGS_MATH_INCLUDE c PSRC psr ex_ds1 f90 SPSRC pex 990 MPIFC FLAGS_MATH_LINKER ex_ds1 o PSRC pex 994 a out 9 10 Boost Lin Boost provides free peer reviewed portable C source libraries that work well with the C Standard Library Boost libraries are intended to be widely useful and usable across a broad spectrum of applications More information can be found at http www boost org To initialize the environment use module load LIBRARIES module load boost This will set the environment vari ables BOOST ROOT FLAGS BOOST INCLUDE and FLAGS BOOST _LINKER for com piling and linking and enhance the environment variables PATH LD_LIBRARY_PATH FLAGS MATH Most Boost libraries are header only they consist entirely of header files containing tem plates and inline functions and require no separately compiled library binaries or special treat ment when linking Example The C interfaces are available for Open MPI only please add lhdf5_ cpp to the link line 110 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 PSRC pex 992 CXX FL
185. roximately 3 of the total amount of memory 42 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Model Architecture Slots Memory Max Mem SMP S BCS Beckton Nehalem EX 128 256 GB 1950 MB SMP L BCS Beckton Nehalem EX 128 1 TB 7550 MB SMP XL BCS Beckton Nehalem EX 128 2 TB 15150 MB Table 4 12 Available BCS nodes MPI Binding Script Especially for big SMP machines like the BCS nodes the binding of the MPI processes and the threads e g hybrid codes is very important for the performance of an application To overcome the lack of functionality in the vendor MPIs and for convenience we provide a binding script in our environment The script is not designed to get the optimal distribution in every situation but it covers all usual case e g one process per socket The script makes the following assumptions e It is executed within a batch job some LSF environment variable are needed e The job reserved the node s exclusively e The job does not overload the nodes e The OMP_NUM_ THREADS variable is set correctly e g for hybrid jobs To use this script set mpi_ bind between the mpiexec command and your application a out MPIEXEC FLAGS_MPI_BATCH mpi_bind a out Note that the threads are not pinned at the moment If you want to pin them as well you can use the vendor specific environment variables Vendor Environment Variable
186. rting Stopping and Restarting your Program 114 ALG Printing a Variable 4 4 6 4 8 be Ree Ee ae aa 114 A 1 7 Action Points Breakpoints Evaluation Points Watchpoints 115 ALS Memory Debugdig e s 0 0 605 586 deena ee bee eae a 115 FLO Replay Engine 22 secs sce osse ee ee ee e oe eS 116 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 A 1 10 Offline Debugging TVScript 2 224 116 A 2 Debugging Parallel Programs e 117 A 2 1 Some General Hints for Parallel Debugging 117 A 2 2 Debugging MPI Programs lt o o ee ee 117 A 2 3 Debugging OpenMP Programs 02 0 00 119 B Beginner s Introduction to the Linux HPC Cluster 121 Bol DOO nwa osme a tee ee ogee eee heed eee othe eh i 121 B 2 The Example Collection e 4 2 2540 ee HEM ERE EE Re pai naa 121 B 3 Compilation Modules and Testing 2 0000 122 B 4 Computation im batch mode cok kee ewe ee ee 124 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 7 1 Introduction The Center for Computing and Communication of the RWTH Aachen University Rechen und Kommunikationszentrum RZ der Rheinisch Westf lischen Technischen Hochschule RWTH Aachen has been operating a UNIX cluster since 1994 and supporting Linux since 2004 and Windows since 2005 Today most of the cluster nodes run Linux while Windows becomes increasingly popular The cluster is o
187. s are replicated and other parts usually the compu tational pipelines are shared between threads These threads run different instruction streams in pseudo parallel mode The performance gained by this approach depends much on hardware and software Processor cores not supporting hardware threads can be viewed as having only one thread From the operating system s point of view every hardware thread is a logical processor For instance a computer with 8 sockets having installed dual core processors with 2 hardware threads per core would appear as a 32 processor 32 way system As it would be tedious to write logical processor or logical CPU every time when referring to what the operating system sees as a processor we will abbreviate that Anyway from the operating system s or software s point of view it does not make a difference whether a multicore or multisocket system is installed 2 1 1 Non Uniform Memory Architecture For performance considerations the architecture of the computer is crucial especially regarding memory connections All of today s modern multiprocessors have a non uniform memory access NUMA architecture parts of the main memory are directly attached to the processors Today all common NUMA computers are actually cache coherent NUMA or ccNUMA ones There is special purpose hardware or operating system software to maintain the cache coherence Thus the terms NUMA and ccNUMA are very often us
188. s well as OpenMP events Additionally certain program specific events and data from hardware event counters can also be measured Vampir is designed to help you to find performance bottlenecks in your application Such bottlenecks originate from computation communication memory and I O aspects of your The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 99 application in conjunction with the hardware setup Note Measurement may significantly disturb the runtime behavior of your application Possible bottlenecks identifiable through the use of VampirTrace are e Unbalanced computation e Strictly serial parts of your program e Very frequent tiny function calls e Sparse loops e Communication dominating over computation e Late sender late receiver e Point to point messages instead of collective communication e Unmatched messages e Overcharge of MPI s buffers e Bursts of large messages e Frequent short messages e Unnecessary synchronization e Memory bound computation detectable via hardware event counters e O bound computation slow input output sequential I O on single process I O load imbalance Be aware that tracing can cause substantial additional overhead and may produce lots of data which will ultimately perturb your application runtime behavior during measurement To be able to spot potential bottlenecks the traces created with VampirTrace are visual ized with either Vampir or VampirServer These GUIs offer
…slots on the same host. Furthermore, it will set the OMP_NUM_THREADS environment variable for OpenMP jobs to the number of threads you specified with -n (see the example in listing 7 on page 47).

MPI Parallelization

In order to start an MPI program you have to tell LSF how many processes you need and possibly how they should be distributed over the hosts. Additionally, you have to specify which MPI you want to use with the option

#BSUB -a openmpi        (or: -a intelmpi)

in your job file. Do not forget to switch the module if you do not use the default MPI (see listing 9 on page 49). To call the a.out MPI binary, use in your submit script the line

$MPIEXEC $FLAGS_MPI_BATCH a.out

The batch system sets these environment variables according to your request and the MPI used. You can call the MPI program multiple times per batch job, however this is not recommended. A sketch of a complete MPI job script is given below.

Note: Only one MPI library implementation per batch job is supported, so you have to submit separate jobs for e.g. Open MPI and Intel MPI programs.

Note: Using fewer processes than the specified number is currently not supported. Submit a separate batch job for each number of MPI processes you want your program to run with.

Example MPI jobs can be found in listings 8 on page 48 and 9 on page 49.

Open MPI: Open MPI is loaded by default. It is tightly integrated with LSF, which means that Open MPI and LSF communicate directly. Thus the FLAGS…
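A compact sketch of such an Open MPI job (a trimmed-down variant of the full listing 8; names and sizes are examples):

#!/usr/bin/env zsh
### Job name
#BSUB -J mpi_test
### Wall-clock limit (1 hour 20 minutes)
#BSUB -W 1:20
### Virtual memory per process in MB
#BSUB -M 1024
### Number of MPI processes
#BSUB -n 16
### Use esub for Open MPI
#BSUB -a openmpi

### optionally switch to a non-default MPI version
# module switch openmpi openmpi/1.6.4mt

cd $WORK/mpi_test
$MPIEXEC $FLAGS_MPI_BATCH a.out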
…sors, denoted by logical IDs. The value specified for SUNW_MP_PROCBIND can be one of the following:

• the string true or false;
• a list of one or more non-negative integers, separated by one or more spaces;
• two non-negative integers n1 and n2, separated by a minus sign (n1 must be less than or equal to n2), meaning all IDs from n1 to n2.

Logical IDs are consecutive integers that start with 0. If the number of virtual processors available in the system is n, then their logical IDs are 0, 1, …, n-1. A short usage sketch follows below.

Note: The thread binding with SUNW_MP_PROCBIND currently does not take binding by the operating system (e.g. by taskset) into account. This may lead to unexpected behavior or errors if both ways of binding the threads are used simultaneously.

6.1.4.2 Automatic Scoping

The Oracle compiler offers a highly interesting feature, which is not part of the current OpenMP specification, called automatic scoping. If the programmer adds one of the clauses default(__auto) or __auto(list-of-variables) to the OpenMP parallel directive, the compiler will perform the data dependency analysis and determine what the scope of all the variables should be, based on a set of scoping rules. The programmer no longer has to declare the scope of all the variables (private, firstprivate, lastprivate, reduction or shared) explicitly, which in many cases is tedious and error-prone work. In case the compiler is not able…
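A usage sketch for the binding variable described above (the CPU IDs are examples; remember that user-defined binding is only allowed in batch mode on exclusively reserved nodes):

export OMP_NUM_THREADS=4
export SUNW_MP_PROCBIND="0 2 4 6"    # bind the threads to logical CPUs 0, 2, 4 and 6
export SUNW_MP_PROCBIND=0-3          # or: a contiguous range of logical IDs
./a.out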
…sponding row every time one run of the job is started, e.g. like so:

INPUTLINE=$(awk "NR==$LSB_JOBINDEX" input.txt)
echo $INPUTLINE
a.out -input $INPUTLINE

Note: Multiple jobs of the same array job can start and run at the same time; the number of concurrently running array jobs can be restricted. Of the following array job with 100 elements, only 10 would run concurrently:

bsub -J "myArray[1-100]%10" echo "Job $LSB_JOBINDEX"

Environment variables available in array jobs are denoted in table 4.10 on page 40.

Environment Variable    Description
LSB_JOBINDEX_STEP       Step at which single elements of the job array are defined
LSB_JOBINDEX            Contains the job array index
LSB_JOBINDEX_END        Contains the maximum value of the job array index

Table 4.10: Environment variables in Array Jobs

More details on array jobs can be found in the wiki.

Chain Jobs

It is highly recommended to divide long-running computations (several days) into smaller parts: this minimizes the risk of losing computations and reduces the pending time. Such partial computations form a chain of batch jobs in which every successor waits until its predecessor has finished. There are multiple ways to define chain jobs:

• A chain job can be created by submitting an array job with up to 1000 elements and limiting the number of concurrently running subjobs to 1. Example with 4 subjobs:

#BSUB -J "ChainJob[1-4:1]%1"

Note: The order of the subtasks is not guarante…
192. storage system Values larger than 64 MiB have shown almost no throughput benefit in our tests 4 3 2 5 Caveats The availability of our Lustre setup is specified as 95 which amounts to 1 2 days of expected downtime per month Lustre s weak point is its MDS metadata server all file operations also touch the MDS for updates to a file s metadata Large numbers of concurrent file operations e g a parallel make of the Linux kernel have reliably resulted in slow down of our Lustre setup 4 4 Defaults of the RWTH User Environment Lin The default login shell is the Z zsh shell Its prompt is symbolized by the dollar sign With the special dot command a shell script is executed as part of the current process sourced Thus changes made to the variables from within this script affect the current shell which is the main purpose of initialization scripts PSRC pex 440 For most shells e g bourne shell you can also use the source command source PSRC pex 440 Environment variables are set with export VARIABLE value This corresponds to the C shell command the C shell prompt is indicated with a symbol setenv VARIABLE value If you prefer to use a different shell keep in mind to source initialization scripts before you change to your preferred shell or inside of it otherwise they will run after the shell exits intit_script exec tcsh If you prefer using a different shell e g bash as
193. t jobs using the bsub option P lt project group gt The submission process will check the membership and will conduct additional settings for the job Advanced Reservation An advanced reservation reserves job slots for a specified period of time By default the user can not do this by his own In case such an advanced reservation was made for you use the reservation ticket with U lt reservation_ID gt submit option The command brsvs displays all advanced reservations Overloading Systems Oversubscription of the slot definition e g usage of hyperthread ing is currently not supported by LSF However for shared memory and hybrid jobs the num ber of threads can be adjusted by setting the OMP_ NUM THREADS environment variable manually Do not forget to request the nodes for exclusive usage to prevent disturbance by other jobs possibly running on the same node if you wish to experiment with overloading Binding and pinning The Platform LSF built in capabilities for hardware affinity are currently not used in our environment Feel free to bind pin the processes and threads using e g the taskset command or compiler specific options However if you want to use some affinity options in your batch job request the nodes for exclusive usage to prevent disturbance by other jobs possibly running on the same node For an easy vendor independed MPI binding you can use our mpi_ bind script see chapter 4 5 1 on page 43 Big SMP BCS systems The
…t programming models can be used: most programs can run natively on the coprocessor, parallel regions of the code can be offloaded using the Intel Language Extension for Offload (LEO), and Intel MPI can be used to send messages between the hosts and the coprocessors.

2.5.3.1 Native Execution

Cross-compiled programs using OpenMP, Intel Threading Building Blocks (TBB) or Intel Cilk Plus can run natively on the coprocessor. To prepare the application, the Intel compiler on the host must be instructed to cross-compile the application for the coprocessor, e.g. by adding the -mmic switch to your makefile. Now you can log in to the coprocessor and start the program in the normal way, e.g.

ssh cluster-phi-mic<N>
cd <path/to/dir>
./a.out

The LD_LIBRARY_PATH and the PATH environment variables will be set automatically.

2.5.3.2 Language Extension for Offload (LEO)

The Intel Language Extension for Offload offers a set of pragmas and keywords to tag code regions for execution on the coprocessor. Programmers have additional control over data transfer via clauses that can be added to the offload pragmas. One advantage of the LEO model compared to other offload programming models is that the offloaded region may contain arbitrary code and is not restricted to certain types of constructs: the code may contain any number of function calls, and it can use any supported parallel programming model, e.g. OpenMP, Fortran do concurrent, POSIX Threads, Int…
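A minimal C sketch of such an offload region (the array size, names and compile line are examples; the in()/out() clauses shown are the LEO data-transfer clauses, and the offloaded loop itself uses OpenMP):

/* leo_sketch.c -- compile on the host with the Intel compiler, e.g. icc -openmp leo_sketch.c */
#include <stdio.h>

#define N 4096

int main(void) {
    float a[N], b[N], c[N];
    int i;
    for (i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Run the loop on the coprocessor; in()/out() control the data transfer. */
    #pragma offload target(mic) in(a, b) out(c)
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[10] = %f\n", c[10]);
    return 0;
}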
…tain parts of the instruction set which are not available on all processors (refer to chapter 4.4.2 on page 34). Hence -mtune is the less aggressive option, and you might consider switching to -march if you know what you are doing.

Other options which might be of particular interest to you are:

• -fopenmp: Enables OpenMP support (GCC 4.2 and newer versions). Please refer to Section 6.1 on page 76 for information about OpenMP parallelization.
• -ftree-parallelize-loops=N: Turns on auto-parallelization and generates an executable with N parallel threads (GCC 4.3 and newer versions). Please refer to Section 6.1 on page 76 for information about auto-parallelizing serial code.

5.7.2 Debugging

The GNU compiler offers several options to help you find problems with your code, e.g.:

• -g: Puts debugging information into the object code. This option is necessary if you want to debug the executable with a debugger at the source-code level (cf. Chapter 7 on page 88).
• -Wall: Turns on lots of compiler warning messages. Despite its name, this flag does not enable all possible warning messages, because there is -Wextra, which turns on additional ones.
• -Werror: Treats warnings as errors, i.e. stops the compilation process instead of just printing a message and continuing.
• -O0: Disables any optimization. This option speeds up compilation during the development/debug…
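Typical invocations combining these options might look as follows (file names are examples):

gcc -O2 -mtune=native prog.c -o prog                    # tune for this CPU, keep a generic instruction set
gcc -O2 -march=native prog.c -o prog                    # may also use CPU-specific instruction-set extensions
gcc -O0 -g -Wall -Wextra prog.c -o prog_dbg             # debug build with extensive warnings
gcc -O2 -fopenmp omp_prog.c -o omp_prog                 # OpenMP program
gcc -O2 -ftree-parallelize-loops=4 loops.c -o autopar   # auto-parallelize with 4 threads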
196. tainig all the needed switches and source it once at the beginning of any interactive session or batch job 4 5 The RWTH Batch Job Administration A batch system controls the distribution of tasks also called batch jobs to the available ma chines and the allocation of other resources which are needed for program execution It ensures that the machines are not overloaded as this would negatively impact system performance If the requested resources cannot be allocated at the time the user submits the job to the system the batch job is queued and will be executed as soon as resources become available Please use the batch system for jobs running longer than 15 minutes or requiring many resources in order to reduce load on the frontend machines 4 5 1 The Workload Management System LSF Lin Batch jobs on our Linux systems are handled by the workload management system IBM Plat form LSF http www 03 ibm com systems technicalcomputing platformcomputing products Isf index html The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 35 Note All information in this chapter may be subject to change since we are collecting further experiences with LSF in production mode For latest info take a look at this wiki https wiki2 rz rwth aachen de display bedoku Workload Management System LSF Job Submission For job submission you can use the bsub command bsub options command arguments We advise to use a batch script within
197. the Intel Parallel Insepctor is integrated into Visual Studio You can either choose it by starting Start Programs ntel Parallel Studio gt Intel Parallel Studio with VS 2008 or directly run Visual Studio and select the Start Inspector Analysis button from the tool bar More information will be provided in a future release of this User s Guide Or see http software intel com en us articles intel inspector xe documentation 92 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 8 Performance Runtime Analysis Tools This chapter describes tools that are available to help you assess the performance of your code identify potential performance problems and locate the part of the code where most of the execution time is spent Runtime analysis is no trivial matter and cannot be sufficiently explained in the scope of this document An introduction to some of the tools described in this chapter will be given at workshops in Aachen and other sites in regular intervals If you need help using these tools or if you need assistance when tuning your code please contact the HPC group via the Service Desk servicedesk rz rwth aachen de The following chart provides an overview of the available tools and their field of use MPI Analysis Oracle Performance Analyzer Intel Amplifier XE VTune Cache and Memory Analysis Call Graph Based Analysis mr OpenMP and Threading Analysis
198. the memory debugging capabilities of TotalView More information about TVScript can be found in Chapter 4 of the Reference Guide Example Compile and run a Fortran program print the current stack backtrace into the log file on the begining of subroutines t1 and t2 PSRC pex al5 FC g PSRC psr TVScript_tst f90 tvscript create_actionpoint tl1 gt display_backtrace create_actionpoint t2 gt display_backtrace a out MPI Programs also can be debugged with tvscript Each process is debugged independently but the whole output is written to the same log files However the records are still distin guishable because the MPI rank is noted as well Note that for each MPI process a license token is consumed so the number of debuggable processes is limited Optional parameters to underlying mpiexec of the MPI library can be provided with the starter_ args option http www roguewave com products totalview family replayengine overview features aspx http www roguewave com support product documentation totalview family aspx 116 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 If using tvscript in the batch you must provide both the number of processes to start and FLAGS_MPI_ BATCH environment variable containing the host file Example runs also interactively Launch a out with 2 processes using Open MPI with aim to prints the value of variables my _MPI_ Rank and
199. tion xopenmp This option may be used together with automatic parallelization enabled by xautopar but loops within OpenMP parallel regions are no longer subject to autoparal lelization The xopenmp option is used as an abbreviation for a multi tude of options the FORTRAN 95 compiler for example expands it to mp openmp explicitpar stackvar D OPENMP O3 Please note that all lo cal data of subroutines called from within parallel regions is put onto the stack A subroutine s stack frame is destroyed upon exit from the routine Therefore local data is not preserved from one call to the next As a consequence FORTRAN programs must be compiled with the stackvar option The behavior of unused worker threads between parallel regions can be controlled with the environment variable SUNW_MP_THR_IDLE The possible values are spin sleep ns nms The worker threads wait either actively busy waiting and thereby consume CPU time or passively idle waiting and must then be woken up by the system or in a combination of these methods they actively wait spin and are put to sleep n seconds or milliseconds later With fine grained parallelization active waiting and with coarse grained parallelization pas sive waiting is recommended Idle waiting might be advantageous on an over loaded system Note The Oracle compilers default behavior is to put idle threads to sleep after a certain time out Those users that prefer the old behavior be
200. tion crashed 10 e g OpenMP and Fortran can consume a lot of stack X Request node s exclusive please do not use without good OFF reasons especially do not use for serial jobs Table 4 7 Job resources options memory up to another gigabytes In order to use all slots of a machine you should order less memory per process than the naive calculation returns of course only if your job can run with this memory limit at all Special Resources If you want to submit a job to a specific machine type or a predefined host group you can use the option m lt hostgroup gt The values for lt hostgroup gt can be the host groups you get with the bhosts command A range of recommended host groups are denoted in the table 4 8 on page 37 Host Group Architecture Slots Memory Max Mem mpi s Westmere EP 12 24 GB 1850 MB mpi l Westmere EP 12 96 GB 7850 MB Table 4 8 Recommended host groups More information about the hardware can be found in the chapter 2 2 on page 13 Compute Units To ensure MPI jobs run on nodes directly connected through a high speed network so called Compute Units are used The selection of such a compute unit is done automatically for you when an MPI job is submitted We have defined several compute unit types see table 4 9 on page 37 Compute example meaning Unit name chassis C lt number gt up to eighteen of the mpi s a
201. toparallelization All compilers installed on our HPC Cluster can parallelize programs more precisely loops automatically at least in newer versions This means that upon request they try to transform portions of serial FORTRAN C C code into a multithreaded program Success or failure of autoparallelization depends on the compiler s ability to determine if it is safe to parallelize a nested loop This often depends on the area of the application e g finite differences versus finite elements programming language pointers and function calls may make the analysis difficult and coding style The flags to turn this feature on differ among the various compilers Please refer to the sub sequent sections for compiler specific information The environment variable FLAGS _AUTOPAR offers a portable way to enable autoparallelization at compile link time For the Intel Ora cle and PGI compilers the number of parallel threads to start at runtime may be set via OMP_NUM_ THREADS just like for an OpenMP program Only with the GNU compiler the number of threads is fixed at compile link time Usually some manual code changes are necessary to help the compiler to parallelize your serial loops These changes should be guided by compiler feedback increasing the compiler s verbosity level therefore is recommended when using autoparallelization The compiler options to do this as well as the feedback messages themselves are compiler specific so again
202. tp www roguewave com support product documentation totalview family aspx A 1 Debugging Serial Programs A 1 1 Some General Hints for Using TotalView e Click your middle mouse button to dive on things in order to get more information e Return undive by clicking on the undive button if available or by View Undive e You can change all highlighted values Press F2 e If at any time the source pane of the process window shows disassembled machine code the program was stopped in some internal routine Select the first user routine in the Stack Trace Pane in order to see where this internal routine was invoked A 1 2 Compiling and Linking Before debugging compile your program with the option g and without any optimization A 1 3 Starting TotalView You can debug your program 1 either by starting TotalView with your program as a parameter SPSRC pex al0 totalview a out a options 2 or by starting your program first and then attaching TotalView to it In this case start totalview which first opens its New Program dialog This dialog allows you to choose the program you want to debug 3 You can also analyze the core dump after your program crashed by totalview a out core Start Parameters runtime arguments environment variables standard IO can be set in the Process Startup Parameters menu After starting your program TotalView opens the Process Window It consists of e the Source Pane display
203. tsu Siemens Intel Xeon X7350 4 16 64 GB cluster2 RX60084 X Tigerton 2 93 GHz 187 5 GFlops cluster x2 2 nodes Fujitsu Siemens Intel Xeon E5450 2 8 16 32 GB cluster linux xeon RX20084 X Harpertown 3 0 GHz 96 GFlops winhtc04 62 60 nodes IBM eSever AMD Opteron 8356 4 16 32 GB linuxbc01 03 LS42 3 nodes Barcelona 2 3 GHz 147 2 Gflops Table 2 3 Node overview hosted systems are not included 14 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 e Level 1 on chip 32 KB data cache 32 KB instruction cache 8 way associative e Level 2 on chip 256 KB cache for data and instructions 8 way associative e Level 3 on chip 8 MB cache for data and instructions shared between all cores 16 way associative The cores have a nominal clock speed of 2 93 GHz 2 3 2 The Xeon X7550 Beckton Nehalem EX Processor Intel s Xeon X7550 Processors codename Beckton formerly also Nehalem EX have eight cores per chip Each core is able to run two hyperthreads simultaneously Each of these cores has two levels of cache per core and one level 3 cache shared between all cores e Level 1 on chip 32 KB data cache 32 KB instruction cache 8 way associative e Level 2 on chip 256 KB cache for data and instructions 8 way associative e Level 3 on chip 18 MB cache for data and instructions shared between all cores 16 way associative The cores have
204. uctions shared between all cores 16 way associative using Intel Turbo Boost up to 2 8 GHz http www intel com content www us en architecture and technology turbo boost turbo boost technology html The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 15 2 3 5 Memory Each processor package Intel just calls it processor has its own memory controller and is connected to a local part of the main memory The processors can access the remote memory via Intel s new interconnect called Quick Path Interconnect So these machines are the first Intel processor based machines that build a ccNUMA architecture On ccNUMA computers processor binding and memory placement are important to reach the whole available performance see chapter 2 1 1 on page 12 for details The machines are equipped with DDR3 RAM please refer to table 2 3 on page 14 for details The total memory bandwidth is about 37 GB s 2 3 6 Network The nodes are connected via Gigabit Ethernet and also via quad data rate QDR InfiniBand This QDR InfiniBand achieves an MPI bandwidth of 2 8 GB s and has a latency of only 2 ps 2 3 7 Big SMP BCS systems The nodes in the SMP complex are now coupled to big shared memory systems with the pro prietary BCS Bull Coherent Switch chips This means that 2 or 4 physical nodes boards form a 8 socket or rather a 16 socket systems with up to 128 cores detailed specification of these Bullx 56010 nodes can be found i
205. ucts http www rz rwth aachen de go id ond 8 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 Frontend name OS cluster rz RWTH Aachen DE Linux cluster2 rz RWTH Aachen DE cluster linux rz RWTH Aachen DE cluster x rz RWTH Aachen DE Linux for graphical login cluster x2 rz RWTH Aachen DE X Win32 NX software cluster copy rz RWTH Aachen DE Linux for data transfers cluster copy2 rz RWTH Aachen DE cluster linux nehalem rz RWTH Aachen DE Linux Gainestown cluster linux opteron rz RWTH Aachen DE Linux Barcelona cluster linux xeon rz RWTH Aachen DE Linux Harpertown cluster windows rz RWTH Aachen DE Windows Table 1 1 Frontend nodes e Oracle Solaris Studio F95 C C e MS Visual Studio C Win e GNU F95 C C n e PGI F95 C C For Message Passing MPI one of the following implementations can be used e Open MPI e Intel MPI Win e Microsoft MPIW Table 1 2 on page 10 gives an overview of the available debugging and analyzing tuning tools 1 3 Examples To demonstrate the various topics explained in this user s guide we offer a collection of example programs and scripts The example scripts demonstrate the use of many tools and commands Command lines for which an example script is available have the following notation in this document PSRC pex 100 echo Hello World You can either run the script PSRC pex 100 to
206. ucts documentation hpc mkl mkl_userguide_Inx index htm 89 However if you want to use an alternative version of MKL with a given Intel compiler you have to initialize the environment of this MKL version after the compiler Also note that you have to use the FLAGS MKL_ INCLUDE and FLAGS MKL LINKER environment variables instead of FLAGS MATH _ ones because the latter ones will contain flags for both the included and the loaded version of MKL which cannot turn out well 106 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 SPSRC pex 920 CC FLAGS_MATH_INCLUDE FLAGS_MATH_LINKER PSRC psr useblas c The number of threads used by the parallel Oracle Performance Library can also be controlled by a call to its use_ threads n function which overrides the OMP_NUM_THREADS value Nested parallelism is not supported Oracle Performance Library calls made from a parallel region will not be further parallelized 9 5 ACML AMD Core Math Library Lin The AMD Core Math Library ACML incorporates BLAS LAPACK and FFT routines that are designed for performance on AMD platforms but the ACML works on Intel processors as well There are OpenMP parallelized versions of this library are recognizable by an _ mt appended to the version string If you use the OpenMP version don t forget to use the OpenMP flags of the compiler while linking To initialize the environment use module load LIBRARIES module load acml This will set the
207. ule is loaded with module load totalview 7 3 2 Oracle Solaris Studio Lin Oracle Solaris Studio includes a complete Integrated Development Environment IDE which also contains a full screen debugger for serial and multi threaded programs Furthermore it provides a standalone debugger named dbx that can also be used by its GUI dbxtool In order to start a debugging session you can attach to a running program with module load studio dbxtool pid or analyze a core dump with S0Etnus was renamed to TotalView Technologies which now belongs to Rogue Wave Software http www roguewave com 81 see chapter 4 4 2 on page 34 90 The RWTH HPC Cluster User s Guide Version 8 2 6 August 2013 dbxtool corefile if you know the name of your executable you can also use this name instead of the dash or start the program under the control of the debugger with PSRC pex 730 dbxtool a out 7 3 3 gdb Lin Win gdb is a powerful command line oriented debugger The corresponding manual pages as well as online manuals are available for further information 7 3 4 pgdbg Lin pgdbg is a debugger with a GUI for debugging serial and parallel multithreaded OpenMP and MPI programs compiled with the PGI compilers To use it first load the PGI module and then run the debugger module load pgi pgdbg 7 3 5 Allinea ddt Lin Allinea ddt Distributed Debugging Tool is a debugger with a GUI for serial and parallel
... The amount of floating-point operations divided by the runtime in seconds gives the FLOPS rate.

-h cycles,on,dtlbm,on               Cycle count and data translation look-aside buffer (DTLB) misses.
                                    A high rate of DTLB misses indicates an unpleasant memory access
                                    pattern of the program; large pages might help.
-h llc_reference,on,llc_misses,on   Last-level cache references and misses.
-h l2_ld,on,l2_lines_in,on          L2 cache references and misses.
-h l1i_reads,on,l1i_misses,on       L1 instruction cache references and misses.

Table 8.27: Hardware counters available for profiling with collect on Intel Harpertown, Tigerton and Dunnington CPUs

These profiles can be viewed separately or all together, giving an overview of the whole application run. We found out that all processes must run on localhost in order to get the profiled data. Example: run 2 MPI processes on localhost with 2 threads each and look for the instructions and cycles hardware counters:

$PSRC/pex/813  OMP_NUM_THREADS=2 mpiexec -np 2 -H $(hostname) collect -h cycles,on,insts,on a.out; analyzer test.*.er

Wrap the mpiexec: use collect for MPI profiling to manage the collection of the data from the constituent MPI processes, collect MPI trace data, and organize the data into a single founder experiment with subexperiments for each MPI process:

collect <opt> -M <MPI> mpiexec <opt> a.out <opt>

MPI profiling is based on the open-source VampirTrace 5.5.3 release.
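A concrete invocation following this pattern might look as follows (a hedged sketch: the experiment name and process count are arbitrary, and -M OPENMPI is assumed to be the identifier matching the installed Open MPI; run collect without arguments to list the supported values on your system):

# collect an MPI trace experiment with 4 processes into mpi_test.er
collect -M OPENMPI -o mpi_test.er mpiexec -np 4 -- a.out
# browse the founder experiment and its per-process subexperiments
analyzer mpi_test.er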
Contiguous memory access is crucial for reducing cache and TLB misses. This has a direct impact on the addressing of multidimensional fields or structures: FORTRAN arrays should therefore be accessed by varying the leftmost index most quickly, and C and C++ arrays by varying the rightmost index. When using structures, all structure components should be processed in quick succession. This can frequently be achieved with loop interchange (a short C sketch is given below).

The limited memory bandwidth of processors can be a severe bottleneck for scientific applications. With prefetching, data can be loaded prior to its usage; this helps to reduce the gap between the processor speed and the time it takes to fetch data from memory. Such a prefetch mechanism can be supported automatically by hardware and software, but also by explicitly adding prefetch directives (FORTRAN) or function calls (C and C++).

The re-use of cache contents is very important in order to reduce the number of memory accesses. If possible, blocked algorithms should be used, perhaps from one of the optimized numerical libraries described in chapter 9 on page 105. The cache behavior of programs can frequently be improved by loop fission (loop splitting), loop fusion, loop collapsing, loop unrolling, loop blocking, strip mining, and combinations of these methods. Conflicts caused by the mapping of storage addresses to the same cache addresses (false sharing) can be eased by the creation of buffer areas (padding).
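To illustrate the point about index ordering (an illustrative sketch, not taken from the guide): in C the rightmost index should vary fastest, so the second loop nest below is the cache-friendly one.

#include <stdio.h>
#define N 1024

static double a[N][N];

int main(void)
{
    /* strided access: the leftmost index varies fastest, so consecutive
       iterations touch addresses that are N*sizeof(double) bytes apart */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 1.0;

    /* contiguous access after loop interchange: the rightmost index
       varies fastest, matching C's row-major storage order */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0;

    printf("%f\n", a[N - 1][N - 1]);
    return 0;
}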
Today's processor packages contain multiple cores and may support multiple threads simultaneously in hardware. It is not clear which of those should be called a "processor", and everybody has another opinion on that. Therefore, we try to avoid the term processor for hardware and will use the following, more specific terms.

A processor socket is the foundation on the main board where a processor package, as delivered by the manufacturer, is installed. An 8-socket system, for example, contains up to 8 processor packages. All the logic inside a processor package shares the connection to main memory (RAM).

A processor chip is one piece of silicon containing one or more processor cores. Although typically only one chip is placed on a socket (processor package), it is possible that there is more than one chip in a processor package (multi-chip package).

A processor core is a standalone processing unit like the ones formerly known as processor or CPU. One of today's cores contains basically the same logic circuits as a CPU previously did. Because an n-core chip consists, coarsely speaking, of n replicated traditional processors, such a chip is theoretically (memory bandwidth limitations set aside) n times faster than a single-core processor, at least when running a well-scaling parallel program. Several cores inside one chip may share caches or other resources.

A slightly different approach to offering better performance is hardware threads (Intel Hyper-Threading): here, only parts of the circuits are replicated, so the hardware threads of a core share most of its resources.
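To see how these terms map to a concrete Linux node, the standard lscpu utility reports sockets, cores per socket and hardware threads per core (example invocation only; the numbers depend on the node you are logged in to):

# summary of the node's topology: sockets, cores and hardware threads
lscpu | egrep 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core'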
... use Ctrl+Enter.

(Figure: the job submission dialog with the tabs Job Details, Resource Selection and Licenses, and fields for the working directory, the command line, standard input/output/error files and an optional node list.)

It is important that you set a working directory from which the cluster can get the files stated in the command line and where it can put the output and error files. Remember not to use your Windows drives like H:, as the cluster will only know them if you add a net use command or if you use the whole network path. The command

net use h: \\cifs\Cluster\Home\<username>

mounts the HOME directory on Linux as the network drive H:. Similarly, the network drive W: can be mounted explicitly:

net use w: \\cifs\Cluster\Work\<username>

Here <username> denotes the 8-digit login name. In order to access ISV software, which is available in C:\Shared_Software on the interactive frontend machines, the following network path has to be used in a batch job:

\\cifs\Cluster\Software

You can also specify to which nodes you want to submit your job (option "Run this job only on nodes in the following list"), but this is not recommended. When you are done, click on Submit and your job will be queued. With Save Job as... your configuration will be saved to disk; it can later be submitted via Actions -> Job Submission -> Create new Job from Description File.
The examples come in the following parallelization flavors: serial, automatic parallelization or OpenMP (shared memory), and MPI (message passing):

• ser: Serial version, no parallelization. See chapter 5 on page 58.
• aut: Automatic parallelization done by the compiler for shared-memory systems. See chapter 6.1 on page 76.
• omp: Shared-memory parallelization with OpenMP directives. See chapter 6.1 on page 76.
• mpi: Parallelization using the Message Passing Interface (MPI). See chapter 6.2 on page 82.
• hyb: Hybrid parallelization combining MPI and OpenMP. See chapter 6.3 on page 86.

The example directories contain Makefiles for Linux and Visual Studio project files for Windows. Furthermore, there are some more specific examples in project subdirectories like vihps. You have to copy the examples to a writeable directory before using them. On Linux, you can copy an example to your home directory by changing into the example directory, e.g. with cd $PSRC/F/omp/pi, and running gmake cp. After the files have been copied to your home directory, a new shell is started and instructions on how to build the example are given; gmake will invoke the compiler to build the example program and then run it (the complete cycle is sketched below). Additionally, we offer a detailed beginners' introduction for the Linux cluster as an appendix (see chapter B on page 121). It contains a step-by-step description of how to build and run a first program and should be a good starting point in helping you to understand many topics explained in this document. It may also be interesting for advanced Linux users.
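For orientation, the copy-build-run cycle for the OpenMP pi example described above might look like this (a sketch; the exact messages printed by gmake cp will differ):

# change into the read-only example directory and copy it to $HOME
cd $PSRC/F/omp/pi
gmake cp
# a new shell is started in the copied directory; build and run the example there
gmake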
At source-code level it is possible to pass additional information to the C compiler. With the directive #pragma pipeloop(0) in front of a for loop, it can be indicated to the C compiler that there is no data dependency present in the loop. In FORTRAN the syntax is !$PRAGMA PIPELOOP=0.

Attention: these options (-xrestrict and -xalias_level) and the pragma are based on certain assumptions. When using these mechanisms incorrectly, the behavior of the program becomes undefined. Please study the documentation carefully before using these options or directives.

Program kernels with numerous branches can be further optimized with the profile feedback method. This two-step method starts with a compilation using the option -xprofile=collect:a.out added to the regular optimization options. Then the program should be run for one or more data sets; during these runs, runtime characteristics are gathered. Due to the instrumentation inserted by the compiler, the program will most likely run longer. The second phase consists of a recompilation using the runtime statistics (-xprofile=use:a.out). This produces a better-optimized executable, but keep in mind that this is only beneficial for specific scenarios. A command sketch of the two phases is given at the end of this section.

When using the -g option together with optimization, the Oracle compilers introduce comments about loop optimizations into the object files. These comments can be printed with the command

$PSRC/pex/541  er_src serial_pi.o

A comment like "Loop below pipelined with steady-state cycle count ..." may then appear in the listing.
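A hedged sketch of the two profile-feedback phases (kernel.c, the data sets and the -fast optimization level are placeholders for your own build):

# phase 1: build an instrumented binary and gather runtime statistics
cc -fast -xprofile=collect:a.out kernel.c -o a.out
./a.out < dataset1
./a.out < dataset2
# phase 2: recompile, feeding the collected statistics back to the optimizer
cc -fast -xprofile=use:a.out kernel.c -o a.out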
VTune Amplifier XE provides the following analysis types:

• Lightweight Hotspots
• Hotspots
• Concurrency
• Locks and Waits
• Hardware performance counter based analysis

Requirements: On Linux, the first four experiment types can be run on any machine in the HPC Cluster. The hardware counter based analysis requires special permissions. These permissions can only be granted on the cluster-linux-tuning.rz.RWTH-Aachen.DE machine. Therefore, hardware counter based experiments need to be done there, and you need to be added to the vtune group via the Service Desk (servicedesk@rz.rwth-aachen.de). On Windows, all experiments need to be done on cluster-win-tuning.rz.RWTH-Aachen.DE. You need to be added to the g_tuning group via the Service Desk to get access to this node.

Usage: If you plan to use hardware counters on Linux, you need to connect to cluster-linux-tuning.rz.RWTH-Aachen.DE first. Before logging in there with ssh, you need to initialize your Kerberos ticket, or you won't be able to log in. Note: it is not possible to log in to cluster-linux-tuning.rz.RWTH-Aachen.DE or any other non-graphical frontend of the HPC Cluster directly with an X-Win32 or NX software client, but only through one of the graphical cluster-x nodes. If you do not need hardware counters, you can use VTune Amplifier XE on any machine. Load the VTune Amplifier XE module and start the GUI:

module load intelvtune
amplxe-gui

On Windows, log in to cluster-win-tuning.rz.RWTH-Aachen.DE.
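On Linux, experiments can also be driven non-interactively with the amplxe-cl command-line tool (an illustrative sketch; the result directory name is arbitrary, and the analysis types supported by the installed version can be listed with amplxe-cl -help collect):

module load intelvtune
# run a Hotspots analysis on a.out and store the result in the directory r001hs
amplxe-cl -collect hotspots -result-dir r001hs -- a.out
# print a summary of the collected data
amplxe-cl -report summary -result-dir r001hs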
... by swapping the bytes when reading binary files. Below is a C++ example to convert from big to little endian or vice versa. This example can easily be adapted for C; however, one then has to write a function for each data type, since C does not know templates. Note: this only works for basic types like integer or double and not for lists or arrays; in the case of the latter, every element has to be swapped.

Listing 12: $PSRC/pex/542

template <typename T> T swapEndian(T x)
{
    // reinterpret the bytes of x and return them in reversed order
    union { T x; unsigned char b[sizeof(T)]; } dat1, dat2;

    dat1.x = x;
    for (unsigned int i = 0; i < sizeof(T); i++)
        dat2.b[i] = dat1.b[sizeof(T) - 1 - i];
    return dat2.x;
}

5.5 Intel Compilers (Lin, Win)

On Linux, a version of the Intel FORTRAN/C/C++ compilers is loaded into your environment per default. They may be invoked via the environment variables $CC, $CXX, $FC, or directly by the commands icc, icpc, ifort on Linux and icl, ifort on Windows. The corresponding manual pages are available for further information. An overview of all the available compiler options may be obtained with the flag -help. You can check the version which you are currently using with the -v option. Please use the module command to switch to a different compiler version. You can get a list of all the available versions with module avail intel. In general, we recommend using the latest available compiler version to benefit from performance improvements and bug fixes.
... which is free to use. Other software is available under different licenses, for example PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) or SSH Client for Windows (ftp://ftp.cert.dfn.de/pub/tools/net/ssh). The SSH Client for Windows provides a graphical file manager for copying files to and from the cluster as well (see chapter 4.3.1 on page 31); another tool providing such functionality is WinSCP (http://winscp.net/eng/docs/start).

If you log in over a weak network connection, you are welcome to use the screen program, which is a full-screen CLI window manager. Even if the connection breaks down, your session will still be alive and you will be able to reconnect to it after you have logged in again.

4.1.2 Graphical Login

If you need a graphical user interface (GUI), you can use the X Window System. The forwarding of GUI windows using the X Window System is possible when logged in to any Linux frontend (see table 1.1 on page 9). When logging in from Linux or Unix, you usually do not need to install additional packages. Depending on your local configuration, it may be necessary to use the -Y flag of the ssh command to enable the forwarding of graphical programs. On Windows, an X server must be running on your local computer to enable the forwarding of graphical programs; e.g. cygwin (http://www.cygwin.com) contains one. Another X server for Windows is Xming (http://sourceforge.net/projects/xming).
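For example, a graphical login from a Linux or Unix machine could look like this (an illustrative command line, with ab123456 standing in for your own user ID):

# log in to a graphical frontend with X11 forwarding enabled
ssh -Y ab123456@cluster-x.rz.RWTH-Aachen.DE
# quick test that forwarding works: an X client such as xterm should open locally
xterm &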
...workdirectory

### Execute your application
$MPIEXEC $FLAGS_MPI_BATCH a.out

Listing 9: $PSRC/pis/LSF/intelmpi_job.sh

#!/usr/bin/env zsh

### Job name
#BSUB -J IntelMPI64

### File / path where output will be written, the %J is the job id
#BSUB -o IntelMPI64.%J

### off: different file for STDERR, if not to be merged with STDOUT
##BSUB -e IntelMPI64.e%J

### Request the time you need for execution in minutes
### The format for the parameter is: [hour:]minute,
### that means for 80 minutes you could also use this: 1:20
#BSUB -W 1:42

### Request virtual memory you need for your job in MB
#BSUB -M 1024

### off: specify your mail address
##BSUB -u user@rwth-aachen.de

### Send a mail when job is done
#BSUB -N

### Request the number of compute slots you want to use
#BSUB -n 64

### Use esub for Intel MPI
#BSUB -a intelmpi

### Switch to the Intel MPI module
module switch openmpi intelmpi

### Export an environment var
export A_ENV_VAR=10

### Change to the work directory
cd $HOME/workdirectory

### Execute your application
$MPIEXEC $FLAGS_MPI_BATCH a.out
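Once such a script is saved, it is submitted to LSF by feeding it to bsub on standard input (standard LSF usage; the file name is just an example):

# submit the job script to the batch system
bsub < intelmpi_job.sh
# check the state of your jobs
bjobs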
Figure 8.1: The Vampir GUI

Instrumentation: To perform automatic instrumentation of serial or OpenMP codes, simply replace your compiler command with the appropriate VampirTrace wrapper, for example:

CC=vtcc
CXX=vtcxx
FC=vtf90

If your application uses MPI, you have to specify the MPI compiler wrapper for VampirTrace to ensure correct linking of the MPI libraries. For this, the options -vt:cc, -vt:cxx and -vt:f90 are used for C, C++ and FORTRAN, respectively.

Execution: Such an instrumented binary can then be executed as usual and will generate trace data during its execution. There are several environment variables that control the behavior of the measurement facility within the binary. Please refer to the VampirTrace documentation at
http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih/forschung/software_werkzeuge_zur_unterstuetzung_von_programmierung_und_optimierung/vampirtrace/dateien/VT-UserManual-5.14.3.pdf
for more details.

Visualization: To start the analysis of your trace data with the classic Vampir, load the module, then simply type

vampir tracefilename.otf

To analyze with the more advanced and multiprocessing Vampir next generation (VampirServer), the server needs to be started, if not already running, prior to the analysis.
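To make the workflow concrete, a hedged sketch for an MPI code (source file and trace name are placeholders; by default the trace is named after the executable, and -vt:cc points the wrapper at the MPI compiler as described above):

# build with the VampirTrace wrapper, telling it to use the MPI compiler
vtcc -vt:cc mpicc jacobi.c -o jacobi.exe
# run as usual; trace files (*.otf plus per-process data) are written at the end
$MPIEXEC -np 4 jacobi.exe
# open the resulting trace in the classic Vampir GUI
vampir jacobi.exe.otf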
Before you can use this machine, your account needs to be activated. If you are interested in using it, please write a mail to servicedesk@rz.rwth-aachen.de with your user ID and let us know that you want to use the Intel Xeon Phi Cluster.

2.5.2 Interactive Mode

One frontend system can be used interactively. This system should be used for programming, debugging, and the preparation and post-processing of batch jobs; it is not allowed to run production jobs there. Login from Linux is possible with the Secure Shell (ssh), for example:

ssh cluster-phi.rz.rwth-aachen.de

From the frontend you can log in to the coprocessors:

ssh cluster-phi-mic0
ssh cluster-phi-mic1

Please note that the host system cluster-phi is only accessible with an additional hop over one of our normal frontends; the coprocessors are only accessible from the Phi host system. The frontend reboots every night at 4:00 am for setting up new users (http://www.rz.rwth-aachen.de/vr).

Registered users can access their HOME and WORK directories on the coprocessors using the /home/<tim> and /work/<tim> paths, where <tim> denotes the TIM user ID, like ab123456. The local MIC home directory is /michome/<tim>. Due to the fact that programs using the Intel Language Extension for Offload (LEO) are started with a special user ID (micuser), file I/O within an offloaded region is not allowed.

2.5.3 Programming Models

Three different programming models are available.
Raise the core file size limit of your shell if you want to analyze the core file that your program may have left behind:

ulimit -c unlimited

But please do not forget to purge core files afterwards. Note: you can easily find all the core files in your home directory with the following command:

find $HOME -type f -iname core

In general, we recommend using a full-screen debugger like TotalView or Oracle Studio to
• start your application and step through it,
• analyze a core dump of a prior program run,
• attach to a running program.

In some cases, e.g. in batch scripts or when debugging over a slow connection, it might be preferable to use a line-mode debugger like dbx or gdb.

7.3.1 TotalView (Lin)

The state-of-the-art debugger TotalView from Rogue Wave Software can be used to debug serial and parallel FORTRAN, C and C++ programs. You can choose between different versions of TotalView with the module command. From version 8.6 on, TotalView comes with the ReplayEngine. The ReplayEngine allows backward debugging, i.e. reverting computations in the program. This is especially helpful if the program crashed or miscomputed and you want to go back and find the cause. In appendix A on page 113 we include a TotalView Quick Reference Guide. We recommend a careful study of the User Guide and Reference Guide (http://www.roguewave.com/support/product-documentation/totalview.aspx) to find out about all the near-limitless skills of the TotalView debugger. The module is loaded with module load totalview.
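By way of illustration (commands assumed, with a.out as a placeholder executable), the first two debugging modes listed above map to command lines like the following; attaching to a running process is done from within the TotalView GUI:

module load totalview
# start a.out under the control of the debugger
totalview a.out
# inspect a core file left behind by a.out
totalview a.out core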
