2.9.1 Building an MPI Application

To build an MPI application for the host node and the Intel Xeon Phi coprocessor, follow these steps:

1. Establish the environment settings for the compiler and for the Intel MPI Library:
   $ . <installdir>/composerxe/bin/compilervars.sh intel64
   $ . <installdir>/impi/intel64/bin/mpivars.sh
2. Build your application for the Intel Xeon Phi coprocessor:
   $ mpiicc -mmic myprog.c -o myprog.mic
3. Build your application for the Intel 64 architecture:
   $ mpiicc myprog.c -o myprog

2.9.2 Running an MPI Application

To run an MPI application on the host node and the Intel Xeon Phi coprocessor, do the following:

1. Ensure that NFS is properly set up between the hosts and the Intel Xeon Phi coprocessor(s). For information on how to set up NFS on the Intel Xeon Phi coprocessor(s), visit the Intel Xeon Phi coprocessor developer community at http://software.intel.com/en-us/mic-developer.
2. Use the I_MPI_MIC_POSTFIX environment variable to append the .mic postfix extension when running on the Intel Xeon Phi coprocessor:
   $ export I_MPI_MIC_POSTFIX=.mic
3. Make sure your mpi_hosts file contains the machine names of your Intel Xeon host processors and the Intel Xeon Phi coprocessor(s). For example:
   $ cat mpi_hosts
   clusternode1
   clusternode1-mic0
4. Launch the executable file from the host:
   $ export I_MPI_MIC=on
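Taken together, the two subsections above amount to a short interactive session. The following consolidated sketch assumes that the compiler and library environments have already been sourced as in step 1, that the host is named clusternode1 with a single coprocessor clusternode1-mic0, and that the working directory is NFS-shared; adjust the names and paths to your system:

$ mpiicc -mmic myprog.c -o myprog.mic      # coprocessor binary
$ mpiicc myprog.c -o myprog                # host binary
$ export I_MPI_MIC=on
$ export I_MPI_MIC_POSTFIX=.mic
$ cat mpi_hosts
clusternode1
clusternode1-mic0
$ mpirun -n 4 -hostfile mpi_hosts myprog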
2. Create a hostfile text file that lists the nodes in the cluster, using one host name per line.

To compile your MPI program:

1. (SDK only) Make sure you have a compiler in your PATH. To find the path to your compiler, run the which command on the desired compiler. For example:
   $ which icc
   /opt/intel/composerxe-2013/bin/intel64/icc
2. (SDK only) Compile a test program using the appropriate compiler driver. For example:
   $ mpiicc -o myprog <installdir>/test/test.c

To run your MPI program:

1. Use the previously created hostfile and start the mpirun command as follows:
   $ mpirun -n <# of processes> -f ./hostfile ./myprog

See the rest of this document and the Intel® MPI Library Reference Manual for more details.

2.4 Compiling and Linking

(SDK only)

To compile and link an MPI program with the Intel MPI Library:

1. Ensure that the underlying compiler and related software appear in your PATH. If you are using the Intel Composer XE packages, ensure that the compiler library directories appear in the LD_LIBRARY_PATH environment variable. For example, for Intel Composer XE, source the environment variable scripts to configure PATH and LD_LIBRARY_PATH appropriately:
   $ . /opt/intel/composerxe-2013/bin/compilervars.sh intel64
2. Compile your MPI program using the appropriate mpi compiler script. For example, to compile C code using the GNU C compiler, use the following command:
   $ mpicc -o myprog <installdir>/test/test.c
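The test program referenced above prints a greeting from every rank (sample output appears in the process placement section of this guide). If you prefer to start from your own source file, a minimal MPI program along the same lines might look like the following; this is only a sketch, not the actual <installdir>/test/test.c shipped with the product:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime         */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes     */
    MPI_Get_processor_name(name, &namelen);  /* host this rank is running on  */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Compile and run it with the same commands shown above, for example:
$ mpiicc -o myprog hello.c
$ mpirun -n <# of processes> -f ./hostfile ./myprog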
VC-1, MJPEG, AC3, AAC, G.711, G.722, G.722.1, G.722.2, AMRWB, Extended AMRWB (AMRWB+), G.167, G.168, G.169, G.723.1, G.726, G.728, G.729, G.729.1, GSM AMR, and GSM FR are international standards promoted by ISO, IEC, ITU, ETSI, 3GPP, and other organizations. Implementations of these standards, or the standard-enabled platforms, may require licenses from various entities, including Intel Corporation.

BlueMoon, BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, E-GOLD, Flexpipe, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Insider, the Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow, the Intel Sponsors of Tomorrow logo, Intel StrataFlash, Intel vPro, Intel XScale, InTru, the InTru logo, the InTru Inside logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, Puma, skoool, the skoool logo, SMARTi, Sound Mark, Stay With It, The Creators Project, The Journey Inside, Thunderbolt, Ultrabook, vPro Inside, VTune, Xeon, Xeon Inside, X-GOLD, XMM, X-PMU, and XPOSYS are trademarks of Intel Corporation in the U.S. and/or other countries.

Other names and brands may be claimed as the property of others.

Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

Java is a registered trademark of Oracle and/or its affiliates.
For example, to run the application in the PBS environment, follow these steps:

1. Create a PBS launch script that specifies the number of nodes requested and sets your Intel MPI Library environment. For example, create a pbs_run.sh file with the following content:

   #PBS -l nodes=2:ppn=1
   #PBS -l walltime=1:30:00
   #PBS -q workq
   #PBS -V

   # Set Intel MPI environment
   mpi_dir=<installdir>/<arch>/bin
   cd $PBS_O_WORKDIR
   source $mpi_dir/mpivars.sh

   # Launch application
   mpirun -n <# of processes> ./myprog

2. Submit the job using the PBS qsub command:
   $ qsub pbs_run.sh

When using mpirun under a job scheduler, you do not need to determine the number of available nodes. The Intel MPI Library automatically detects the available nodes through the Hydra process manager.

2.8 Controlling MPI Process Placement

The mpirun command controls how the ranks of the processes are allocated to the nodes of the cluster. By default, the mpirun command uses group round-robin assignment, putting consecutive MPI processes on all processor ranks of a node. This placement algorithm may not be the best choice for your application, particularly for clusters with symmetric multi-processor (SMP) nodes.

Suppose that the geometry is <# of ranks> = 4 and <# of nodes> = 2, where adjacent pairs of ranks are assigned to each node (for example, for two-way SMP nodes).
HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES, AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to http://www.intel.com/design/literature.htm

MPEG-1, MPEG-2, MPEG-4, H.261, H.263, H.264, MP3, DV,
$ mpirun -n 4 -hostfile mpi_hosts myprog

NOTE: You can also use the -configfile and -machinefile options.

To run the application on the Intel Xeon Phi coprocessor only, follow the steps described above and ensure that mpi_hosts contains only the Intel Xeon Phi coprocessor name.

See Also

You can get more details in the Intel® Xeon Phi Coprocessor Support topic of the Intel® MPI Library Reference Manual for Linux OS.

You can get more information about using the Intel MPI Library on the Intel Xeon Phi coprocessor at How to run Intel Xeon Phi Coprocessor.

3 Troubleshooting

This section explains how to test the Intel MPI Library installation and how to run a test program.

3.1 Testing the Installation

To ensure that the Intel MPI Library is installed and functioning correctly, complete the general testing below, in addition to compiling and running a test program.

To test the installation on each node of your cluster:

1. Verify that <installdir>/<arch>/bin is in your PATH:
   $ ssh <nodename> which mpirun
   You should see the correct path for each node you test (a small loop that repeats this check over all nodes is sketched below).

   (SDK only) If you use the Intel Composer XE packages, verify that the appropriate directories are included in the PATH and LD_LIBRARY_PATH environment variables:
   $ mpirun -n <# of processes> env | grep PATH
   You should see the correct directories for these path variables for each node you test. If not, call the appropriate compilervars.[c]sh script.
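To repeat the per-node check described above across every machine in your hostfile, a small shell loop is often convenient. This sketch assumes the hostfile is named ./hostfile and that passwordless ssh is already configured, as required earlier in this guide:

$ for node in $(cat ./hostfile); do echo "== $node =="; ssh $node which mpirun; done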
To see the cluster nodes, enter the command:
   $ cat mpi_hosts
The results should look as follows:
   clusternode1
   clusternode2

To equally distribute four processes of the application on two-way SMP clusters, enter the following command:
   $ mpirun -perhost 2 -n 4 ./myprog.exe

The output for the myprog.exe executable file may look as follows:
   Hello world: rank 0 of 4 running on clusternode1
   Hello world: rank 1 of 4 running on clusternode1
   Hello world: rank 2 of 4 running on clusternode2
   Hello world: rank 3 of 4 running on clusternode2

Alternatively, you can explicitly set the number of processes to be executed on each host through the use of argument sets. One common use case is when employing the master-worker model in your application (a master-worker sketch is shown below). For example, the following command equally distributes the four processes on clusternode1 and on clusternode2:
   $ mpirun -n 2 -host clusternode1 ./myprog.exe : -n 2 -host clusternode2 ./myprog.exe

See Also

You can get more details in the Local Options topic of the Intel® MPI Library Reference Manual for Linux OS.

You can get more information about controlling MPI process placement at Controlling Process Placement with the Intel® MPI Library.

2.9 Using the Intel MPI Library on the Intel Xeon Phi Coprocessor

The Intel MPI Library for the Intel Many Integrated Core Architecture (Intel MIC Architecture) supports only the Intel Xeon Phi coprocessor (codename: Knights Corner).
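Returning to the argument sets shown above: in a real master-worker application, the argument sets usually launch different executables as well as different process counts. A sketch, assuming hypothetical binaries ./master and ./worker that were both built against the Intel MPI Library:

$ mpirun -n 1 -host clusternode1 ./master : -n 3 -host clusternode2 ./worker

This starts one master rank on clusternode1 and three worker ranks on clusternode2, all within a single MPI_COMM_WORLD.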
To select a specific fabric combination, set the I_MPI_FABRICS environment variable.

2.6.1 I_MPI_FABRICS

Select a particular network fabric to be used for communication.

Syntax
   I_MPI_FABRICS=<fabric>|<intra-node fabric>:<inter-nodes fabric>
where
   <fabric>             := {shm, dapl, tcp, tmi, ofa}
   <intra-node fabric>  := {shm, dapl, tcp, tmi, ofa}
   <inter-nodes fabric> := {dapl, tcp, tmi, ofa}

Arguments

   <fabric>   Define a network fabric.
   shm        Shared memory.
   dapl       DAPL-capable network fabrics, such as InfiniBand, iWarp, Dolphin, and XPMEM (through DAPL).
   tcp        TCP/IP-capable network fabrics, such as Ethernet and InfiniBand (through IPoIB).
   tmi        Network fabrics with tag matching capabilities through the Tag Matching Interface (TMI), such as Intel True Scale Fabric and Myrinet.
   ofa        Network fabric, such as InfiniBand (through OpenFabrics Enterprise Distribution (OFED) verbs), provided by the Open Fabrics Alliance (OFA).

For example, to select the OFED InfiniBand device, use the following command:
   $ mpirun -n <# of processes> -env I_MPI_FABRICS shm:dapl <executable>

For these devices, if <provider> is not specified, the first DAPL provider in the /etc/dat.conf file is used.

The shm fabric is available for both Intel and non-Intel microprocessors, but it may perform additional optimizations for Intel microprocessors than it performs for non-Intel microprocessors.
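Instead of passing the variable on every mpirun command line, you can also export it once for the current shell; the fabric combination here is only an example:

$ export I_MPI_FABRICS=shm:dapl
$ mpirun -n 4 ./myprog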
Visit the Intel MPI Library for Linux OS Knowledge Base for additional troubleshooting tips and tricks, compatibility notes, known issues, and technical notes.

For more information, see Websites:
   Product Web Site
   Intel MPI Library Support
   Intel Cluster Tools Products
   Intel Software Development Products

2 Using the Intel MPI Library

This section describes the basic Intel MPI Library usage model and demonstrates typical usages of the Intel MPI Library.

2.1 Usage Model

Using the Intel MPI Library involves the steps shown in Figure 1, beginning with selecting a network fabric.

Figure 1: Flowchart representing the usage model for working with the Intel MPI Library.

2.2 Before You Begin

Before using the Intel MPI Library, ensure that the library, scripts, and utility applications are installed. See the product Intel® MPI Library for Linux OS Installation Guide for installation instructions.

2.3 Quick Start

To start using the Intel MPI Library:

1. Source the mpivars.[c]sh script to establish the proper environment settings for the Intel MPI Library (an example command follows the list below). It is located in the <installdir>/<arch>/bin directory, where <installdir> refers to the Intel MPI Library installation directory (for example, /opt/intel/impi) and <arch> is one of the following architectures:
   • intel64 - Intel 64 architecture binaries
   • mic - Intel Many Integrated Core Architecture
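For instance, with the Intel 64 binaries installed under the example directory above, the command might look as follows; the exact path depends on your installation and product version:

$ . /opt/intel/impi/intel64/bin/mpivars.sh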
where <installdir> is the full path to the installed package.

All supported compilers have equivalent commands that use the prefix mpi for the standard compiler command. For example, the Intel MPI Library command for the Intel Fortran Compiler (ifort) is mpiifort.

2.5 Setting up the Intel MPI Library Environment

The Intel MPI Library uses the Hydra process manager. To run programs compiled with mpiicc (or related commands), make sure your environment is set up correctly.

1. Set up the environment variables with appropriate values and directories, for example, in the .cshrc or .bashrc files (a sample .bashrc fragment is sketched below):
   • Ensure that the PATH variable includes the <installdir>/<arch>/bin directory. Use the mpivars.[c]sh scripts included with the Intel MPI Library to set up this variable.
   • (SDK only) If you are using the Intel Composer XE packages, ensure that the LD_LIBRARY_PATH variable contains the directories for the compiler library. To set this variable, run the compilervars.[c]sh scripts included with the compiler.
   • Set any additional environment variables that your application uses.
2. Make sure that every node can connect to any other node without a password.
3. Create a hostfile text file that lists the nodes in the cluster, using one host name per line. For example:
   $ cat > ./hostfile
   node1
   node2
   <Ctrl>-D

2.6 Selecting a Network Fabric

The Intel MPI Library dynamically selects the most appropriate fabric for communication between MPI processes.
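Returning to step 1 of section 2.5 above, a minimal .bashrc fragment might look like the following. The paths are illustrative only and assume a default /opt/intel layout; adjust them to your installation:

# Intel MPI Library runtime environment
source /opt/intel/impi/intel64/bin/mpivars.sh
# (SDK only) Intel Composer XE compiler libraries
source /opt/intel/composerxe/bin/compilervars.sh intel64
# Any additional variables your application uses, for example:
export I_MPI_DEBUG=2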
• Test any other fabric using:
  $ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS <fabric> ./myprog
  where <fabric> is a supported fabric. For more information, see Selecting a Network Fabric.

For each of the mpirun commands used, you should see one line of output for each rank, as well as debug output indicating which fabric was used. The fabric(s) should agree with the I_MPI_FABRICS setting.

The <installdir>/test directory in the Intel MPI Library Development Kit contains other test programs in addition to test.c.
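If you want to exercise the additional test programs just mentioned, the same pattern applies with the matching compiler driver. The file names below are assumptions, so check the directory listing first; mpiifort is the Fortran driver named earlier in this guide:

$ cd <installdir>/test
$ ls                                  # see which test sources are provided
$ mpiifort -o myprog_f90 test.f90     # assuming a Fortran source such as test.f90 is present
$ mpirun -n 2 ./myprog_f90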
2.9.1 Building an MPI Application
2.9.2 Running an MPI Application
3 Troubleshooting
3.1 Testing the Installation
3.2 Compiling and Running a Test Program

Disclaimer and Legal Notices

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH,
$ mpirun -genv I_MPI_FABRICS shm:dapl -n <# of processes> ./myprog
or simply:
   $ mpirun -n <# of processes> ./myprog

To use shared memory for intra-node communication and TMI for inter-node communication, use the following command:
   $ mpirun -genv I_MPI_FABRICS shm:tmi -n <# of processes> ./myprog

To select shared memory for intra-node communication and OFED verbs for inter-node communication, use the following command:
   $ mpirun -genv I_MPI_FABRICS shm:ofa -n <# of processes> ./myprog

To utilize the multirail capabilities, set the I_MPI_OFA_NUM_ADAPTERS or the I_MPI_OFA_NUM_PORTS environment variable. The exact settings depend on your cluster configuration. For example, if you have two InfiniBand cards installed on your cluster nodes, use the following commands:
   $ export I_MPI_OFA_NUM_ADAPTERS=2
   $ mpirun -genv I_MPI_FABRICS shm:ofa -n <# of processes> ./myprog

To enable connectionless DAPL User Datagrams (DAPL UD), set the I_MPI_DAPL_UD environment variable:
   $ export I_MPI_DAPL_UD=enable
   $ mpirun -genv I_MPI_FABRICS shm:dapl -n <# of processes> ./myprog

If you successfully run your application using the Intel MPI Library over any of the fabrics described, you can move your application from one cluster to another and use different fabrics between the nodes without re-linking. If you encounter problems, see Troubleshooting for possible solutions.

Additionally, using mpirun is the recommended practice when using a resource manager such as PBS Pro or LSF.
Intel® MPI Library for Linux OS User's Guide

Copyright © 2003-2014 Intel Corporation. All Rights Reserved.
Document Number: 315398-012

Contents

1 Introduction
1.1 Introducing Intel MPI Library
1.2 Intended Audience
1.3 Notational Conventions
1.4 Related Information
2 Using the Intel MPI Library
2.1 Usage Model
2.2 Before You Begin
2.3 Quick Start
2.4 Compiling and Linking
2.5 Setting up the Intel MPI Library Environment
2.6 Selecting a Network Fabric
2.6.1 I_MPI_FABRICS
2.7 Running an MPI Program
2.8 Controlling MPI Process Placement
2.9 Using Intel MPI Library on Intel Xeon Phi Coprocessor
Document Organization

Section                                   Description
Section 1 Introduction                    Section 1 introduces this document.
Section 2 Using the Intel MPI Library     Section 2 describes how to use the Intel MPI Library.
Section 3 Troubleshooting                 Section 3 outlines first-aid troubleshooting actions.

1.1 Introducing Intel MPI Library

The Intel MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, version 3.0 (MPI-3.0), specification.

1.2 Intended Audience

This User's Guide is intended for first-time users of the Intel MPI Library.

1.3 Notational Conventions

The following conventions are used in this document.

Table 1.3-1 Conventions and Symbols used in this Document

   This type style        Hyperlinks
   This type style        Commands, arguments, options, file names
   THIS_TYPE_STYLE        Environment variables
   <this type style>      Placeholders for actual values
   [ items ]              Optional items
   { item | item }        Selectable items separated by vertical bar(s)
   (SDK only)             For Software Development Kit (SDK) users only

1.4 Related Information

To get more information about the Intel MPI Library, explore the following resources:

See the Intel® MPI Library Release Notes for updated information on requirements, technical support, and known limitations.

The Intel® MPI Library Reference Manual for in-depth knowledge of the product features, commands, options, and environment variables.
Copyright © 2003-2014, Intel Corporation. All rights reserved.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

1 Introduction

This User's Guide explains how to use the Intel MPI Library to compile and run a simple MPI program. This guide also includes basic usage examples and troubleshooting tips.

To quickly start using the Intel MPI Library, print this short guide and walk through the example provided.

The Intel MPI Library for Linux OS User's Guide contains information on the following subjects:
   • First steps using the Intel MPI Library
   • First-aid troubleshooting actions

This User's Guide contains the following sections:
NOTE: Ensure the selected fabric is available. For example, use shm only if all the processes can communicate with each other through the availability of the /dev/shm device. Use dapl only when all processes can communicate with each other through a single DAPL provider.

2.7 Running an MPI Program

To launch programs linked with the Intel MPI Library, use the mpirun command as follows:
   $ mpirun -n <# of processes> ./myprog

This command invokes the mpiexec.hydra command. Use the mpiexec.hydra options on the mpirun command line.

Use the -n option to set the number of MPI processes. If the -n option is not specified, the number of processes is either pulled from the job scheduler, or set to the number of cores on the machine if the program is not running under a scheduler.

If you are using a network fabric different from the default fabric, use the -genv option to assign a value to the I_MPI_FABRICS variable.

For example, to run an MPI program using the shm fabric, type in the following command:
   $ mpirun -genv I_MPI_FABRICS shm -n <# of processes> ./myprog

For a dapl-capable fabric, use the following command:
   $ mpirun -genv I_MPI_FABRICS dapl -n <# of processes> ./myprog

To use shared memory for intra-node communication and the DAPL layer for inter-node communication, use the following command:
   $ mpirun -genv I_MPI_FABRICS shm:dapl -n <# of processes> ./myprog
For example, for the Intel Composer XE 2011, use the following source command:
   $ . /opt/intel/composerxe/bin/compilervars.sh intel64

2. In some unusual circumstances, you need to include the <installdir>/<arch>/lib directory in your LD_LIBRARY_PATH. To verify your LD_LIBRARY_PATH settings, use the command:
   $ mpirun -n <# of processes> env | grep LD_LIBRARY_PATH

3.2 Compiling and Running a Test Program

To compile and run a test program, do the following:

1. (SDK only) Compile one of the test programs included with the product release as follows:
   $ cd <installdir>/test
   $ mpiicc -o myprog test.c
2. If you are using InfiniBand, Myrinet, or other RDMA-capable network hardware and software, verify that everything is functioning correctly using the testing facilities of the respective network.
3. Run the test program with all available configurations on your cluster.
   • Test the TCP/IP-capable network fabric using:
     $ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS tcp ./myprog
     You should see one line of output for each rank, as well as debug output indicating the TCP/IP-capable network fabric is used.
   • Test the shared memory and DAPL-capable network fabrics using:
     $ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS shm:dapl ./myprog
     You should see one line of output for each rank, as well as debug output indicating the shared memory and DAPL-capable network fabrics are being used.