Contents
1. Figure 3-2 shows the front view of the Enterprise Chassis, with the node bays (Bay 1 through Bay 14) and the information panel. Compute nodes that are based on POWER or Intel processor architectures have options for processors, memory, expansion cards, and internal disks. The supported virtualization technologies are PowerVM on Power Systems compute nodes, and KVM, VMware ESX, and Microsoft Hyper-V on x86-based compute nodes.
2. Figure 3-7 shows the Enterprise Chassis fan module locations. 3.6.1 Node cooling: There are two compute node cooling zones: zone 1 on the right side of the chassis and zone 2 on the left side of the chassis, both viewed from the rear. The chassis can contain up to eight 80 mm fan modules across the two zones. Four 80 mm fan modules are included in the base configuration for node cooling; other fan modules are added in pairs across the two zones. Figure 3-8 shows the node cooling zones and fan modules.
3. Figure 3-5 shows the Enterprise Chassis power supply locations (power supply bays 1 - 6). Currently, the following types of power supplies are available: 2100 W power supplies and 2500 W power supplies. The ordering feature codes for these power supplies are listed in Table 3-2. The minimum number of power supplies that is configurable is two, and the total number installable is six. Intermixing of 2100 W and 2500 W power supplies in the same chassis is not permitted. Table 3-2, Power supply feature codes, lists for each power supply type the description and the AAS (Power brand) feature codes for the base power supplies (quantity must be 2) and for additional power supplies (quantity can be 0 or more).
4. Figure 11-9 shows the Optical devices panel of the Create Virtual Server wizard, which lists the media in the ITSO VIOS media library with their sizes (GB), pools, and mount types (Read Only or Read/Write), for example the R710_GROUP images and the Linux_RH DVD image. For more information about TR levels for IBM i V7R1, see this website: http://ibm.com/systems/support/i/planning/resave/v7r1.html. 10. In the Physical I/O panel, click Next. All I/O for IBM i clients must be virtualized, and physical devices are unsupported. 11. In the Load source and console page, select your initial load source, from which the system loads the program to install the operating system. In our example, we select the virtual optical device, as shown in Figure 11-10 (Load source and console: select the resources for the load source and console adapters of the IBM i virtual server; here, the load source is the virtual storage/optical device on ITSO VIOS).
5. Accept the default Adapter ID of 5. This value can be changed if needed. To create a virtual SCSI relationship between this VIOS and a client virtual server, specify SCSI as the Adapter type. If other client virtual servers were created, the Connecting Virtual Server ID box features a drop-down menu. When the VIOS is the first virtual server that is defined on the physical server and there are no drop-down options, enter the planned number of the Connecting Virtual Server ID (in this case, 2). In the Connecting adapter ID field, enter the number of the corresponding connecting adapter ID for an existing client virtual server, or the number that is planned for a future virtual SCSI adapter on a client virtual server. A Connecting adapter ID of 102 is used in this example. Click OK to save the settings for this virtual storage adapter and return to the main virtual storage adapter window. Note: The number of virtual adapters that are allowed on the virtual server can be set in this window. Set it to one more than the highest ID number that you plan to assign. If you do not set it correctly, it automatically increases, if necessary, when you assign ID numbers to virtual adapters that exceed the current setting. This value cannot be changed dynamically after a virtual server is activated. 3. Click OK to save the settings for this virtual storage adapter and return to the main virtual storage adapter window.
6. Figure 7-175 shows the ESA login page (note that after 15 minutes of inactivity, the system logs you out automatically and asks you to log in again). Figure 7-176 shows the main ESA page, which is the starting point for the ESA functions. Logged in as padmin, the IBM Electronic Service Agent page presents the following areas: Status (the status of problem reporting for your system, for example "Your system is being monitored"); Problem information (work with problems); Service information (view information about the service information collections, and collect service information related to hardware, software, system configuration, and performance); Activity log (view Electronic Service Agent activity); Settings (work with detailed settings for Electronic Service Agent); SRE filters (view the list of filters that will be applied to Electronic Service Agent problem reporting activities); Export/Import (export or import the Electronic Service Agent configuration); IBM ID (provide an IBM ID to be associated with information sent by Electronic Service Agent for this system or virtual partition); and IBM Electronic Support (display and manage service requests to IBM Electronic Support). For more information about configuring and using ESA, see IBM Systems Electronic Service Agent.
7. Figure 7-111 shows an HMC-managed server (Server-7954-24X-SN107782B) in the powered-off state in the Systems Management > Servers view. Opening a virtual terminal console session with the HMC GUI: One virtual terminal console for each LPAR (partition) can be opened from the HMC. This virtual terminal console can be used for initial operating system installation, network configuration, and debug or general access, if wanted. HMC CLI interface: The HMC command vtmenu can also be used from the HMC CLI. The command prompts for the server and partition for which to open a console. Flex System and SOL: When a Power Systems compute node is managed by an HMC, SOL must be disabled for the node at the CMM to allow access to the virtual terminal of the first partition on a node. For more information about disabling SOL, see "Disabling SOL for chassis" on page 218 or "Disabling SOL for an individual compute node" on page 219. To open a virtual terminal console, complete the following steps: 1. Click Servers in the navigation pane, then click the wanted server in the work pane. The work pane updates and shows the available partitions. Click the wanted partition. By using the task button or the task list, select Operations > Console Window.
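As a sketch of the CLI path mentioned above, a console can also be opened from an HMC restricted shell. The managed system and partition names below are taken from the examples elsewhere in this guide and stand in for your own environment:

    vtmenu
    mkvterm -m Server-7954-24X-SN107782B -p itsoVIOS6A
    rmvterm -m Server-7954-24X-SN107782B -p itsoVIOS6A

vtmenu walks through server and partition selection interactively; mkvterm opens a virtual terminal for a named partition directly, and rmvterm closes a session that was left open.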
8. Virtual server ID (for example, 2) and Environment (AIX/Linux). Select Assign all resources to this virtual server and click Next. 5. Review the summary window, as shown in Figure 8-85. All of the resources are assigned to this virtual server (server Server-7954-24X-SN107782B, virtual server name full_sys_par, virtual server ID 2, environment AIX/Linux, using all resources). 6. Click Finish to complete the creation of the single partition. 8.8.2 Creating a full system partition with the HMC UI: The process to create a full system partition is similar to the process that is described in "Creating the VIOS logical partition" on page 375 using the HMC UI. Complete the following steps: 1. Complete the steps in "Creating the VIOS logical partition" on page 375 to reach the point that is shown in Figure 8-8 on page 359. The window that is shown in Figure 8-86 on page 433 opens (Create Partition: Partition Profile, Processors, Memory Settings).
9. Figure 12-17 shows the general settings for the installed system. 22. As shown in Figure 12-18, if a network is available that provides external network access for a software repository, select the IBM repository and accept the licenses. This makes future updates easier with the yum tool (a short yum sketch follows this item). In this example, there is no access to the Internet and the IBM public repositories, so we leave the boxes cleared and use a locally based software repository instead. Click Next to continue. The IBM Installation Toolkit for PowerLinux "Software repository enablement" panel (Figure 12-18) lists the repositories to be installed on the target system (for example, ibm-power-repo, with ILAN/GPL licenses; click See details for more information). Check the box next to the repository name to select it for installation. To proceed, you must accept all licenses for the selected repositories by checking the box next to "I accept all the licenses above". When finished, click Next. 23. As shown in Figure 12-19, you select which packages to install. The following package options are available: grayed-out packages, which are the mandatory IBM packages to install and cannot be cleared, and other optional packages.
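As a minimal sketch of what that enablement provides later, assuming the installed system has network access and the IBM repository was accepted during installation:

    yum repolist        # confirm that the IBM repository is listed and enabled
    yum update          # pull package updates, including the IBM Power-specific packages

With a locally based repository, the same commands apply once the local repository definition is placed under /etc/yum.repos.d/.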
10. Journal Synchronization and Start the Operating System. During this step, you are prompted to load the next optical device. 13. The installation procedure prompts you with an option to accept all default settings for the installation or to change settings, as shown in Figure 11-23 (Install the Operating System: option 1, Take defaults (no other options are displayed); option 2, Change install options; plus the date and time settings). Status messages appear during the installation process; you do not need to respond to any of these status displays. Figure 11-24 shows the installation process status window (message ID CPI2070, IBM i Installation Status), with stages such as creating needed profiles and libraries, restoring programs to library QSYS, restoring language objects to library QSYS, updating the program table, installing database files, and installing base directory objects. The display is blank for a time between stage 4 and stage 5. 14. The Sign On window opens, as shown in Figure 11-25 (System E1277E3B, Subsystem QBASE). Log on with QSECOFR and leave the password field blank.
11. Figure 8-70 shows the IVM Create Partition: Storage window. Select any number of physical volumes and virtual disks from the lists of devices that are not currently assigned to a partition; you can use the Storage Management functions to change assignments at any time. In this example, the available virtual disk lpar_rootvg (pool rootvg, 20 GB) and a 136.73 GB physical volume (physical location code U78AE.001.WZSR02E-P1-D2) are listed. 9. As shown in Figure 8-71, the Optical/Tape window lists all available physical and virtual optical devices and physical tape devices. By using the Create Device option, you can create more virtual optical devices. Virtual optical devices are typically used to mount ISO images from a media library, such as an operating system installation disk. In the Optical/Tape window, no devices are selected for this example. Physical Optical Devices: select one or more unassigned physical optical devices that you want to assign directly to the partition (none are present here). Virtual Optical Devices: you can use virtual optical devices to mount and unmount media files, such as an ISO image, that are in your media library for use by the partition.
12. If the system board is replaced, transfer the anchor card from the old system board to the new system board. If the anchor card is replaced, the information is transferred from the system board to the new anchor card upon the next boot. If the system board and the anchor card are replaced, the field core override option must be used to reset the core count back to the previous value. 4.5.3 Architecture: IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7 processor and POWER7 processor-based systems include, but are not limited to, the following elements: on-chip L3 cache that is implemented in embedded dynamic random access memory (eDRAM); cache hierarchy and component innovation; advances in the memory subsystem; advances in off-chip signaling; and advances in RAS features, such as power-on reset and L3 cache dynamic column repair. The superscalar POWER7 processor design also provides binary compatibility with the prior generation of POWER processors and support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from POWER6, POWER6+, and POWER7 processor-based systems. Figure 4-9 on page 85 shows the POWER7 processor die layout with the following major areas identified: eight POWER7 processor cores, L2 cache, L3 cache, chip power bus interconnect, and SMP links.
13. Figure 12-31 shows the virtual optical media management view: a media library of 29.88 GB (20.36 GB free), with options to extend or delete the library, and media such as the RHEL installation ISO (assigned to the RHEL6-2 virtual server, Read Only, 2.96 GB), two SLES 11 DVD ppc64 GM images, and diagcd.iso. To install RHEL, complete the following steps: 1. After the virtual media is set up, boot the server and enter SMS. The panel that is shown in Figure 12-32 opens (SMS 1.7 Main Menu: 1. Select Language; 2. Setup Remote IPL (Initial Program Load); 3. Change SCSI Settings; 4. Select Console; 5. Select Boot Options). Type the menu item number and press Enter, or select a navigation key. 2. Select option 5, Select Boot Options. The panel that is shown in Figure 12-33 opens (Multiboot menu: Select Install/Boot Device; Configure Boot Device Order; Multiboot Startup <OFF>; SAN Zoning Support; Management Module Boot List Synchronization; navigation keys: M to return to the Main Menu, ESC to return to the previous screen).
14. Figure 8-13 shows the Modify virtual Ethernet adapter window, which includes an Advanced virtual Ethernet configuration option and OK, Cancel, and Help buttons. 3. When you return to the main virtual Ethernet window, select the second adapter (adapter number 3), then click Edit. Complete the following configuration options, as shown in Figure 8-14 on page 365: Accept the default Adapter ID of 3; this value can be changed if needed. Set the Port Virtual Ethernet option to 1. Select IEEE 802.1Q capable adapter and add the VLAN 4092. Select Use this adapter for Ethernet bridging and set the Priority value. This virtual adapter is used for a second SEA and has a different Port Virtual Ethernet value. The priority value can be the same as the first virtual adapter, or different, as one method to load balance network traffic across the two SEAs in a dual VIOS environment. Click OK. SEA: The mkvdev -sea command now includes a sharing option for the ha_mode attribute (a CLI sketch follows this item). The sharing option divides traffic across the dual VIOS environment based on VLANs, and this function is negotiated in the dual VIOS environment automatically. The Virtual Ethernet: Modify Adapter window lets you specify the adapter ID and Port Virtual Ethernet for this adapter, along with the VSI Type ID, VSI Type Version, VSI Manager ID, and the IEEE 802.1Q settings.
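The following is a minimal sketch of creating that SEA from the VIOS command line with load sharing enabled; the adapter names (ent0 as the physical adapter, ent4 as the bridged virtual adapter, ent5 as the control channel) and the PVID are placeholders for the devices in your own configuration:

    mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent5

The ha_mode=sharing attribute enables the VLAN-based traffic sharing between the two VIOS partitions that is described above. The matching SEA on the second VIOS is created the same way, and the sharing behavior is then negotiated automatically between the two.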
15. (Table 8-3 continued) The remaining rows cover Active Memory Sharing, Active Memory Deduplication, Suspend/Resume, Shared Storage Pools, Thin provisioning, and Thick provisioning. Table notes: (a) when the firmware is at level 7.6 or later, micro-partitions can be defined as small as 0.05 of a processor instead of 0.1 of a processor; (b) IVM supports only a single Virtual I/O Server; (c) needs an IBM POWER processor-based system or later; (d) needs an IBM POWER processor-based system with firmware at level 7.4 or later. Table 8-4 lists the feature codes for ordering PowerVM with the p270 Compute Node: PowerVM Express, feature code 5225; PowerVM Standard, feature code 5227; PowerVM Enterprise, feature code 5228. For more information about the features that are included in each version of PowerVM, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. 8.2.2 PowerVM features: The latest version of PowerVM contains the following features. The p270 includes support for up to 480 virtual servers, or logical partitions (LPARs). Role Based Access Control (RBAC): RBAC brings an added level of security and flexibility to the administration of the Virtual I/O Server (VIOS). With RBAC, you can create a set of authorizations for the user management commands. You can assign these authorizations to a role named UserManagement, and this role can be given to any other user.
16. Figure 7-141 shows viewing the FSP IP addresses from the CMM (for example, Node 05 node05 FSM, Node 06 node06 p270, Node 07 node07 p270, and Node 10, each with a View link). 4. With the IP address of the FSP determined, open a browser and enter the following URL, where system_name is the host name or IP address of the FSP: https://system_name. 5. The ASMI Welcome page opens, as shown in Figure 7-142. Enter the login credentials (an FSM administrator User ID for centrally managed systems, or a CMM supervisor User ID for non-centrally managed systems) and the password, and click Log in. The Welcome page shows the machine type-model (7954-24X), serial number (107782B), firmware level (FW773.00, AF773_021), date and time, the primary service processor location (U78AE.001.WZSR02E-P1), and the user status (for example, dev Disabled, celogin Enabled, celogin1 Disabled, celogin2 Disabled). 6. The User ID and Password pane is replaced with a navigation menu, as shown in Figure 7-143. Expand the Power/Restart Control section.
17. Note: This section provides a general description of the POWER7 processor design that applies to Power Systems servers in general. The p270 Compute Node uses a six-core chip variant that is packaged in a DCM. Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. As with previous generations of systems that were based on POWER processors, the design philosophy for POWER7 processor-based systems is one of system-wide balance, in which the POWER7 processor plays an important role. 4.5.1 Processor options: Table 4-2 defines the processor options for the p270 Compute Node, listing for each feature code the number of sockets, POWER7 chips per socket, cores per POWER7 chip, frequency, and L3 cache size per POWER7 processor. 4.5.2 Unconfiguring: You can order the p270 with Feature Code 2319, which reduces the number of active processor cores in the compute node and thereby reduces software licensing costs. Feature Code 2319 is listed in Table 4-3 (Deconfiguration of cores): feature code 2319, Factory Deconfiguration of one core, maximum quantity one less than the total number of cores. This core deconfiguration feature can also be updated after installation by using the field core override option. As noted in Table 4-3, a minimum of one core must remain active.
18. (PureFlex Express node and I/O adapter options, continued) POWER nodes, Ethernet I/O adapters: CN4058 8-port 10Gb Converged Adapter, EN2024 4-port 1Gb Ethernet Adapter, and EN4054 4-port 10GbE Adapter. POWER nodes, Fibre Channel I/O adapters: not applicable for the converged configurations, or the FC5054 4-port 16Gb FC Adapter. x86 nodes, Ethernet I/O adapters: CN4054 10Gb Virtual Fabric Adapter, EN2024 4-port 1Gb Ethernet Adapter, EN4054 4-port 10GbE Adapter, and LAN on Motherboard (2-port 10 GbE). x86 nodes, Fibre Channel I/O adapters: FC5022 16Gb 2-port Fibre Channel adapter, FC3052 8Gb 2-port Fibre Channel adapter, and FC5024D 4-port Fibre Channel adapter (x222 only). ESXi USB Key: optional with x86 nodes. Port FoD activations: ports are computed during configuration based on the chassis switch, node type, and the I/O adapter selection. IBM i PureFlex Solution: not configurable or available, depending on the configuration. VDI PureFlex Solution: not configurable. Example configuration: There are seven configurations for PureFlex Express, as described in Table 2-4 on page 23. Configuration 2B features a single chassis with an external Storwize V7000 controller. This solution uses FCoE and includes the CN4093 Converged Switch module to provide an FC Forwarder. This means that only converged adapters must be installed on the node and that the CN4093 breaks out Ethernet and Fibre Channel externally from the chassis.
19. From the virtual server's context menu (which also includes Topology Perspectives, Create Group, Change Default Profile, Add to Automation, Inventory, Operations, Restart, Release Management, Schedule Operations, Security, Shutdown, System Configuration, System Status and Health, and Service and Support), select Operations > Console Window > Open Terminal Console, as shown in Figure 7-57 (opening a virtual terminal console on a virtual server from the FSM; a Close Terminal Console option is also available). 2. Acknowledge any Java security messages to allow the console applet to start and open the console window. 3. When the terminal console opens, as shown in Figure 7-58, the management console (FSM) IP address and the current User ID are shown in the window. Enter the password for the current FSM User ID to access the terminal. In this example, the console authenticates with the management console at 9.42.170.223 as USERID, the connection succeeds, and the IBM Virtual I/O Server login prompt is presented. 4. The Terminal Console tab that opened on the FSM can be cleared by clicking OK, as shown in Figure 7-59, to return to the virtual server table or the tab from where you started the console.
20. 1. From the CFGTCP menu, select Option 10, Work with TCP/IP Host Table Entries, and then press Enter. 2. Select Option 1, Add, and press Enter to access the Add TCP/IP Host Table Entry menu. 3. At the Internet address prompt, specify the IP address that you defined earlier. 4. At the Host name prompt, specify the associated fully qualified local host name, and then press Enter. Specify a plus sign (+) at the "+ for more values" prompt to make space available for more than one host name, if necessary. Up to 65 host names can be specified for a single host table entry. 5. Repeat steps 3 and 4 for each of the other hosts in the network with which you want to communicate by name, and add an entry for each. After you define a host table, you can use the character-based interface or System i Navigator to change the configurations. 11.9.7 Starting TCP/IP: You must start TCP/IP to make TCP/IP services ready to use. To start TCP/IP, complete the following steps (a CL sketch follows this item): 1. From the command line, enter the Start TCP/IP command (STRTCP) and press F4 (Prompt) to access the Start TCP/IP menu. 2. Specify YES for the other devices that you want to start optionally; otherwise, specify NO. 3. Press Enter to start TCP/IP on the system. The Start TCP/IP (STRTCP) command starts and activates TCP/IP processing, and starts the TCP/IP interfaces and the server jobs. Only TCP/IP interfaces
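The menu-driven steps above have CL command equivalents; the following is a minimal sketch, with the IP address and host name shown as placeholders rather than values from the example environment:

    ADDTCPHTE INTNETADR('192.168.7.10') HOSTNAME(('ibmi01.example.com'))
    STRTCP

ADDTCPHTE adds one host table entry (the HOSTNAME parameter accepts a list, which is how more than one name is attached to a single entry), and STRTCP starts TCP/IP processing, the interfaces, and the server jobs as described above.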
21. All tabs that are open and associated with this update can be closed. 7.8.5 Service and Support Manager: Service and Support Manager is a plug-in for the FSM. It automatically detects serviceable hardware problems and collects supporting data for serviceable hardware problems that occur on your monitored endpoint systems. The Electronic Service Agent (ESA) tool is integrated with Service and Support Manager and transmits serviceable hardware problems and associated support files to IBM Support. For more information about Service and Support Manager, see the Information Center, which is available at this website: http://pic.dhe.ibm.com/infocenter/flexsys/information/topic/com.ibm.esa.director.help/esa_kickoff.html. This section describes how to configure and activate ESA. Activating ESA: ESA is an IBM monitoring tool that reports hardware events to a support team automatically. Complete the following steps to set up ESA on your IBM Flex System Manager: 1. Access the ESA plug-in from the FSM UI by clicking Home > Plug-ins > Service and Support Manager, as shown in Figure 7-73. The Service and Support Manager summary shows the problem reporting status for the monitored systems (in this example, 0 systems with serviceable problems and 9 systems with no open serviceable problems), along with the Serviceable Problems and electronic services links.
22. Figure 8-55 shows the IVM Guided Setup view. The Integrated Virtualization Manager welcome page (logged in as padmin on pod5bay6VIOS1) allows you to perform various management tasks on a single system. The navigation area includes Partition Management (View/Modify Partitions, View/Modify System Properties, View/Modify Shared Memory Pool), I/O Adapter Management (View/Modify Virtual Ethernet, View/Modify Physical Adapters, View Virtual Fibre Channel), Virtual Storage Management (View/Modify Virtual Storage), IVM Management (View/Modify User Accounts, View/Modify TCP/IP Settings, Guided Setup, Enter PowerVM Edition Key), and Service Management (including Electronic Service Agent). The Guided Setup covers tasks such as mirroring the Integrated Virtualization Manager partition, configuring Ethernet, managing virtual storage, and creating partitions; before you start creating logical partitions, there are a few steps to complete. If you have a System Plan to deploy, you should proceed directly to the Manage System Plan task. 7. To continue the process of modifying the VIOS configuration, click View/Modify Partitions in the navigation area on the left side. Figure 8-56 shows the View/Modify Partitions view; the management partition (VIOS) is shown with a default name of the system serial number. To perform an action on a partition, first select the partition.
23. One ASIC allocated for SAN-attached disks through the CN4058 8-port 10Gb Converged Adapter; one ASIC allocated for storage through the FC5054 4-port 16Gb FC Adapter for multipathing of storage; and one ASIC allocated for networking through a CN4058 8-port 10Gb Converged Adapter. VIOS Server 2 consists of the following components: two processor cores; 16 GB of memory; one ASIC allocated for SAN-attached disks through the CN4058 8-port 10Gb Converged Adapter; one ASIC allocated for storage through the FC5054 4-port 16Gb FC Adapter for multipathing of storage; and one ASIC allocated for networking through a CN4058 8-port 10Gb Converged Adapter. The VIOS virtual servers should be configured for redundant access to storage by addressing storage through both the CN4058 and FC5054 adapters. A standard-width compute node could use two CN4058 adapters, but be aware that, because of the way adapters are routed to I/O modules through the Enterprise Chassis midplane, this requires a compatible I/O module to be installed in I/O module bays 2 and 4. This gives the capability of using an ASIC off each installed CN4058 8-port 10Gb Converged Adapter and provides access to both forms of traffic over each adapter, which gives resiliency at the adapter level for both kinds of traffic. Additional AIX, Linux, or IBM i client virtual servers can now be configured by using resources from the VIOS virtual servers, with the assurance that the loss of a VIOS does not disrupt the clients (a brief client-side check is sketched after this item).
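As a quick way to confirm that a client virtual server really does see its storage through both VIOS partitions, the AIX lspath command can be run in the client; the disk name below is a placeholder:

    lspath -l hdisk0

Two Enabled paths, one through each virtual SCSI or virtual Fibre Channel client adapter (for example, vscsi0 and vscsi1), indicate that the dual VIOS redundancy described above is in place.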
24. Chassis Management menu options (Figure 7-13): Chassis (properties and settings for the overall chassis), Compute Nodes (properties and settings for compute nodes in the chassis), Storage Nodes (properties and settings for storage nodes in the chassis), I/O Modules (properties and settings for I/O modules in the chassis), Fans and Cooling (cooling devices installed in your system), Power Modules and Management (power devices, consumption, and allocation), Component IP Configuration (a single location for you to view and configure the various IP address settings of chassis components), Chassis Internal Network (provides internal connectivity between compute node ports and the internal CMM management network), Hardware Topology (hierarchical view of components in your chassis), and Reports (generate reports of hardware information). Management Module management, as shown in Figure 7-14 (Mgt Module Management menu): User Accounts (create and modify user accounts that will have access to this web console), Firmware (view CMM firmware information and update firmware), Security (configure security protocols, such as SSL and SSH), Network (network settings, such as SNMP and LDAP, used by the CMM), Configuration (back up the current configuration and restore a configuration), Properties (properties and settings, such as Date and Time and Failover), License Key Management (licenses for additional functionality), and Restart (restart the CMM).
25. The Power Systems compute nodes have a three-year limited on-site warranty. Upgrades to the base warranty are available. An upgraded warranty provides a faster response time for repairs, on-site repairs for most work, and after-hours and weekend repairs. For more information about warranty options and our terms and conditions, see this website: http://www.ibm.com/support/warranties. 4.16 Software support and remote technical support: IBM offers technical assistance to help solve software-related challenges. Our team assists with configuration, how-to questions, and setup of your servers. For more information about these options, see this website: http://ibm.com/services/us/en/it-services/tech-support-and-maintenance-services.html. Planning: In this chapter, we describe the steps that you should take before you order and install Power Systems compute nodes as part of an IBM Flex System solution. This chapter includes the following topics: 5.1, Planning your system: An overview, on page 130; 5.2, Network connectivity, on page 136; 5.3, SAN connectivity, on page 139; 5.4, Converged networking, on page 141; 5.5, Configuring redundancy, on page 141; 5.6, Dual VIOS, on page 149; 5.7, Power planning, on page 152; 5.8, Cooling, on page 157; and 5.9, Planning for virtualization, on page 159.
26. This section describes the installation of Red Hat Enterprise Linux (RHEL) from an RHEL distribution image. For more information about supported operating systems, see 5.1.2, Software planning, on page 132. IBM Installation Toolkit: This section describes the process of installing RHEL from the ISO image as provided by Red Hat. We also describe installing RHEL by using the IBM Installation Toolkit for PowerLinux, which also installs the IBM-unique RPMs for the Power Systems compute node. For more information, see 12.1, IBM Installation Toolkit for PowerLinux, on page 554. We install the virtual servers by using virtual optical media and the ISO image of the RHEL distribution as the boot device. Figure 12-31 shows the Virtual Optical Media window in IBM Flex System Manager (tabs: Virtual Disks, Storage Pools, Physical Volumes, Virtual Optical Media, Virtual Fibre Channel). Physical Optical Devices: you can assign physical optical devices on the system directly to a logical partition by selecting the physical optical device and then the task that you want to perform; in this example, there are no physical optical devices on the Virtual I/O Server available for assignment. Virtual Optical Media: you can assign virtual optical media, such as an ISO image, directly to a partition to use for storage by selecting the virtual optical media and then the task that you want to perform. You can also extend the size of the media library or delete an existing media library. (A VIOS command-line sketch of the same media setup follows this item.)
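The same virtual optical media can also be prepared directly on the VIOS command line. This is a rough sketch that assumes the RHEL ISO was already copied to the VIOS; the file name, storage pool, and vhost/vtopt device names are placeholders:

    mkrep -sp rootvg -size 10G
    mkvopt -name rhel64.iso -file /home/padmin/rhel64.iso -ro
    mkvdev -fbo -vadapter vhost0
    loadopt -disk rhel64.iso -vtd vtopt0

mkrep creates the media repository (needed only once), mkvopt adds the ISO to the library as read-only media, mkvdev -fbo creates a file-backed virtual optical device on the client's vhost adapter, and loadopt loads the ISO into that device so that the client virtual server can boot from it.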
27. (Create Partition wizard, continued: specify which resources are to be allocated to the partition.) Every partition needs a default profile. To create the default profile, specify the following information: System name (Server-7954-24X-SN107782B), Partition name (full_sys_par), Partition ID (2), and Profile name. This profile can assign specific resources to the partition or all resources to the partition. Click Next if you want to specify the resources used in the partition, or select "Use all the resources in the system" and then click Next if you want the partition to have all the resources in the system (Figure 8-87, assigning all resources to a full system partition with the HMC). 5. Click Next. 6. The Summary window opens, as shown in Figure 8-88. Click Finish to complete the creation of the full system partition. The Create Lpar Wizard (Server-7954-24X-SN107782B) Profile Summary shows a summary of the partition and profile; click Finish to create the partition and profile, or click Back to change any of your choices. You can see the details of the physical I/O devices you chose by clicking Details, and you can modify the profile or partition by using the partition properties.
28. The Flex System V7000 Enclosure 1 canister view shows the drive slots, the iSCSI name (IQN) of node 1, the iSCSI alias and failover settings, the ports (several Fibre Channel WWPNs that are not configured, 5005076805180370, 5005076805140370, 5005076805100370, and 5005076805040370, plus two active ports, 5005076805000370 and 5005076805080370, the latter on the 10 Gb Ethernet adapter), and the adapters (a configured and detected two-port 10 Gbps Ethernet adapter and a four-port 8 Gbps FC adapter). Figure 6-6 shows the active 10 Gb adapter on Canister 1. By comparing the canister PWWN with the output from the show fcoe database command in Example 6-4 on page 178, you can see that Canister 1 uses port INTA13. 6.2.4 Creating zoning on the CN4093 with the CLI: By creating a zone with the host and the storage controller as members, the two can connect, and storage can be accessed by the operating system platform on the compute node. The zoning steps are the same as those used for regular FC zoning: 1. Create the zone. 2. Create the zoneset, or add the zone to the existing zoneset. 3. Activate the zoneset. Example 6-5 shows, from the ISCLI, creating a zone and populating it (a rough sketch of the commands follows this item).
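The following is only a rough sketch of those three steps in the switch CLI; the zone and zoneset names are invented here, the host WWPN is a placeholder, the canister WWPN is taken from the port listing above as an illustration, and the exact syntax should be verified against the CN4093 ISCLI documentation for your firmware level:

    CN4093(config)# zone name itso_zone1
    CN4093(config-zone)# member pwwn 10:00:00:00:c9:aa:bb:cc
    CN4093(config-zone)# member pwwn 50:05:07:68:05:08:03:70
    CN4093(config-zone)# exit
    CN4093(config)# zoneset name itso_zoneset1
    CN4093(config-zoneset)# member itso_zone1
    CN4093(config-zoneset)# exit
    CN4093(config)# zoneset activate name itso_zoneset1

The first member is the compute node's converged adapter WWPN (as seen in the show fcoe database output) and the second is the storage canister WWPN; after the zoneset is activated, the host and the V7000 canister can log in to each other.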
29. Figure 7-104 shows the Update Password option for managed system access among the context-menu options (such as Manage Custom Groups, Hardware Information, and Serviceability). 2. Enter the correct password in the Update Password window, as shown in Figure 7-105, and click OK. The Update Password: Authentication Failed window for managed system 9.42.171.37 indicates that authentication failed because its HMC Access password has changed; enter the correct HMC Access password to access the selected managed system. 7.9.3 Power compute node management basics: Basic compute node management consists primarily of the following tasks: powering the server on and off, creating virtual servers, creating virtual consoles to virtual servers, updating firmware, and collecting and reporting errors. Powering the server on and off: The power-on process of a Power compute node is the same as for any other HMC-managed Power based server. From the navigation pane, click Systems Management > Servers. In the work pane area, click the option to select the wanted server. When a server is selected, the task button becomes visible, and a list of available tasks is also displayed at the bottom of the work pane. The Power On option can be selected from the list of tasks at the bottom of the work pane or by selecting the task button next to the server. In either case, the same Power On task is started. (The equivalent HMC CLI commands are sketched after this item.)
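For completeness, the same power actions are available from the HMC command line; the managed system name below is the one used in the examples in this chapter and stands in for your own server name:

    chsysstate -m Server-7954-24X-SN107782B -r sys -o on
    chsysstate -m Server-7954-24X-SN107782B -r sys -o off
    lssyscfg -r sys -F name,state

chsysstate powers the managed system on or off, and lssyscfg confirms the resulting state.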
30. Figure 8-21 shows the virtual server wizard summary: server name Server-7954-24X-SN107782B, virtual server name itsoVIOS6A, virtual server ID 1, environment VIOS, 8.0 GB of dedicated memory, 4 dedicated processors, the virtual Ethernet adapters with their VLAN IDs (4091, 1, and 4094) and bridging settings, a virtual SCSI adapter (ID 5) that connects to virtual server 2, adapter 102, and the selected physical adapters (location codes under U78AE.001.WZSR02E-P1). Complete the following steps: 1. Review the summary to ensure that the VIOS virtual server is created as you expect. If you must make corrections, click Back to return to the wanted section and make changes as needed. 2. Click Finish to complete the definition of the VIOS virtual server. The wizard ends, and the FSM displays the Manage Power Systems Resources window. To verify that the virtual server was defined, click the wanted server under the Hosts heading in the navigation area. The content area table displays the new virtual server, as shown in Figure 8-22: itsoVIOS6A, partition ID 1, access state OK, and a Stopped status. (The same check can be made from the command line, as sketched after this item.)
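The same verification can be done from the FSM or HMC command line, which carries the HMC-heritage Power Systems commands; this is a sketch, with the managed system name taken from the example above:

    lssyscfg -r lpar -m Server-7954-24X-SN107782B -F name,lpar_id,state

The output should list itsoVIOS6A with partition ID 1 and its current state.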
31. The implementation is based on a more granular treatment of trunking, where different trunks are defined for the SEAs on each VIOS. Each trunk serves different VLANs, and each VIOS can be the primary for a different trunk. This situation occurs with just one SEA definition on each VIOS. IBM PowerVM Workload Partitions Manager for AIX Version 2.2 has the following enhancements. When used with AIX V6.1 Technology Level 6, the following support applies: support for exporting a VIOS SCSI disk into a Workload Partition (WPAR), with compatibility analysis and mobility of WPARs with VIOS SCSI disks (in addition to Fibre Channel devices, VIOS SCSI disks can be exported into a WPAR); the WPAR Manager command-line interface (CLI), which allows federated management of WPARs across multiple systems through the command line; and support for workload partition definitions, where WPAR definitions can be preserved after WPARs are deleted and can be deployed later to any WPAR-capable system. In addition to the features supported on AIX V6.1 Technology Level 6, the following features apply to AIX V7.1: support for AIX 5L V5.2 Workload Partitions for AIX V7.1 (lifecycle management and mobility enablement for AIX 5L V5.2 Technology Level 10 SP8 Version WPARs), and support for AIX 5L V5.3 Workload Partitions for AIX V7.1 (lifecycle management and mobility enablement).
32. Figure 8-17 shows the defined virtual Ethernet adapter properties. Virtual storage: Here we show an example of creating a virtual SCSI adapter for the VIOS virtual server. When a virtual Fibre Channel adapter is created, the same windows that are shown in "Virtual Ethernet" on page 362 are used; however, change the Adapter type field to Fibre Channel. Complete the following steps: 1. Click Create adapter to open the Create Virtual Adapter window, as shown in Figure 8-18 on page 369. The Virtual Storage Adapters panel is where you specify the virtual storage adapters that are required for this virtual server (maximum number of virtual adapters: 200). Note: You can use the Virtual Storage Management task to define the physical block storage for the VIOS; as client partitions are added and assigned storage, the console automatically creates the SCSI or Fibre Channel server adapters that the client virtual servers will use for storage access. In the Create Virtual Adapter window (Figure 8-18, creating a virtual SCSI adapter on the VIOS), specify the virtual storage adapter ID and client information: Adapter ID, Adapter type (SCSI), Connecting Virtual Server ID (2), and Connecting adapter ID (102). 2. Complete the fields by using the following values. (The resulting server adapter, as seen on the VIOS, is sketched after this item.)
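On the VIOS itself, the server SCSI adapter created here surfaces as a vhost device, and backing storage is attached to it from the VIOS command line; the disk and device names below are placeholders:

    lsmap -vadapter vhost0
    mkvdev -vdev hdisk4 -vadapter vhost0 -dev client2_disk

lsmap shows the mapping for the new adapter (initially with no backing device), and mkvdev attaches a physical volume or logical volume as the virtual target device that the client virtual server (ID 2, adapter 102 in this example) will see as its disk.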
33. Within the network fabric, the FSM detects congestion, applies notification policies, and manages relocation of physical and virtual machines, including their storage and network configurations. Resource pooling: The FSM pools network switches, with placement advisors that consider VM compatibility, processor availability, and energy. Intelligent automation: The FSM performs automated and dynamic VM placement that is based on usage, energy, hardware predictive failure alerts, or host failures. The ability to support the workload demands of tomorrow's workloads is built into the new I/O architecture, which provides choice and flexibility in fabric and speed. With the ability to use Ethernet, InfiniBand, FC, FCoE, RoCE, and iSCSI, the Enterprise Chassis is uniquely positioned to meet the growing I/O needs of the IT industry. 1.5 This book: This book is a comprehensive guide to IBM PureFlex System and IBM Flex System with the p270 Compute Node. The book introduces the new offerings and describes the compute node. Also covered are the management features of IBM PureFlex System, partitioning, and installing various operating systems. IBM PureFlex System: IBM PureFlex System is one member of the IBM PureSystems range of expert integrated systems. PureSystems deliver Application as a Service (AaaS).
34. Figure 7-35 shows the FSM discovered chassis graphical view, listing the chassis components with their access and problem status: the Power Systems compute node servers (7954-24X), the Flex System Manager node, several ESXi host servers, the Flex System V7000 Storage Node enclosure, the I/O modules (FC3171 8Gb SAN Switch, CN4093 Converged Network Switch, shown offline, and EN4093 10Gb Ethernet Switch), and the Enterprise Chassis itself. 7.8.3 Manage Power Systems Resources navigation basics: The Manage Power Systems Resources view that is shown in Figure 7-36 on page 230 is the starting point for basic Power compute node management and can be reached by several methods, including the following most common methods: by clicking Home > Plug-ins > IBM Flex System Manager > Manage Power Systems Resources, or by clicking Chassis Management > General Actions > Manage Power Systems Resources. This initial view shows the hardware (compute nodes) that is currently known in all the managed chassis. This view has two areas of interest: a navigation list on the left side and the content area on the right.
35. Figure 12-50 shows the Finishing Basic Installation window of the SLES installer, with the completed stages (Welcome, System Analysis, Time Zone, Server Scenario, Installation Summary, Perform Installation) and the configuration steps in progress (copy files to installed system, save configuration, install boot manager, save installation settings, prepare system for initial boot, hostname, network, customer center, online update, service, clean up, release notes, hardware configuration, and saving the kdump configuration). At the end of the installation, the system reboots and the VNC connection is closed. Figure 12-51 shows the system console while rebooting (the IBM boot splash screen with the message "STARTING SOFTWARE, PLEASE WAIT..."). After the reboot, VNC restarts with the same configuration, after which we can reconnect with the VNC client.
36. rPerf figures for AIX performance and SPECint2006 performance figures for Linux can be found at this website: http://ibm.com/systems/power/hardware/reports/system_perf.html. Commercial Processing Workload (CPW) figures for IBM i performance can be found at this website: http://ibm.com/systems/power/software/i/management/performance/resources.html. 4.2 Front panel: The front panels of Power Systems compute nodes have the following common elements, as shown in Figure 4-3 on page 77: one USB 2.0 port; a power button and light path light emitting diode (LED) (green); a location LED (blue); an information LED (amber); and a fault LED (amber). Figure 4-3 shows the front panel of the IBM Flex System p270 Compute Node, with the USB 2.0 port and power button on the left and the location, information, and fault LEDs on the right. The USB port on the front of the Power Systems compute nodes is useful for various tasks, including out-of-band diagnostic tests, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical CD or DVD drive for these purposes, in case the need arises; an externally powered CD/DVD drive is recommended. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis. 4.2.1 Light path diagnostic LED panel: The power button on the front of the server, as shown in Figure 4-3, has the following functions.
37. No off-chip drivers or receivers: removing drivers and receivers from the L3 access path lowers interface requirements, conserves energy, and lowers latency. Small physical footprint: the performance of eDRAM when implemented on-chip is similar to conventional SRAM but requires far less physical space; IBM on-chip eDRAM uses only one third of the components that are used in conventional SRAM, which has a minimum of six transistors to implement a 1-bit memory cell. Low energy consumption: the on-chip eDRAM uses only 20% of the standby power of SRAM. POWER7 processor and intelligent energy: Energy consumption is an important area of focus for the design of the POWER7 processor, which includes intelligent energy features that help to optimize energy usage and performance dynamically so that the best possible balance is maintained. Intelligent energy features, such as EnergyScale, are available through the CMM to optimize processor speed dynamically based on thermal conditions and system usage. For more information about the POWER7 energy management features, see Adaptive Energy Management Features of the POWER7 Processor, which is available at this website: http://researcher.watson.ibm.com/researcher/files/us-lefurgy/hotchips22_power7.pdf. Comparison of the POWER7+ and POWER7 processors: Table 4-5 (Comparing the POWER7+ and POWER7 processors) shows the comparable characteristics between the two processor generations.
38. Online resources: IBM Redbooks publications for PureSystems: http://www.redbooks.ibm.com/portals/puresystems?Open&page=pgbycat. IBM Flex System Interoperability Guide: http://www.redbooks.ibm.com/fsig. IBM System Storage Interoperation Center: http://www.ibm.com/systems/support/storage/ssic. Help from IBM: IBM Support and downloads: http://www.ibm.com/support. IBM Global Services: http://www.ibm.com/services. Back cover abstract: IBM Flex System p270 Compute Node Planning and Implementation Guide describes the new POWER7+ compute node for IBM Flex System, provides detailed product and planning information, and explains setting up converged networking, partitioning, and OS installation. To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources that is simple to deploy and can quickly and automatically adapt to changing conditions. You also need to make full use of broad expertise and proven preferred practices in systems management, applications, hardware maintenance, and more. The IBM Flex System p270 Compute Node is an IBM Power Systems server that is based on the new dual-chip module POWER7+ processor and is optimized.
39. (IBM may make) improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product, and use of those websites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Any performance data contained herein was determined in a controlled environment; therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
40. (...) the sector size of the drives changes from 512 bytes to 520 bytes. If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives so that the sector size of the drives changes from 520 bytes back to 512 bytes. 4.9 I/O adapters: The networking subsystem of the IBM Flex System Enterprise Chassis is designed to provide increased bandwidth and flexibility. The new design also allows for more ports on the available expansion adapters, which allows for greater flexibility and efficiency with your system's design. This section includes the following topics: 4.9.1, I/O adapter slots, on page 103; 4.9.2, PCI hubs, on page 104; 4.9.3, Available adapters, on page 105; 4.9.4, Adapter naming convention, on page 106; 4.9.5, IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, on page 106; 4.9.6, IBM Flex System EN4054 4-port 10Gb Ethernet Adapter, on page 108; 4.9.7, IBM Flex System CN4058 8-port 10Gb Converged Adapter, on page 110; 4.9.8, IBM Flex System EN4132 2-port 10Gb RoCE Adapter, on page 112; 4.9.9, IBM Flex System IB6132 2-port QDR InfiniBand Adapter, on page 113; 4.9.10, IBM Flex System FC3172 2-port 8Gb FC Adapter, on page 114; and 4.9.11, IBM Flex System FC5052 2-port 16Gb FC Adapter.
41. powering on a host or server by right clicking the object and then selecting Operations Power On Performance Summary Search the table Search Select Name Access amp Reference Code Problems 2 Avail sa Related Resources gt fi Ere Bhd FS i Explorer Remove Add ta Automation Hardware Information Inwentony Operations Change Password Power OnOff Launch Advanced System Management AS hi Release Wanagement Security Power hianagement Power On System Configuration System Status and Health Schedule Operations Semice and Support Utilization Data Figure 7 48 Object right click options Rebuild Managed System T FF F F FLIT F F F Y 236 IBM Flex System p270 Compute Node Planning and Implementation Guide Typically operational selections start a wizard or display a set of options that are related to the operation Figure 7 49 shows the power on options for the selected Power compute node Power On Server 954 74e SHLOF FESR To power on the system select a power on option and click GK Power on options Normal Hardware Discovery Sustem Profile ron the managed system as defined by the next virtual serve olicy launch the following task Edit Host Figure 7 49 Power on options Chapter 7 Power node management 237 Right click options for an object are context sensitive meaning only valid options for the state of the object or the numb
42. the System Status view as shown in Figure 7 15 gives a visual indication of a node in a discovery status Newly inserted compute node that is discovered by the CMM Figure 7 15 Bay 6 compute node in discovery status 7 7 3 Power compute node management This section describes Power Systems compute node management options through the CMM and how to use these options to allow management by more advanced platform managers These options are used mainly with IVM but can be used with an HMC or FSM When you are performing management operations on Power or x86 based compute nodes there are two primary places at the top of page menu structure of the CMM that are used the System Status tab and the Chassis Management tab System Status option The System Status option shows a graphical chassis map window which is the default view when you enter the CMM web interface You can also access this view by clicking System Status The chassis map is active and shows changes in status of the chassis components by changes in colors and various symbols Placing the mouse cursor over a component shows VPD such as model type serial number and general health status Chapter 7 Power node management 209 The chassis map is also interactive and allows the selection of a component to display the available actions such as power on off boot options and locations LEDs Below the actions a detail window shows all available informati
43. trip Pay Figure 7 123 Enter the FTP server access information 8 Figure 7 124 shows the results of the readiness check against the selected server If the server was in a state that cannot be updated the readiness check fails Click OK to continue i Information Server 7954 24X 8N107782B Licensed Internal Code Readiness check found no errors for the following targets Server 7954 24 SN 1077828 7954 248 1077828 HScF0113 Figure 7 124 Readiness check results Chapter 7 Power node management 295 296 The Change Licensed Internal Code wizard continues with an information window as shown in Figure 7 125 Click Next to continue The FTP server is accessed and a determination is made if a valid update exists in the specified server and location Change Licensed Internal Code Wizard Server 7954 24X sN107782B Welcome to the Change Licensed Internal Code wizard You will be prompted to select the types of Licensed Internal Code LIC Updates to install If there are no updates for the selected types there willl be no prompts for installation Figure 7 125 Change Licensed Internal Code wizard code validation 10 The update concurrency window as shown in Figure 7 126 shows the options that are available for a disruptive in this example or nondisruptive installation Invalid options cannot be selected After you choose the wanted option click OK Managed System and Power L
44. with oversubscription to 3538 W output at 200 VAC. The power supplies also contain two independently powered 40 mm cooling fans.
The 80 PLUS performance specification is for power supplies that are used within servers and computers. To meet the 80 PLUS standard, the power supply must have an efficiency of 80% or greater at 20 percent, 50 percent, and 100 percent of rated load, with a Power Factor (PF) of 0.9 or greater. The standard has several grades, such as Bronze, Silver, Gold, and Platinum. For more information about 80 PLUS, see this website:
http://www.80plus.org
5.7.4 Power limiting and capping policies
Simple power capping policies can be set to limit the amount of power that is used by the chassis. The following policy options are available, which you can configure with the Chassis Management Module (CMM):
- No Power Capping: The maximum input power is determined by the active Power Redundancy policy. This is the default setting.
- Static Capping: Sets an overall chassis limit on the maximum input power. In a situation where powering on a component could cause the limit to be exceeded, the component cannot power on. Static capping can be set as a percentage (with the slider or number box) or as a wattage figure. If there is insufficient power available to power on a compute node, the compute node does not come online.
The power capping options can be se
45. 2 or greater gt Integrated Virtualization Manager IVM For more information about management console options see Chapter 7 Power node management on page 183 Important PowerVM provides several types of licensing called editions Only Standard and Enterprise Editions are supported for Power Systems compute nodes Be sure to evaluate the options that are available in each of those editions and purchase the correct license for what you are implementing If you plan to use advanced features such as Live Partition Mobility or Active Memory Sharing the Enterprise Edition is required For more information about these features see this website http ibm com systems power software virtualization editions As described in 5 1 1 Hardware planning on page 130 rperf reports can be used to check processor values and equivalences Implementing a dual VIOS solution is the best way to achieve a high availability HA environment This environment allows for maintenance on one VIOS without disrupting the clients and avoids depending on just one VIOS to do all of the work functions For more information about implementing a dual VIOS solution see 5 6 Dual VIOS on page 149 Chapter 5 Planning 135 Note If you want a dual VIOS environment external disk access is required for one VIOS or the ETE connected IBM Flex System Dual VIOS Adapter is required to allow diverse SAS controllers for the two internal disks 5 2 Net
46. 36 IBM Flex System p270 Compute Node Planning and Implementation Guide P260 p270 p460 x222 x240 x220 x440 POWER Nodes CN4058 8 port 10Gb Converged Adapter EN4054 4 port 10GbE Adapter Ethernet I O Adapters POWER nodes Not applicable FC5054 4 port 16Gb FC Adapter Fibre Channel I O Adapters x86 Nodes CN4054 10Gb Virtual Fabric Adapter EN4054 4 port 10GbE Adapter Ethernet I O LAN on Motherboard 2 port 10 GbE FCoE LAN on Motherboard 2 port 10 GbE adapters x86 Nodes Not applicable FC5022 2 port 16Gb FC Adapter Fibre Channel I O FC3052 2 port 8Gb FC Adapter Adapters FC5024D 4 port Fibre Channel adapter x222 only ESXi USB Key Optional for x86 compute nodes only Port FoD Ports are computed during configuration that is based upon chassis switch node type and the Activations I O adapter selection IBM i PureFlex Not configurable Solution VDI PureFlex Supported Not configurable Solution a 1x 1 Chassis 2x 2 Chassis amp 3x 3 Chassis Example configuration There are eight different configuration starting points for PureFlex Enterprise as described in Table 2 11 on page 36 These configurations can be enhanced further with multi chassis and other storage configurations Figure 2 7 on page 38 shows an example of the wiring for base configuration 6B which is an Enterprise PureFlex that uses an external Storwize V7000 enclosure and CN4093 10Gb Converged Scalable Switch converged infrastructure switches Al
47. 4-2 shows the system board layout of the IBM Flex System p270 Compute Node.
(Figure callouts: the POWER7 dual-chip modules under the heatsinks, 16 DIMM slots, two I/O adapter connectors, the optional SAS controller card (the IBM Flex System Dual VIOS Adapter), and the disks, which are mounted on the cover that sits over the memory DIMMs.)
Figure 4-2 System board layout of the IBM Flex System p270 Compute Node
4.1.1 Comparing the compute nodes
The p270 is the follow-on to the p260 Compute Node. Table 4-1 shows a comparison between the various models of the two systems.
Table 4-1 p260 and p270 comparison table
(The table compares the p260, machine type 7895, with the p270, machine type 7954. Row labels include processor packaging, which is a single-chip module (SCM) on the p260 versus a dual-chip module (DCM) on the p270, total cores per system, clock speed, L2 cache per chip, L3 cache per chip and per server, memory minimum of 8 GB per server, memory maximum of 512 GB per server, and Relative Performance rperf
48. Figure 3-3 Enterprise Chassis I/O module locations
The internal connections between the node ports and the I/O module internal ports are defined by the following components:
- I/O modules 1 and 2: These modules connect to the ports on an I/O expansion card in slot position 1 for standard-width compute nodes (such as the p270), or slot positions 1 and 3 for double-wide compute nodes (such as the p460 and double-wide x86-based compute nodes). Certain x86-based compute nodes offer integrated local area network (LAN) networking via LAN On Motherboard (LOM) hardware; Power Systems compute nodes have no LOM capabilities and require I/O cards for network access.
- I/O modules 3 and 4: These modules are connected to the ports on an I/O expansion card in slot position 2 for stan
49. 7 (c) Copyright IBM Corp. 2000, 2008. All rights reserved.
NIC Adapters
Device                          Location Code                  Hardware Address
1. Interpartition Logical LAN   U7954.24X.1077E3B-V4-C4-T1     XXXXXXXXXXXX
Navigation keys: M = return to Main Menu, ESC key = return to previous screen, X = eXit System Management Services
Type menu item number and press Enter or select Navigation key:
Figure 9-50 MAC address
5. On the installation server, configure the dhcpd.conf file and, assuming it is also the NFS server, the /etc/exports file. The dhcpd.conf file is shown in Figure 9-51, where we must replace XX:XX:XX:XX:XX:XX and the network parameters with our MAC and IP addresses.
always-reply-rfc1048 true;
allow bootp;
deny unknown-clients;
not authoritative;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
subnet 10.1.0.0 netmask 255.255.0.0 {
  host sles11 {
    fixed-address 10.1.2.90;
    hardware ethernet XX:XX:XX:XX:XX:XX;
    next-server 10.1.2.56;
    filename "yaboot.ibm";
  }
}
Figure 9-51 The dhcpd.conf file for SUSE Linux Enterprise Server 11
6. Create a file in /tftpboot named yaboot.conf-xx-xx-xx-xx-xx-xx, where xx-xx-xx-xx-xx-xx is our MAC address, as shown in Figure 9-52. Figure 9-52 also shows an example of this file that is configured to start the installer and access the DVD ISO image by using NFS.
default=sles11
timeout=100
image[64bit]=in
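Step 5 also mentions the /etc/exports file on the installation server but does not show its contents. The following is a minimal sketch that assumes the SLES 11 DVD ISO contents are made available under /export/sles11 and that SLES-style init scripts are used; both the path and the service names are assumptions, not values taken from this example.
# /etc/exports on the installation (NFS) server
/export/sles11   *(ro,sync,no_root_squash)

# re-export the file systems and restart the services after editing the files
exportfs -ra
rcnfsserver restart
rcdhcpd restart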
50. 8. Connection for the drive card to the system cover (figure callout).
4.8.4 RAID capabilities
Disk drives and SSDs in the Power Systems compute nodes can be used to implement and manage various types of RAID arrays in operating systems that are on the ServerProven list. For the compute node, you must configure the RAID array by running smit sasdam, which starts the SAS RAID Disk Array Manager for AIX.
Note: Internal drives that are configured with only the onboard SAS controller can use RAID 0 and RAID 10. With the optional SAS controller installed, only RAID 0 is possible because each controller has access to only a single drive.
The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Run smit sasdam to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format from this website:
http://www14.software.ibm.com/webapp/set2/sas/f/diags/download
For more information, see "Using the Disk Array Manager" in the Systems Hardware Information Center at this website:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm
Tip: Depending on your RAID configuration, you might need to create the array before you install the operating system in the compute node. Before you can create a RAID array, you must reformat the drives so that the sector size
51. 8 port 10Gb Converged Adapter U78AE 001 W25028Y P1 C19 L1i FC3172 2 port 8Gb Fibre Channel Adapter U7SAE 001 W25028Y P1 T2 PCI E SAS Controller U7FSAE 001 W25028Y P1 T1 PCI to PCI bridge U78AE 001 W2Z25028Y P1 C18 L2 CN4058 8 port 10Gb Converged Adapter Figure 9 29 Using the HMC to assign the USB port to an existing partition profile Chapter 9 Operating system installation methods 465 IVM managed compute node When you are creating an AIX Linux or IBM i partition with IVM by using the wizard select the USB Enhanced Host Controller device under the Physical Adapters option as shown in Figure 9 30 Create Partition Physical Adapters Step 6 of 10 Name Physical Adapters Memory Select any numer of currently unassigned physical adapters You may select each adapter individually or select an entire I O unit or bus using the selection assistant Processors Ethernet Tee eT Selection assistant Optical Tape All Summary Available Physical Adapters Select Physical Location Code Description UFaAE 001 W2S00R2 P1 T1 USB Enhanced Host Controller 3310e000 Figure 9 30 Using the IVM partition wizard to add the USB port to a new partition When you are using IVM to modify a partition from the work area click the partition name then click Physical Adapters and select the USB Enhanced Controller as shown in Figure 9 31 General Memory Processing Ethernet Storage Optical Tape Devices Physical Adapters The
52. Figure 9-35 Using the VIOS lsmap command to verify the optical device assignment
The output of the command indicates that a virtual target device (vtopt0) was created with a backing device of cd0 and assigned to client partition ID 2.
9.5.3 Using a VIOS media repository
The procedure for using the VIOS media repository is much the same as virtualizing a physical device through the VIOS to the client virtual server or partition. An ISO image file is used as the backing device instead of a physical device, such as cd0.
The following overall steps are completed to use a VIOS media repository:
1. Creating the media repository on page 469
2. Loading the media repository on page 470
3. Creating the virtual target device and assigning the media on page 470
The FSM, HMC, and IVM all have GUI methods for performing these steps. The VIOS also has commands that are used to create and populate the media repository. The following example uses the CLI method from the VIOS.
Table 9-3 lists the commands that are used in this section.
Table 9-3 Commands to create and work with a VIOS media repository
Command       Function
mkvdev -fbo   Create a file-backed virtual target device
loadopt       Associate an ISO image with a virtual target device
unloadopt     Unload the ISO image from a virtual target device
lsmap -all    List virtual device mapping to a virtual server or
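As a convenience, the commands in Table 9-3, together with mkrep and mkvopt (which create and populate the repository), can be combined into an end-to-end sequence on the VIOS command line. This is a minimal sketch only; the storage pool, ISO file, and vhost names are illustrative examples and not taken from this environment.
mkrep -sp rootvg -size 10G                                       # create the media repository in the rootvg storage pool
mkvopt -name aix71_base.iso -file /home/padmin/aix71_base.iso    # import an ISO image into the repository
lsrep                                                            # confirm the repository contents
mkvdev -fbo -vadapter vhost0                                     # create a file-backed virtual optical device (vtoptN)
loadopt -disk aix71_base.iso -vtd vtopt0                         # load the ISO into the virtual optical device
lsmap -vadapter vhost0                                           # verify the mapping to the client virtual server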
53. AIX 5L 5 3 on lower TL levels can run WPARS within a Virtual Server that is running AIX V7 For more information about WPARs prerequisites see this website http www 03 ibm com systems power software aix sysmgmt wpar v53_prere q html Linux installations also are supported on the Power Systems compute node Supported versions are listed in Operating system support on page 132 Note Full System partitions are not supported for IBM i because of the requirement for I O to be virtualized 134 IBM Flex System p270 Compute Node Planning and Implementation Guide Important Methods for installing these operating systems are described in Chapter 9 Operating system installation methods on page 437 Virtualized environment planning If you decide to implement a virtualized environment you can create AIX and Linux partitions on the Power Systems compute node with or without a VIOS If you choose not to use VIOS the number of virtual servers is limited by the number of expansion cards in the Power Systems compute node If you choose to use VIOS you can virtualize the limited number of expansion cards to create client virtual servers You must use VIOS 2 2 2 3 or later One of the following management consoles is required to attach to your Power Systems compute node Flexible Service Processor FSP to create virtual servers and perform virtualization gt IBM Flex System Manager gt IBM Hardware Management Console V7R7 7 0
54. ASIC
- Auto-Negotiate to 16 Gb, 8 Gb, or 4 Gb
- KR protocol support at 16 Gb
- ECC protection of high-density RAM
- Two physical PCIe functions, individually configurable into two fully independent FC ports
Figure 4-27 on page 117 shows the IBM Flex System FC5052 2-port 16Gb FC Adapter.
Figure 4-27 The FC5052 2-port 16Gb FC Adapter for IBM Flex System
For more information about this adapter, see the IBM Redbooks Product Guide that is available at this website:
http://www.redbooks.ibm.com/abstracts/tips1044.html?Open
4.9.12 IBM Flex System FC5054 4-port 16Gb FC Adapter
The FC5054 4-port 16Gb FC Adapter from Emulex enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel SAN. This adapter is based on the Emulex XE201 ASIC design and works with the FC5022 16Gb SAN Scalable switch.
The FC5054 4-port 16Gb FC Adapter has the following features and specifications:
- Dual Emulex XE201 ASICs, which allow logical partitioning
- Auto-Negotiate to 16 Gb, 8 Gb, or 4 Gb
- KR protocol support at 16 Gb
- ECC protection of high-density RAM
- Four physical PCIe functions, individually configurable into fo
55. CN switch SAN switch Figure 5 5 Dual SAN switch connection with the IBM Flex System p270 Compute Node With the CN4058 8 port 10Gb Converged Adapter hardware redundancy is possible in the compute node by using the capabilities of the CN4058 to carry TCP and FCP traffic via a converged network For more information about converged networking see Chapter 6 Converged networking on page 163 5 6 Dual VIOS Dual VIOS is supported in the Power Systems compute node Dual VIOS can be set up via multiple configurations depending on the hardware that is installed in the node To configure dual VIOS on a p270 compute node you need the following components gt A system that is managed by an FSM or an HMC gt Storage to host VIOS partitions that consist of one of the following configurations Two internal drives with the IBM Flex System Dual VIOS Adapter installed in the expansion port to allow a SAS controller and a single drive to be allocated per VIOS Both VIOS are installed on storage internally on the compute node Two internal drives to host one VIOS and an ASIC of CN4058 converged adapter that is assigned to the other VIOS to host it on external based storage Two CN4058 converged adapters with one or both ASIC allocated to each VIOS that uses convergence to provide FC and TCP traffic on the same adapter or ASIC No internal drives are required with this option Chapter 5 Planning 149 Because th
56. Chassis support
The Power Systems compute nodes can be used only in the IBM Flex System Enterprise Chassis. They do not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter.
There is no onboard video capability in the Power Systems compute nodes. The machines are designed to use Serial Over LAN (SOL) with Integrated Virtualization Manager (IVM), or the IBM Flex System Manager (FSM) or Hardware Management Console (HMC) when SOL is disabled.
For more information about the IBM Flex System Enterprise Chassis, see Chapter 3, "Introduction to IBM Flex System" on page 53. For information about FSM, see 7.3, "IBM Flex System Manager" on page 191.
Power supplies
There are restrictions as to the number of p270 systems you can install in a chassis that are based on the power supplies installed and the power policies used. For more information and a support matrix, see 3.5, "Power supplies" on page 63.
4.4 System architecture
This section describes the system architecture and layout of Power Systems compute nodes. The overall system architecture for the p270 is shown in Figure 4-8.
(Figure elements include the 16 DIMMs with their SMI buses, the two PCIe 2.0 x8 I/O connectors, the ETE connector, the connection to the front panel, and the SAS controller on the optional Dual VIOS Adapter.)
57. Complete the following steps:
1. As shown in Figure 12-2, in the VIOS, load the IBMIT4LINUX tool in the virtual optical drive by using the loadopt command:
loadopt -disk IBM_Linux_TK -vtd vtopt0 -release
lsrep
Size(mb) Free(mb) Parent Pool   Parent Size  Parent Free
  408001   404017 mediaRep           409344          768
Name                 File Size  Optical  Access
IBM_Linux_TK               863  vtopt0   rw
Linux_RH6_DVD             3121  None     rw
Figure 12-2 Mounting virtual media in the VIOS media repository
2. Under Manage Power System Resources in the FSM, activate the virtual server, as shown in Figure 12-3. (The virtual server table lists the VIOS and client virtual servers with their states; the selected virtual server is activated from its right-click menu.)
Figure 12-3 Activate virtual server panel
3. Open a terminal and go to the SMS menu. For more information, see 9.2, "Accessing System Management Services" on page 438.
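Before the virtual server is activated in step 2, it can be useful to confirm which client the loaded virtual optical device is mapped to. A quick check from the VIOS command line follows; vtopt0 is the device name used in this example, but names vary by configuration.
lsmap -all | grep -p vtopt0    # show the vhost adapter, client partition ID, and backing file for vtopt0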
58. Corp. 2013. All rights reserved.
10.1 Installing VIOS
The installation of the Virtual I/O Server (VIOS) is identical to the AIX process. The following methods are available to install the VIOS on a Power Systems compute node; however, not all methods are available for each of the three management platforms:
- Install by using the installios command. This method is available with FSM and HMC only. Follow the instructions in 9.3, "Installios installation of the VIOS" on page 440.
- Use NIM to install VIOS from the system image that was created by using the mksysb command. This method is supported by FSM, HMC, and IVM. Complete the following steps to install VIOS (a minimal NIM command sequence for steps a and b is sketched below):
  a. The first part of the process, setting up the environment for installation, is described in 9.4, "Network Installation Management method" on page 446. A machine resource is created with the VIOS name, IP address, and so on. Installation resources of a mksysb and corresponding SPOT are also required.
  b. The NIM BOS installation options are configured for the VIOS machine resource by using the proper VIOS mksysb and SPOT resources.
  c. The virtual server or logical partition (LPAR) is started and System Management Services (SMS) is accessed to configure the TCP/IP parameters for the VIOS and NIM server.
  d. The installation boot order is set for the network device, as described in Step 3 of 9.4, "Network Installation Management method" on page 446.
  e.
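For steps a and b, the NIM-side definitions and the BOS installation setup can be done from the NIM master command line. The following is a sketch only: the resource names (vios_mksysb, vios_spot), the machine name (vios1), and the paths are illustrative, and the NIM machine resource itself is assumed to exist already.
nim -o define -t mksysb -a server=master -a location=/export/mksysb/vios_2223.mksysb vios_mksysb
nim -o define -t spot -a server=master -a source=vios_mksysb -a location=/export/spot vios_spot
nim -o bos_inst -a source=mksysb -a mksysb=vios_mksysb -a spot=vios_spot \
    -a accept_licenses=yes -a no_client_boot=yes vios1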
59. Details window, as shown in Figure 7-95, uses the example of eth1 to describe LAN adapter firewall settings configuration.
(The Firewall Settings tab of the LAN Adapter Details window lists the LAN interface address and the available applications, such as Secure Shell, Secure Remote Web Access, Open Pegasus, RMC, Incoming Ping (echo request), and L2TP, together with their ports and the hosts that are allowed to use each application.)
Figure 7-95 LAN Adapter Details: Firewall Settings
The HMC also acts as a functional firewall, which limits access by protocol to private and open networks to which the HMC is also attached. The HMC does not allow any IP forwarding; clients on one network interface of the HMC cannot directly access elements on any other network interface.
You use the Firewall Settings tab of the LAN Adapter Details window to view and change current firewall adapter
60. Dynamic Reconfiguration Connector Index (DRC Index) of the physical slot location is required.
Table 8-5 shows the cross-reference of DRC Indexes to location codes for the p270.
Table 8-5 DRC Index numbers for p270
DRC Index   Description                                  Location Code
21010218    PCI-E SAS Controller                         U78AE.001.ssssss-P1-R1
21010219    PCI-to-PCI bridge (USB port)                 U78AE.001.ssssss-P1-T1
2101021A    Expansion card position 1, first bus         U78AE.001.ssssss-P1-C18-L1
21010239    Expansion card position 2, second bus        U78AE.001.ssssss-P1-C19-L2
2101021C    Expansion card position 2, first bus         U78AE.001.ssssss-P1-C19-L1
21010238    Expansion card position 1, second bus        U78AE.001.ssssss-P1-C18-L2
2101021D    Dual VIOS adapter (second SAS controller)    U78AE.001.ssssss-P1-C20-L1
To create a VIO Server by using a single command, the mksyscfg command is run from the CLI of the HMC or FSM. (In an IVM-managed system, the VIOS is installed in the first LPAR and assigned all the physical I/O resources.) The mksyscfg command has many attributes, including the following attributes that are used here:
- name
- profile_name
- lpar_env
- lpar_id
- min_mem
- desired_mem
- max_mem
- proc_mode
- min_procs
- desired_procs
- max_procs
- sharing_mode
- auto_start
- lpar_io_pool_ids
- io_slots
- max_virtual_slots
- virtual_serial_adapters
- virtual_scsi_adapters
- virtual_eth_adapters
- msp
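As a rough illustration of the single-command approach, the following mksyscfg invocation, run from the HMC or FSM CLI, creates a VIOS partition. Every value is an example only: the managed system name follows the naming used in this chapter, the io_slots entries reuse DRC indexes from Table 8-5, and the virtual_*_adapters attributes are omitted here because their sub-field syntax is best taken from the mksyscfg man page.
mksyscfg -r lpar -m Server-7954-24X-SN107782B \
  -i "name=vios1,profile_name=default,lpar_env=vioserver,\
min_mem=4096,desired_mem=8192,max_mem=16384,\
proc_mode=ded,min_procs=1,desired_procs=2,max_procs=4,\
sharing_mode=share_idle_procs,auto_start=1,max_virtual_slots=200,\
\"io_slots=21010218/none/1,2101021D/none/1\""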
61. E Collapse all menus Machine type model 7954 24X Serial number 107782B Date 2013 6 25 Time 13 02 14 UTC Service Processor Primary Location U78AE 001 WZSRO2E P1 Power Restart Control On LAN Current users System Service Aids System Information User ID Location System Configuration USERID 9 44 168 209 Network Services Performance Setup On Demand Utilities Concurrent Maintenance Login Profile celogin1 Disabled celogin 2 Disabled Figure 7 143 ASMI node power control IBM Flex System p270 Compute Node Planning and Implementation Guide 7 From the Power Restart Control options click Power On Off System as shown in Figure 7 144 Full control of power on options are available from this page The options that are shown are typically the default options that are set by the installation process of the VIOS IVM Click Save settings and power on to power on the compute node Advanced System Management User ID USERID Server 7954 24X SN107782B FW773 00 AF773_021 Expand all menus Power On Off System Collapse all menus Current system power state Off Power Restart Control Current firmware boot side Temporary Current system server firmware state Not running System diagnostic level for the next boot Normal Kec Firmware boot side for the next boot Temporary v Wake On LAN System Service Aids System operating mode Normal System Information System Configura
62. Fabric Manager support
- Ethernet-specific features:
  - IPv4/IPv6 TCP and UDP checksum offload, Large Send Offload (LSO), Large Receive Offload, Receive Side Scaling (RSS), and TCP Segmentation Offload (TSO)
  - VLAN insertion and extraction
  - Jumbo frames up to 9000 bytes
  - Priority Flow Control (PFC) for Ethernet traffic
  - Network boot
  - Interrupt coalescing
  - Load balancing and failover support, including Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), Adapter Load Balancing (ALB), and link aggregation (IEEE 802.1AX)
- FCoE-specific features:
  - Common driver for CNAs and HBAs
  - Total of 3,500 N_Port ID Virtualization (NPIV) interfaces
  - Support for FIP and FCoE Ether Types
  - Fabric Provided MAC Addressing (FPMA) support
  - 2048 concurrent port logins (RPIs) per port
  - 1024 active exchanges (XRIs) per port
Note: The CN4058 does not support iSCSI hardware offload.
Tip: To make the most use of the capabilities of the CN4058 adapter, the following I/O modules should be upgraded to maximize the number of active internal ports:
- For the CN4093, EN4093, EN4093R, and SI4093 I/O modules: Upgrade 1 enables four ports per adapter and Upgrade 2 enables six ports per adapter.
- For the EN2092: Upgrade 1 is required to use four ports of the adapter.
If no upgrades are applied to the Flex System switches, only
63. Flex System p270 Compute Node Planning and Implementation Guide 4 Complete the optional information if needed as shown in Figure 7 29 Alternate Contact Information You can add add alternate contact details in addition to the above primary contact information These fields are optional Alternate Contact Name Alternate Phone Alternate Phone Extension Alternate E mail Machine Location Phone Outbound Connectivity You might require a HTTP proxy if you do not have direct network connection to IBM Support task your Network Administrator Use proxy Apply Figure 7 29 Optional information to enable CMM phone home capability 5 If a proxy is required for external communication to IBM Support be sure to include this information in the optional settings as shown in Figure 7 30 Outbound Connectivity You might require a HTTP proxy if you do not have direct network connection to IBM Support task your Network Administrator A Use proxy Host name Port 8080 A Proxy uses authentication Wiser Mame Figure 7 30 CMM to IBM Support proxy information 6 Click Apply to enable IBM Support and acknowledge any confirmation notices as they appear Chapter 7 Power node management 223 Figure 7 31 shows IBM support is now enabled IBM Chassis Management Module System Status Multi Chassis Monitor Events Service and Support Chassis Management Sera Service and Support Settings Servic
64. Hat Enterprise Linux system This option will preserve the existing data on your storage device s Figure 12 41 Select a fresh installation or an upgrade to an existing installation Chapter 12 Installing Linux 589 12 Select a disk layout as shown in Figure 12 42 You can choose from various installations or create a custom layout for example you can create a software mirror between two disks You can also manage older RHEL installations if they are detected Which type of installation would you like Use All Space Removes all partitions on the selected device s This includes partitions created by other operating ia ma systems Tip This option will remove data from the selected device s Make sure you have backups Replace Existing Linux System s Removes only Linux partitions created from a previous Linux installation This does not remove other partitions you may have on your storage device s such as VFAT or FAT32 Tip This option will remove data from the selected device s Make sure you have backups Shrink Current System Shrinks existing partitions to create free space for the default layout Use Free Space Retains your current data and partitions and uses only the unpartitioned space on the selected device ish assuming you have enough free space available m Create Custom Layout cos adi Manually create your own custom layout on the selected device s using our partitioning tool Figu
65. Health Summary Monitors Update Manager 331 Ready Fi i nfm ole A a A i ris Figure 8 6 Highlighting the Manage Power Systems Resources plug in Chapter 8 Virtualization 357 5 Click Manage Power Systems Resources to display the Manage Power Systems Resources main window as shown in Figure 8 7 A new tab was added to the main tab area IBM Flex System Manager Welcome USERID Problems i off Compliance 2 off Help U Home Chassis blan Manage Powe EN Select Action Manage Power Systems Resources k Welcome Flex System Manager Version E l Hosts 2 Secon ee Ree ace ll eee Eats Search the table Search PE Virtual Servers La Operating Systems Power Units Select Mame Access State Detailed Reference C server 7954 24 SN107732 M or Started Hone Figure 8 7 FSM Manage Power Systems Resources Creating the virtual server When you open the Manage Power Systems Resources main window as shown in Figure 8 7 you see choices to manage hosts and virtual servers In this section we describe how to create the VIOS virtual server 358 IBM Flex System p270 Compute Node Planning and Implementation Guide To create the virtual server complete the following steps 1 Click Hosts in the navigation area to display in the content area a list of the physical servers as shown in Figure 8 8 Power Systems Resources k Welcome Flex Sy
66. (A full screen of repeating IBM logos is displayed while the system firmware initializes, followed by the boot options prompt.)
1 = SMS Menu                      5 = Default Boot List
8 = Open Firmware Prompt          6 = Stored Boot List
Memory   Keyboard   Network   SCSI   Speaker
Figure 9-14 SMS boot options
16. Select option 1, SMS Menu, to open the SMS Main Menu, as shown in Figure 9-15.
Version AF773_033
SMS 1.7 (c) Copyright IBM Corp. 2000, 2008. All rights reserved.
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
Type menu item number and press Enter or select Navigation key:
Figure 9-15 SMS menu options
17. Select option 2, Setup Remote IPL (Initial Program Load), from the SMS main menu.
18. Select the adapter to use for the installation, as shown in Figure 9-16.
Version AF773_033
SMS 1.7 (c) Copyright IBM Corp. 2000, 2008. All rights reserved.
NIC Adapters
Device                          Location Code                  Hardware Address
1. Interpartition Logical LAN   U7954.24X.1077E3B-V5-C4-T1     42dbfe361604
Navigation keys: M = return to Main Menu, ESC key = return to previous screen, X = eXit System Management Services
Type menu item number and press Enter or select Navigation key:
Figure 9-16 NIC adapter selection
1
67. (A full screen of repeating IBM logos is displayed while the system firmware initializes, followed by the boot options prompt.)
1 = SMS Menu                      5 = Default Boot List
8 = Open Firmware Prompt          6 = Stored Boot List
Memory   Keyboard   Network   SCSI   Speaker
Figure 9-42 SMS menu
The window that is shown in Figure 9-43 on page 474 opens.
68. IP address must be reachable through this adapter 1 ethO 10 91 0 2 2 ethl 9 42 170 223 3 mgmtO 10 3 0 2 Enter a number 1 3 2 Retrieving information for available network adapters This will take several minutes The following objects of type ethernet adapters were found Please select one 1 ent U7954 24X F28D005 V1 C2 T1 26e926276a02 vdevice 1 1an 30000002 n a virtual 2 ent U7954 24X F28D005 V1 C3 T1l 26e926276a03 vdevice 1 1an 30000003 n a virtual 3 ent U7954 24X F28D005 V1 C4 T1l 26e926276a04 vdevice 1 1an 30000004 n a virtual 4 ent U78AE 001 TA4S005 P1 C34 L1 T1 0000c9d16584 pci 800000020000219 ethernet 0 n a physical 5 ent U78AE 001 TA4S005 P1 C34 L1 T2 0000c9d16586 pci 800000020000219 ethernet 0 1 n a physical 6 ent U78AE 001 TA4S005 P1 C34 L2 T1 0000c9d16588 pci 800000020000238 ethernet 0 n a physical 7 ent U78AE 001 TA4S005 P1 C34 L2 T2 0000c9d1658a pci 800000020000238 ethernet 0 1 n a physical Enter a number 1 7 Enter a number 1 7 4 Figure 9 3 Interactive installios continued The FSM activates the new VIOS virtual server to determine the network devices that are available to it from the hardware that is allocated to it within its activated profile A list of options is presented and one should be selected The proper selection should be based on information about the hardware that is assigned in the partition profile and the I O modules to which the adapters connect The list that us
69. Implementation Guide The IBM Flex System FC3172 2 port 8Gb FC Adapter has the following specifications gt Bandwidth 8 Gbps maximum at half duplex and 16 Gbps maximum at full duplex per port gt Throughput 3200 MBps full duplex gt Support for FOP SCSI and IP protocols gt Support for point to point fabric connections F Port Fabric Login gt Support for Fibre Channel Arbitrated Loop FCAL public loop profile Fibre Loop FL Port Port Login gt Support for Fibre Channel services class 2 and 3 gt Support for FCP SCSI initiator and target operation gt Support for full duplex operation gt Copper interface AC coupled Figure 4 26 shows the IBM Flex System FC3172 2 port 8Gb FC Adapter Figure 4 26 FC3172 2 port 8 Gb FC Adapter for IBM Flex System Chapter 4 Product information and technology 115 For more information about this adapter see the IBM Redbooks Product Guide that is available at this website http www redbooks ibm com abstracts tips0867 html 0pen 4 9 11 IBM Flex System FC5052 2 port 16Gb FC Adapter The FC5052 2 port 16Gb FC Adapter from Emulex enables high speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel SAN This adapter is based on the Emulex XE201 ASIC design and works with the FC5022 16Gb SAN Scalable switch The FC5052 2 port 16Gb FC Adapter has the following features and specifications gt Based on a single Emulex XE201 controller
70. Installing Linux 593 2 Select New installation and click Next The Installation Settings window opens as shown in Figure 12 48 Preparation Installation Settings af Welcome Click any headline to make changes or use the Change menu below af System Analysis Overview Expert af Time Zone i aS i keyboard Layout Installation English US qf Server Scenario amp Installation Summary Partitioning Perform Installation Create partition dew sdal 23 53 MB with id 41 Create swap partition dewsdaz 2 01 GB Configuration oe nuts Create root partition dew sda 17 96 GB with ext3 Check Installation Software Hostname Product SUSE Linux Enterprise Server 11 Network Patterns Customer Center Base System Novell App rmor 32 6t Runtime Environment Service Help and Support Documentation Clean Up Minimal Systern Appliances GNOME Desktop Environment X Window System Hardware Configuration Print Server Size of Packages to Install 2 3 GB os Ld Ld Online Update Release Notes Ld Language Primary Language English US Figure 12 48 Installation settings 3 Accept the default values or click Change to change the following values Keyboard layout Partitioning Software Language Click Next to continue 594 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 The Perform Installation window open
71. LPAR can be activated The value also represents the lower end of the Dynamic LPAR DLPAR range or the minimum number or processors that are assigned without disruption Desired processors The desired value is the total number of processors to allocate when the LPAR starts The LPAR normally starts with this value available but might be activated if any value between the desired and minimum can be allocated Maximum processors The maximum value represents the upper end of the DLPAR range or the total number of processors that can be made available without disruption 378 IBM Flex System p270 Compute Node Planning and Implementation Guide In this example the number of dedicated processes can vary between two and eight dynamically without disruption Changing the minimum or maximum values of a running LPAR is an LPAR profile change that requires a stop and start of the LPAR https 9 42 171 90 hme wel T2d87 Create Lpar Wizard Server 7954 24X SN107732B Processing Settings x Create Partition Partition Profile Specify the desired minimum and maximum processing settings in the fields below Memory Settings Total number of processors 24 00 Minimum processors 5 3 Fj Virtual Adapters Optional Settings Desired processors P 4 Profile Summary R Maximum processors le lt Back Finish Figure 8 30 HMC Processing Settings window 3 Click Next to continue to the Memory Settings windo
72. Meg N 491 Chapter 11 Installing IBM i 0 0 0 cee 497 Tash Planning NS installation acest Y a ME a a Date ee an Bae i 498 11 1 1 Concepts of virtualized I O for IBMi 0008 498 TLZ GIGI SLO AGS sad te 3 8 atta Bd Sew oh Sole aa ate eee ae 8 es ah eA as 499 11 2 Creating an IBM i client virtual Server 0 0000 e eee 501 11 3 Configuring an IBM i console connection 0000005 512 11 4 Installing the IBM i operating system 0 000 eee eee 513 11 5 Installing Licensed Programs 0 0 c eee eee 528 11 6 IPL and Initialize System iw cs cineca Kei bes ea teod ea wea ds 536 11 7 Installing Program Temporary Fix packageS 00005 537 11 7 1 Reviewing fix cover letters before installation 537 IBM Flex System p270 Compute Node Planning and Implementation Guide 11 7 2 Preparing the system for installation of PIFs 537 11 7 3 Installing a Cumulative PTF package 0005 538 11 7 4 Completing fix installation nana aaan aaa 541 11 7 5 Verifying fix installation a na naaa aaaea 543 11 8 Installing software license keyS anaana 545 11 8 1 License key repository 0 0 0 ees 545 11 8 2 Setting usage limit of license managed programs 546 11 9 Basic TCP IP configuration 0 0 0 cc ees 547 11 9 1 Configuring a line description 0 000 cee eee 547 11 9 2 Turning
73. Expansion Factor   Modeled True Memory Size   Modeled Memory Gain   CPU Usage Estimate
1.21                   6.75 GB                    1.25 GB (19%)         0.00
1.31                   6.25 GB                    1.75 GB (28%)         0.20
1.41                   5.75 GB                    2.25 GB (39%)         0.35
1.51                   5.50 GB                    2.50 GB (45%)         0.58
1.61                   5.00 GB                    3.00 GB (60%)         1.46
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.
Figure 4-15 Output from the AIX Active Memory Expansion planning tool
For more information, see the white paper Active Memory Expansion: Overview and Usage Guide, which is available at this website:
http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html
Note: AME is only available for the AIX operating system.
4.8 Storage
The Power Systems compute nodes have an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. It also has an optional second SAS controller (IBM Flex System Dual VIOS Adapter) that installs in the Expansion connector and can then split control of the drives, one to each controller, to allow for dual VIOS support.
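After installation, the onboard SAS controller, the optional second controller, and the internal drives can be listed with standard AIX queries (the same commands also work from the VIOS root shell). Device names such as sissas0 and hdisk0 are typical examples only and vary by configuration.
lsdev -Cc adapter | grep -i sas    # list the SAS controllers (a second one appears when the Dual VIOS Adapter is installed)
lsdev -Cc disk                     # list the internal drives
lscfg -vl hdisk0                   # show the location code and VPD for a specific drive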
74. Node Planning and Implementation Guide 1 1 IBM PureFlex System If you are looking for a highly integrated system for infrastructure consolidation or cloud implementation IBM PureFlex System offerings can help simplify your IT experience IBM PureFlex Systems are comprehensive infrastructure systems that provide an expert integrated computing system which combines servers enterprise storage networking virtualization and management into a single structure Its built in expertise enables organizations to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management These systems are ideally suited for customers who are interested in a system that delivers the simplicity of an integrated solution but that also want control over tuning middleware and the runtime environment IBM PureFlex Systems recommend workload placement is based on virtual machine compatibility and resource availability By using built in virtualization across servers storage and networking the infrastructure system enables automated scaling of resources and true workload mobility IBM PureFlex Systems undergo significant testing and experimentation so they can mitigate IT complexity without compromising the flexibility to tune systems to the tasks that businesses demand By providing flexibility and simplicity an IBM PureFlex System can provide extraordinary levels of IT control efficiency and operating agility that e
75. Open Terminal Console as shown in Figure 7 112 Systems Management gt Servers Server 7954 24X SN107782B sas ad ae E pR F Filter F Tasks a Views z mae Processing pee Active MEPR F Properties F Units i l Memory GB Profile ee Environment A t Change Default Profile Operations 4 DefauttProfie ARX or Linux 4 DefauttProfie Virtual VO Server T Configuration Hardware Information Total 2 Filtered 2 Selected 1 Dynamic partitioning Console Window Open Terminal Window serviceability Close Terminal Connection Tasks itsoAIX1 E a Properties Configuration E Console Window a manana j i Hardware Information Open Terminal Window Operations Close Terminal Connection Dynamic partitioning Serviceability Figure 7 112 Opening a virtual terminal console to a partition from the HMC 2 Acknowledge any Java security messages so that the console applet can start and open the console window 288 IBM Flex System p270 Compute Node Planning and Implementation Guide 3 When the terminal console opens as shown in Figure 7 113 direct access to the virtual terminal of the selected partition is available No other authentication to the HMC is required The virtual console window frame header indicates the HMC IP address partition name and server name Sj 9 42 171 90 itsoAlX Server 7954 24 SN1 07 7828 eq File Edit Font Enc
76. Planning and Implementation Guide 25 As shown in Figure 12 20 the summary page shows a summary of the choices that were made Click Next to begin the installation of the Linux distribution and the packages that were selected IBM Installation Toolkit for PowerLinux summary What will be installed Linus distribution Red Hat Enterprise Linux 6 Update 4 Profile Default workloads fileprint Linus distribution media CO DVD ROM IBMIT media CD DVO ROM IBM packages PAM authenticate esagent pLinux ibm power managed rhel6 ibmPMLinux lbmittinux libservicelag devel Ipa nmon sct pexpect Data that will be LOST Partitions sdal sda2 sda3 LVM all logical volumes in vg_rheyvs RAID none uit Prev Mest Figure 12 20 Summary of the installation Review the installation settings IBM packages and partition settings that you selected If you want to change a setting click Prey After verifying that the settings are correct click Next to start the installation process Important Review these settings carefully After you click Next the settings cannot be changed Chapter 12 Installing Linux 573 26 When prompted to change media see Figure 12 21 unload the IBMIT4LINUX virtual media and then load the Linux installation virtual media in the VIOS via a command line on the VIOS partition The following commands are used to perform these tasks unloadopt vtd vtopt0 release loadopt disk Linux
77. SN107782B Virtual Adapters
(The Virtual Adapters page of the Create Partition wizard provides a Create Virtual Adapter action with Ethernet Adapter, Fibre Channel Adapter, SCSI Adapter, and Serial Adapter choices; the adapter table shows the new virtual Ethernet adapter together with the two server Serial adapters that are reserved for the partition.)
Figure 8-37 HMC Virtual Adapters window, updated, showing the first virtual Ethernet adapter
4. Repeat steps 1 and 2 and use the following values, as shown in Figure 8-38 on page 388:
- Accept the default Adapter ID of 3. This value can be changed if needed.
- Set the Port Virtual Ethernet (also referred to as PVID) option to 1.
- Select the "This adapter is required for virtual server activation" option.
- Select the "IEEE 802.1Q capable adapter" option to allow future dynamic adds of VLANs. In the Add VLAN ID field, enter 4092, then click Add.
- Select the "Use this adapter for Ethernet bridging" option and set the Priority value.
This virtual adapter is used for a second SEA and has a different Port Virtual Ethernet value. The priority value can be the same as in the first virtual adapter, or different, as one method to load balance network traffic across the two SEAs in a dual VIOS environment.
SEA: The mkvdev s
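The note above is cut off just as it introduces the mkvdev command that builds the SEA. For reference, a typical SEA creation on the VIOS looks like the following minimal sketch, where ent0 is the physical (or link-aggregated) adapter and ent4 is the bridging virtual Ethernet adapter that was created above; all device names and the PVID are illustrative, not taken from this example.
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
lsmap -all -net      # list the resulting SEA and its virtual adapter mappings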
78. TB 2 5 inch 7 2K RPM 1 2 TB 2 5 inch 10K RPM gt Expansion Unit 4939 A29 IBM Storwize V7000 Expansion Enclosure 24 disk slots gt Optional software IBM Storwize V7000 Remote Mirroring IBM Storwize V7000 External Virtualization IBM Storwize V7000 Real time Compression 7226 Multi Media Enclosure The 7226 system that is shown in Figure 2 8 on page 44 is a rack mounted enclosure that can be added to any PureFlex Enterprise configuration and features two drive bays that can hold one or two tape drives one or two RDX removable disk drives and up to four slim design DVD RAM drives These drives can be mixed in any combination of any available drive technology or electronic interface in a single 7226 Multimedia Storage Enclosure Chapter 2 IBM PureFlex System 43 Figure 2 8 7226 Multi Media Enclosure The 7226 enclosure media devices offers support for SAS USB and Fibre Channel connectivity depending on the drive Support in a PureFlex configuration includes the external USB and Fibre Channel connections Table 2 16 shows the Multi Media Enclosure and available PureFlex options Table 2 16 Multi Media Enclosure and options 2 5 7 Video keyboard and mouse option The IBM 7316 Flat Panel Console Kit that is shown in Figure 2 9 is an option to any PureFlex Enterprise configuration that can provide local console support for the FSM and x86 based compute nodes Figure 2 9 IBM 7316 Flat Pane
79. The compute node s FSP IP address is within the same subnet as the CMM as described in Component IP configuration on page 211 The compute node is added as a Server in the HMC as described in Servers on page 279 The chassis that contains the Power compute node is not managed by an FSM Chapter 7 Powernode management 269 270 This section describes the following topics gt HMC networking gt HMC adapter configuration gt Adding a Power compute node as an HMC managed system or server HMC networking overview An HMC can have multiple Ethernet adapters In a traditional HMC and Power based rack server environment the HMC typically has a private and open network connection The private network with the HMC acting as a DHCP server is used to communicate with a rack server s dedicated FSP Ethernet port The open network is used for access to the HMC s user interfaces from a more general use or management network In an HMC and Power based compute node environment the network configuration typically consists of one or more open networks connections The DHCP server that is provided by the private side of the HMC might not be desirable in the overall network configuration in a Flex environment because of the limited options available All the service processors in a Flex chassis including the FSPs communicate on the chassis internal management network All network connectivity with the FSP to a compute node must
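The HMC network and FSP connections that are described in this section can also be checked from the HMC command line. These are read-only queries, and the exact output format varies by HMC level.
lshmc -n            # display the HMC network configuration (adapters, IP addresses, firewall settings)
lssysconn -r all    # list the service processor connections that are known to this HMC and their states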
80. Total: 1   Filtered: 1
(The job log for the Import Updates job, with the message filter set to All, shows the job and its subtasks being activated on June 20, 2013, the update being successfully imported to the update library, and compliance then being run for all new updates that were found.)
Figure 7-66 Update import job log
The update import part of the overall update task is now complete. The steps in the next section are a continuation of the compute node update process.
Applying the system firmware update
When you close the Active and Scheduled Jobs tab, the Acquire Updates task can continu
81. Type menu item number and press Enter or select Navigation key 1 Figure 12 33 Select Install Boot Device Chapter 12 Installing Linux 583 3 Select option 1 Select Instal1 Boot Device The panel that is shown in Figure 12 34 opens Version AF 73 021 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Select Device Type Diskette Tape CD DVD IDE Hard Drive Network List all Devices Navigation keys M return to Main Menu ESC key return to previous screen Type menu item number and press Enter or select Navigation key 3 Figure 12 34 Select Install Boot Device 584 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 Booting from a virtual optical drive is required so select option 3 CD DVD The panel that is shown in Figure 12 35 opens Version AF 73 021 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Select Media Type 1 SCSI 2 SSA 3 SAN 4 SAS 5 SATA 6 USB 7 IDE 8 ISA 9 List All Devices Navigation keys M return to Main Menu ESC key return to previous screen X eXit System Management Services Type menu item number and press Enter or select Navigation key 1 Figure 12 35 Selection of the SCSI DVD reader Chapter 12 Installing Linux 585 5 For the virtual optical media select option 1 SCSI The panel that is shown in Figure 12 36 opens Version AF 73 021 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Se
82. U78AE 001 WZ500R2 10GbE 4 port Mezzanine Adapter a2191007df1033e7 View Pi C16 L2 Children UV6AE 001 W2500R2 Dual Port 8Gb FC Mezzanine Card 7710322577107501 View Pisses Children U7V8AE 001 W2ZS500R2 P1 T1 USB Enhanced Host Controller 3310e000 View Children UV6AE 001 W2500R2 P1 T2 PCI Express x8 Planar 3Gb SAS Adapter View Children Figure 8 61 IVM Partition Properties Physical Adapters tab When all changes for the tabs are made click OK to commit the changes and return to the View Modify Partitions view 13 Figure 8 62 shows the View Modify Partitions view after the changes are made to the management partition Also an information symbol is displayed for this example in the Processors column View Modify Partitions To perform an action on a partition first select the partition or partitions and then select the task System Overview Total system memory 16 GB Total processing units 16 Memory available 13 12 GB Processing units available 14 4 Reserved firmware memory 896 MB Processor pool utilization 0 16 1 0 System attention LED Inactive Partition Details i Ia im Shutdown More Tasks x Select ID Name State Uptime Memory Processors Entitled Utilized Reference Processing Units Processing Units Code 319 4 E 1 itsoVIOS64A Running ae 2 GB 0 5 0 16 2001 A oo Days Ay details as Figure 8 62 IVM View Modify Partitions view showing sy
83. USERID Password F3 Exit F12 Cancel Figure 11 12 Console Authentication 5250 window Chapter 11 Installing IBMi 513 514 2 After you are signed on select the power compute node that the IBM i virtual server in which you want to install the operating system is on as shown in Figure 11 13 Remote 5250 Console System Selection Management Console FSM 5CF3FC5F518A Select one of the following and press Enter Option System Name Type Mode Serial State 1 Server 7954 24X S 7954 24X 1077E3B Started 2 Server 7954 24X S 7954 24X 107782B Started System F3 Exit F5 Refresh F12 Cancel Figure 11 13 IBM i 5250 Console selection menu IBM Flex System p270 Compute Node Planning and Implementation Guide 3 Enter 1 to select Connect dedicated for an operating system installation as shown in Figure 11 14 Remote 5250 Console Partition Selection Management Console FSM 5CF3FC5F518A System Type option press Enter 1 Connect dedicated 2 Connect shared 3 Show Details Reference Use Option Partition Partition State Code Count Console Status 50 v7ritr6 Stopped 00000000 0 Unknown 55 v7rltr6_ 2 Stopped 00000000 0 Unknown F3 Exit F5 Refresh F12 Cancel Figure 11 14 Console Partition Selection menu 4 The virtual server opens a window in which you are prompted to select the Language Group the default is 2924 For more information about language groups see the Information Center at this website http pic dhe ibm
84. Up and Down buttons.
(The Columns tab of the formatting dialog shows Available Columns and Selected Columns lists, with entries such as Agent Time Zone Offset, Allocated Storage (MB), Server System, Available Processing Units, Available Memory (GB), Architecture, Configurable Processors, Asset Tag, Configurable Memory, Available Capacity, Serial Number, Model, Available Memory (MB), Available Processors, Problems, and IP Addresses, plus Add and Restore Defaults controls.)
Figure 7-46 Table column formatting options
When the wanted changes are made, click OK to save and apply. In the example that is shown in Figure 7-47, the Problems column was moved up in the list, or to the left in the table. The Detailed State column moved to the right and out of view of this example.
Figure 7-47 Revised table view of hosts
Object menu options
Most objects in the FSM that are light blue in color can be clicked for more information and right-clicked to show the main operations that can be performed on that object. The Power On example in Figure 7-48 shows an example of
85. Yellow: Some bays must be left empty in the chassis.
Table 5-6 Maximum number of supported compute nodes for installed power supplies
(The table lists the maximum number of p260/p270 and p460 compute nodes for each power supply configuration: N+1 with N=5 (6 total), N+1 with N=4 (5 total), N+1 with N=3 (4 total), and N+N with N=3 (6 total), shown separately for the two available power supply types.)
Note: For more information about the exact configuration, see the Power Configurator (System x) at this website:
http://www.ibm.com/systems/bladecenter/resources/powerconfig.html
5.8 Cooling
The flow of air within the Enterprise Chassis follows a front-to-back cooling path: cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. There are two cooling zones for the nodes, a left zone and a right zone. The cooling is scaled up as required, based on which node bays are populated. The number of cooling fans that are required for a number of nodes is described further in this section.
Air is drawn in through the front node bays and the front airflow inlet apertures at the top and bottom of the chassis. When a node is not inserted in a bay, an airflow damper closes in the midplane, meaning that no air is drawn in through an unpopulated bay. When a node is inserted into a bay, the damper is opened mechanically by the insertion of the node, which allows for cooling of the node in that bay.
5.8.1 Enterprise Chassis fan populatio
86. If the installed status value of a licensed program is COMPATIBLE, it is ready for use. If the installed status value of a licensed program is BACKLEVEL, the licensed program is installed, but its version, release, and modification are not compatible with the currently installed level of the operating system. Verify the current version, release, and modification of the licensed program and reinstall it where applicable.

The following status values of installed LICPGMs are possible:

COMPATIBLE: The product is installed. Its version, release, and modification are compatible with the installed level of the operating system. You can use this program with the installed level of the operating system.

INSTALLED: The product is installed, but it might not be compatible with the installed level of the operating system.

Note: Licensed programs that are part of the single set are listed on the display window as INSTALLED. You must verify that the release level of the licensed program is compatible with the release level of the operating system. For IBM products, check the current release levels for licensed programs, or check with your software supplier before you use the licensed program.

ERROR: The product did not install successfully, or the product is only partially installed. For example, a language or a language object for the product is not installed. Use the Check Product Option (CHKPRDOPT) command to determine the cause of the failure.
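A minimal sketch of such a check from a command line follows; the product ID shown is only an example, and the remaining parameters can be reviewed with command prompting (F4):

CHKPRDOPT PRDID(5770SS1) OPTION(*BASE)

Any problems that the command detects for the product option are reported through messages in the job log.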
87. 27. Click Next after the new virtual media is loaded. The Insert CD/DVD media page of the IBM Installation Toolkit for PowerLinux prompts you to insert the first disc of Red Hat Enterprise Linux 6 Update 4 and click Next to start the installation process.

Figure 12-21 Insert CD/DVD media page

28. The installation of the distribution begins, as shown in Figure 12-22, which displays the installation progress log (manage_parts.py deleting and stopping RAID arrays, removing LVM logical volumes and volume groups, and deleting partitions). The beginning of the installation process is displayed until the system automatically reboots; after the system reboots, you can monitor the installation progress from the terminal that is connected to the system. After a few minutes, the LPAR reboots.
88. a task is selected a resource menu may be presented showing all resources supported by the task Resource Selection This selection will list the resources in the system that are supported by these procedures Once a resource is selected a task menu will be presented showing all tasks that can be run on the resource s F1 Help FIO Exit F3 Previous Menu Figure 7 160 Diagnostics function selection IBM Flex System p270 Compute Node Planning and Implementation Guide 3 The task selection option present the function selection window as shown in Figure 7 161 By using the down arrow key scroll to the bottom of the list until the Update and Manage System Flash option is shown Press Enter to display the Update and Manage Flash menu options TASKS SELECTION LIST 801004 From the list below select a task by moving the cursor to the task and pressing Enter To list the resources for the task highlighted press List MORE 24 Display or Change Bootlist Format Media Gather System Information Hot Plug Task Identify and Attention Indicators Load ISO Image to USB Mass Storage Device Local Area Network Analyzer Log Repair Action Microcode Tasks RAID Array Manager Update Disk Based Diagnostics Update and Manage System Flash BOTTOM F1 Help F4 List F10 Exit Enter F3 Previous Menu Figure 7 161 Diagnostics task selection list The Update and Manage Flash window that is shown in Figure 7 162 on page 322 includes the li
89. access structure Chapter 4 Product information and technology 89 90 Flexible POWER7 processor packaging and offerings POWER 7 processors can optimize to various workload types For example database workloads typically benefit from fast processors that handle high transaction rates at high speeds Web workloads typically benefit more from processors with many threads that allow the breakdown of web requests into many parts and handle them in parallel POWER7 processor cores The architectural design for the POWER7 processor is an eight core processor with 80 MB of on chip L3 cache 10 MB per core However the architecture allows for differing numbers of processor cores to be active from one core to the full eight core version On chip L3 intelligent cache A breakthrough in material engineering and microprocessor fabrication enabled IBM to implement the L3 cache in eDRAM and place it on the POWER7 processor die L3 cache is critical to a balanced design as is the ability to provide good signaling between the L3 cache and other elements of the hierarchy such as the L2 cache or SMP interconnect The L3 cache that is associated with the implementation depends on the number of active cores For the six core variant in the p270 this means that 6 x 10 60 MB of L3 cache is available The on chip L3 cache is organized into separate areas with differing latency characteristics Each processor core is associated with a Fast Local Reg
90. and PowerLinux partitions IBM i does not use SMS and uses 5250 emulation for its system console For more information see 11 3 Configuring an IBM i console connection on page 512 1 Open an SSH session to the FSM and log in with a valid user ID and password At the command prompt use the vtmenu command 246 IBM Flex System p270 Compute Node Planning and Implementation Guide 2 The vtmenu initially shows all the Power compute nodes under management control of the FSM as shown in Figure 7 61 1 Server 7954 24X SN107782B 2 Server 7954 24X SN1077E3B Enter Number of Managed System q to quit 2 Figure 7 61 Vtmenu initial window 3 Choose a Managed System the example uses server 7954 24X SN107782B 4 A list of partitions that are running on the compute node are displayed as shown in Figure 7 62 Choose the partition for example for itsoAIX1 choose 1 Partitions On Managed System Server 7954 24X SN1077E3B 0S 400 Partitions not listed 1 itsoAIX1 Open Firmware 2 itsoVIOS6A Running Enter Number of Running Partition q to quit 1 Figure 7 62 Vtmenu Partitions 5 When the partition is chosen the virtual terminal session starts You might need to press Enter to update the sessions and display the current output 6 To exit the virtual terminal session enter the key sequence of tilde then a period to return to the partition selection menu Updating system firmware The FSM updates system
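As a recap of the vtmenu procedure that is described above, the complete flow looks like the following sketch; the user ID and FSM address are placeholders:

ssh USERID@<FSM IP address>
vtmenu
   (select the managed system by number, then select the running partition)
~.
   (the tilde and period sequence ends the virtual terminal session)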
91. and connecting virtual server or partition IVM and the automatic storage management in the FSM virtual server wizard creates both sides of these pairs or partner adapters gt Physical I O adapters are typically not assigned but can be if available In most cases the VIOS was defined to provide virtualized access to network and storage gt An AlX Linux virtual server can be configured to use all physical resources and run as a full system partition gt The virtual server can be defined as Suspend capable gt The virtual server can be defined as Remote Restart capable For more information about operating system installation to virtual servers and LPARS see Chapter 9 Operating system installation methods on page 437 8 6 1 Using the IVM GUI The IVM user interface or command line can be used to create more LPARs on the Power compute node The GUI method is described in this section Access the IVM GUI from a web browser http and https protocols are supported After the proper login credentials are entered the View Modify Partitions view as shown in Figure 8 64 on page 414 normally is displayed If it is not click this option at the top of the Navigation menu IVM usage note Unlike FSM or HMC profiles each IVM partition configuration reserves the amount of memory and CPU that is specified for that partition regardless whether the partition is active Chapter 8 Virtualization 413 View Modify Par
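As a reference for the command-line alternative that is mentioned above, the existing partition configuration can be listed from the IVM (VIOS) restricted shell before more LPARs are created; the field list in this sketch is optional and can be adjusted:

lssyscfg -r lpar -F name,lpar_id,state,lpar_env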
92. and is used to select the Power Off method option Normal or Fast A normal power off ends all active jobs in a controlled manner During that time programs that are running in those jobs can perform cleanup end of job processing A Fast power off ends all active jobs immediately The programs that are running in those jobs cannot perform any cleanup A best practice is to shut down all active partitions before a server power off is performed With no active partitions a fast power off can safely be used The example that is shown in Figure 7 110 uses the Fast power off option Click OK to continue and return to the work pane view Power Off Managed System Server 7954 24X SN107782B Powering off the managed system will make all of the partitions gt unavailable until the machine is powered on again Select a power off option below and click OK to power off the managed system or click Cancel Power Off Options Normal power off Fast power off Figure 7 110 HMC managed server power off options The work pane view shows the selected server powering down with a message and reference codes as shown in Figure 7 111 Systems Management Servers View lable pw s ne oe ee Filter j Tasks Views Available Select Name a eras a Processing a Available a Reference Memory GB Code Units M E Server 7954 24x Sh1 077828 Pl Power Off In Progress 716 22 625 C1922000
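The same Normal and Fast power off operations can also be driven from the HMC command line. The following commands are a sketch that uses the managed system name from this example; replace it with your own system name:

chsysstate -m Server-7954-24X-SN107782B -r sys -o off
chsysstate -m Server-7954-24X-SN107782B -r sys -o off --immed

The first form corresponds to a normal power off; adding --immed corresponds to the fast power off.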
93. are not possible.

The Manage Virtual Server window for the Red Hat Enterprise Linux virtual server (environment AIX/Linux, state Started) shows the RMC connection as not available, along with general settings such as the installed operating system level, IP address, processors, and memory.

Figure 12-1 RMC not available

RMC not available

The RMC not available message appears when there is no synchronization between the RMC daemons in the virtual server or LPAR and the management appliance (HMC, FSM, and so on). This can be because of missing software packages, but also for other reasons, such as network communication issues between the LPAR and the management appliance.

IBM Installation Toolkit for PowerLinux, in addition to preparing and facilitating the installation of Linux on IBM Power servers, helps you select the Service and Productivity Tools software packages for the distribution. IBM Installation Toolkit for PowerLinux offers the possibility to install yum repositories, which makes the update of packages easier, provided there is access to repositories externally via the Internet or previously created on an internal network.

IBM Installation Toolkit for PowerLinux also offers some other t
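From within the Linux LPAR, a quick first check of the RMC subsystem is shown in the following sketch, which assumes that the RSCT packages from the Service and Productivity Tools are installed and that the commands are run as root:

lssrc -s ctrmc
/usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc

The first command should report the ctrmc subsystem as active; the second lists the management servers with which the LPAR currently has an RMC session.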
94. as shown in Figure 7 90 on page 269 268 IBM Flex System p270 Compute Node Planning and Implementation Guide Hardware Management Console Manage User Profiles and Access Properties View HME Events hecroot Help Figure 7 90 Active tasks in the taskbar Navigation pane The navigation pane in the left portion of the window contains the primary navigation links for managing your system resources and the HMC The following links can be found on the navigation pane Welcome Systems Management System Plans HMC Management Service Management Updates Work pane The work pane in the right portion of the window displays information that is based on the current selection from the navigation pane For example when you select Welcome in the navigation pane the Welcome window content displays in the work pane as shown in Figure 7 89 on page 268 Status bar The status bar in the lower left portion of the window provides visual indicators of current overall system status It also includes a status overview icon that can be selected to display more detailed status information in the work pane 7 9 2 Connecting a Power compute node to an HMC The following dependencies are available for managing a Power based compute node from an HMC gt The CMM must successfully complete the discovery process of the node as described in 7 7 2 Connecting a Power compute node to the CMM on page 208
95. based storage for 1xVIOS is used external storage via an FC type adapter or the CN4058 8 port 10Gb Converged Adapter that uses FCoE addressed storage Hosting both VIOS on external based storage via CN or FC type adapters gt Atleast one Ethernet I O module if you are running a converged network with a CN4058 8 port 10Gb Converged Adapter gt Atleast one Fibre Channel I O module if the compute nodes have an FC Adapter for storage connectivity As described previously in this chapter 4 port adapters such as the FC5054 4 port 16Gb FC Adapter or 8 port adapters such as the CN4058 8 port 10Gb Converged Adapter can be assigned at an ASIC level This configuration allows 50 of the adapter s ports to be assigned to each VIOS in a dual VIOS environment While not all inclusive the options described here provide the basics for a dual VIOS environment Memory requirements for other partitions beyond the base order amounts are not considered and must be evaluated before ordering Tip Consider the memory and CPU that is required for each VIOS to drive the hardware that is assigned to it to adequately provide network and storage performance for all client LPARs When the two virtual I O servers are installed the normal methods of creating a Shared Ethernet Adapter SEA failover for virtual networking and redundant paths for the client partition disks NPIV and vSCSI can be used Chapter 5 Planning 151 5 7 Power planning W
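For reference, the Shared Ethernet Adapter failover device on each VIOS is typically created with a command of the following form; all adapter names and the default PVID in this sketch are placeholders that depend on your own configuration:

mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5

In this sketch, ent0 is the physical (or aggregated) adapter, ent4 is the bridging virtual adapter, and ent5 is the control channel adapter that the two VIOS use to determine which SEA is primary.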
96. c Copyright IBM Corp 2000 2008 All rights reserved ain Menu Select Language Setup Remote IPL Initial Program Load Change SCSI Settings Select Console Select Boot Options Type menu item number and press Enter or select Navigation key Figure 9 43 SMS main menu options 4 Select option 5 Select Boot Options to display the multiboot options The window that is shown in Figure 9 44 opens Version AF773_ 033 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Multiboot 1 Select Install Boot Device 2 Configure Boot Device Order 3 Multiboot Startup lt OFF gt 4 SAN Zoning Support 5 Management Module Boot List Synchronization Navigation keys M return to Main Menu ESC key return to previous screen X eXit System Management Services Type menu item number and press Enter or select Navigation key Figure 9 44 Multiboot options menu 474 IBM Flex System p270 Compute Node Planning and Implementation Guide 5 Select option 1 Select Install Boot Device The window that is shown in Figure 9 45 opens Version AF773_ 033 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Select Device Type Diskette Tape CD DVD IDE Hard Drive Network List all Devices Navigation keys M return to Main Menu ESC key return to previous screen Type menu item number and press Enter or select Navigation key Figure 9 45 Boot device options Chapter 9 Operating system installation met
97. cannot be created from the GUI on these new adapters. The chhwres (change hardware resources) command should be used to create a virtual adapter with the wanted other VLANs. The following example shows the command to create a virtual adapter in virtual slot 15 with a PVID of 555 and other VLANs of 20, 30, and 40:

chhwres -r virtualio --rsubtype eth -o a --id 1 -s 15 -a "port_vlan_id=555,ieee_virtual_eth=1,\"addl_vlan_ids=20,30,40\",is_trunk=1,trunk_priority=1"

After the adapter is created through the command line, the GUI reflects the new adapter and the other VLANs.

12. As shown in Figure 8-61, click the Physical Adapters tab to view or modify the physical adapters that are assigned to the management or VIOS partition. These unassigned resources can be assigned to other partitions as real devices, if wanted. The Physical Adapters tab of the Partition Properties window for itsoVIOS6A (1) explains that the selected rows in the table of physical adapters represent the adapters that are currently assigned to the partition, and that all unselected rows represent adapters that have not been assigned. You can change the adapter assignments for the partition by deselecting existing items or selecting items that are not currently assigned. The table lists each adapter by physical location code and description, such as the 10GbE 4-port Mezzanine Adapter.
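To confirm the new adapter and its VLAN settings from the same command line, a listing such as the following can be used; the field list is optional and can be adjusted as needed:

lshwres -r virtualio --rsubtype eth --level lpar -F lpar_name,slot_num,port_vlan_id,addl_vlan_ids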
98. client partition 11 1 1 Concepts of virtualized I O for IBM i IBM i that is running on Power Systems compute nodes has a prerequisite that all of its I O is virtualized so IBM i must be installed as a client partition of one or more VIOS host partitions This means that a VIOS host has ownership of I O adapters which can provide TCP traffic and Fibre Channel traffic because no hardware can be dedicated to an IBM i operating system partition For more information about supported I O adapter options see 4 9 I O adapters on page 102 IBM i workloads are not necessarily different from AIX in their I O profile but often do have a higher throughput requirement which is measured as I O operations per second or IOPS and is more sensitive to changes in response times For more information about performance considerations when you are sizing your client partition s I O see the Performance Capabilities Reference that is available at this website http ibm com systems power software i management performance resource s html For more information about virtualizing I O for IBM i see IBM PowerVM Virtualization Managing and Monitoring SG24 7590 which is available at this website http www redbooks ibm com abstracts sg247590 html 498 IBM Flex System p270 Compute Node Planning and Implementation Guide 11 1 2 Client storage Since IBM i 6 1 1 the VSCSI client driver can support multipath through two or more VIOS partitions t
99. com infocenter iseries v7r1m0 Chapter 11 Installing IBMi 515 5 The Install License Internal Code LIC window opens Select Option 1 to install the LIC as shown in Figure 11 15 Install Licensed Internal Code System E1277E3B Select one of the following 1 Install Licensed Internal Code 2 Work with Dedicated Service Tools DST 3 Define alternate installation device Selection 1 Licensed Internal Code Property of IBM 5770 999 Licensed Internal Code c Copyright IBM Corp 1980 2010 All rights reserved US Government Users Restricted Rights Use duplication or disclosure restricted by GSA ADP schedule Contract with IBM Corp Figure 11 15 Install License Internal Code console menu 6 Select the Load Source device in the next window as shown in Figure 11 16 Press Enter If more than one disk was assigned to the IBM i virtual server choose the disk with the lowest Controller number as a rule of thumb for the load source device Press F10 to confirm your choice Select Load Source Device Type 1 to select press Enter Sys Sys 1 0 1 0 Opt Serial Number Type Model Bus Card Adapter Bus Ctl YGEYXXFKUJWE 6B22 050 255 4 0 0 1 F5 Refresh F12 Cancel Figure 11 16 Selecting the Load Source device 516 IBM Flex System p270 Compute Node Planning and Implementation Guide 7 As shown in Figure 11 17 select Option 2 Install Licensed Internal Code and Initialize system Confirm the LIC installation in the confir
100. command
===>
F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel   F13=Information Assistant   F16=System Main menu
(C) COPYRIGHT IBM CORP. 1980, 2009.

Figure 11-29 Work with Licensed Programs menu

3. Select Option 11 to Install licensed programs. You are taken to the Install Licensed Programs menu, as shown in Figure 11-30. The list of programs spans multiple windows. For each entry, the display shows the licensed program, the option number, and a description (for product 5770SS1, for example, Library QGPL, Library QUSRSYS, Extended Base Support, Online Information, Extended Base Directory Support, System/36 Environment, System/38 Environment, Example Tools Library, AFP Compatibility Fonts, PRV CL Compiler Support, Host Servers, and System Openness Includes). Type a 1 in the Option column next to a program to select it for installation.

Figure 11-30 Install Licensed Programs menu

4. Page through the display to find the licensed programs that you want. Enter a 1 next to the licensed programs to be installed. The following LICPGMs are preselected as part of a new system installation:
5770-SS1 Library QGPL
5770-SS1 Library QUSRSYS
5770-SS1 option 1, Extended Base Support
5770-SS1 option 3, Extended Base Directory Support
5770-SS1 option 30, QSHELL
577
101. countries Linux is a trademark of Linus Torvalds in the United States other countries or both Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States other countries or both Java and all Java based trademarks and logos are trademarks or registered trademarks of Oracle and or its affiliates UNIX is a registered trademark of The Open Group in the United States and other countries Other company product or service names may be trademarks or service marks of others xii IBM Flex System p270 Compute Node Planning and Implementation Guide Preface To meet today s complex and ever changing business demands you need a solid foundation of compute storage networking and software resources that is simple to deploy and can quickly and automatically adapt to changing conditions You also need to make full use of broad expertise and proven preferred practices in systems management applications hardware maintenance and more The IBM Flex System p270 Compute Node is an IBM Power Systems server that is based on the new dual chip module POWER7 processor and is optimized for virtualization performance and efficiency The server supports IBM AIX IBM i or Linux operating environments and is designed to run various workloads in IBM PureFlex System The p270 Compute Node is a follow on to the IBM Flex System p260 Compute Node This IBM Redbooks publication is a comprehensive guide
102. different orderable configurations within the enterprise PureFlex offerings These offerings cover various redundant and non redundant configurations with the different types of protocol and storage controllers Table 2 11 summarizes the PureFlex Enterprise offerings that are fully configurable within the IBM configuration tools Table 2 11 PureFlex Enterprise Offerings Networking 10 GbE 10 GbE 10 GbE 10 GbE 10 GbE 10 GbE 10 GbE 10 GbE Ethernet Networking Fibre FCoE FCoE FCoE FCoE 16 Gb 16 Gb 16 Gb 16 Gb Channel Number of 2 0 2 0 1x 2 8 1x 2 8 4 0 4 0 1x 4 10 1x 4 10 Switches up to 18 2x 4 10 2x 4 10 2x 8 14 2x 8 14 maximum 3x 6 12 3x 6 12 3x 3x 12 18 chassis TOR 12 18 V7000 Storwize V7000 Storwize V7000 Storwize V7000 Storwize Storage V7000 Storage V7000 Storage V7000 Storage V7000 Node Node Node Node V7000 Storage Node or Storwize V7000 Chassis 1 2 or 3x Chassis with two Chassis management modules fans and PSUs Rack 42 U Rack mandatory TF3 KVM Tray Optional Media enclosure DVD only DVD and tape optional V7000 Storage Options 24 HDD 22 HDD 2 SSD 20 HDD 4 SSD or Custom Options Storwize expansion limit to single rack in Express overflow storage rack in Enterprise nine units per controller Up to two Storwize V7000 controllers up to nine IBM Flex System V7000 Storage Nodes V7000 VIOS AIX IBM i and Solutions Consultant Express on first Controller Content
103. dual FC dual SAN switch redundancy which is connected with storage attached through a SAN for a dual width compute node In this scenario the operating system has four paths to each storage and the behavior of the multipathing driver might vary depending on the storage and switch type This scenario is one of the best scenarios for high availability The two adapters prevent an adapter fault the two switches prevent the case of a switch fault or firmware upgrade and as the SAN has two paths to each storage device the worst scenario is the failure of the complete storage Figure 5 4 shows this scenario V7000 Storage SAN switch FC switch St FC adapter Storage Area Compute node J FC adapter SAN switch FC switch Figure 5 4 Dual FC and dual SAN switch redundancy connection This configuration might be improved by adding multiple paths from each Fibre Channel switch in the chassis to the external switches which protects against a single cable or port failure Another scenario for the p270 is the use of the CN4093 10Gb Converged Scalable Switch to give the p270 the capability of retaining adapter level hardware redundancy while still providing 10 GbE for TCP Figure 5 5 on page 149 shows this scenario 148 IBM Flex System p270 Compute Node Planning and Implementation Guide SSS SAN switch FC switch aaa ees Ets oka FC adapter CN adapter Storage Area Network Compute node
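After the operating system is installed, the number of active paths can be verified from the client side. For example, on an AIX client the following command (the disk name is a placeholder) lists the MPIO paths for a disk; in the dual-adapter, dual-switch scenario that is described here, four Enabled paths per disk are expected:

lspath -l hdisk0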
104. example https 9 42 170 140 DHCP The default TCP IP network configuration that is used during the installation is DHCP Client If a DHCP server is present in the network the installation process is automatically assigned an IP address There is an opportunity to change for a permanent IP address later in the configuration process 14 Accept the license agreement when prompted 15 The toolkit main menu opens as shown in Figure 12 11 Choose Install Linux IBM Installation Toolkit for PowerLinux What would you like to do Install Linux Update the firmware of this system Create an IBM Installation Toolkit bootable USB key Clone or restore systems Configure network Access documentation resources Register at IBM Monitor tasks 0 Oo 0 oO 0 Oo 0 oO Figure 12 11 IBM Installation Toolkit for PowerLinux main menu 562 IBM Flex System p270 Compute Node Planning and Implementation Guide 16 In Figure 12 12 on page 565 you must choose the software that you want to install The following options are available in this panel Linux distribution Select one of the supported Linux distributions and matching the DVD Linux distribution to use At the time of this writing IBM Toolkit version 5 4 1 supports the following distributions SUSE Linux Enterprise 10 SP4 SUSE Linux Enterprise 11 SP2 and SP3 Red Hat Enterprise Server Linux 6 3 and 6 4 Red Hat Enterprise Server Linux 5 8 and 5 9 Supported operating syst
105. fan modules. Four 80 mm and two 40 mm fan modules are standard in models 8721-A1x, 8721-LRx, and 7953-94X.

Dimensions:
- Height: 440 mm (17.3 inches)
- Width: 447 mm (17.6 inches)
- Depth, measured from front bezel to rear of chassis: 800 mm (31.5 inches)
- Depth, measured from node latch handle to the power supply handle: 840 mm (33.1 inches)

Weight:
- Minimum configuration: 96.62 kg (213 lb)
- Maximum configuration: 220.45 kg (486 lb)

Declared sound level: 6.3 - 6.8 bels

Operating air temperature: 5 - 40 C

Electrical power:
- Input power: 200 - 240 V AC (nominal), 50 or 60 Hz
- Minimum configuration: 0.51 kVA (two power supplies)
- Maximum configuration: 13 kVA (six 2500 W power supplies)
- Power consumption: 12,900 W maximum

3.2 Compute nodes

The IBM Flex System portfolio of servers, or compute nodes, includes IBM POWER7, POWER7+, and Intel Xeon processors. Depending on the compute node design, the following form factors are available:

- Standard node: This node occupies one chassis bay, or half of the chassis width. An example is the IBM Flex System p270 Compute Node.
- Double-wide node: This node occupies two chassis bays side by side, or the full width of the chassis. An example is the IBM Flex System p460 Compute Node.

Figure 3-2 shows a front view of the chassis with the bay locations identified and several standard-width nodes installed.
106. firmware on a Power compute node with Update Manager an FSM plug in Update Manager can download updates directly from IBM across the internet Updates can also be manually imported to the update library if Internet access is not available The following example describes the manual import process and updating of a Power compute node Chapter 7 Powernode management 247 Acquiring system firmware package The firmware update for a Power compute node call be downloaded from IBM Fix Central This package consists of the payload or fix file and other files that are used by update manager and the FSM Figure 7 63 shows a file list for a typical Power compute node system firmware update O1AF773_016 016 dd xm1 O1AF773_ 016 016 htm1 01AF773 016 016 pd sdd O1AF773_ 016 016 readme txt O1AF773_ 016 016 rpm O1AF773_016 016 xml Figure 7 63 Power compute node system firmware file list FSM and IBM Fix Central When a Power compute node firmware update is requested from Fix Central ensure that the option that includes the packaging for IBM System Director is selected Use SCP to transfer these files from the local workstation to the FSM Normal user access to the FSM CLI limits the typical commands that can be run However the mkdir command is available and the files can transfer to a directory such as home USERID power Importing into the update library The import process and the actual application of the updates can be started as t
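For example, assuming that the firmware files are in the current directory of the workstation and that the FSM address is a placeholder, the transfer can be done as follows:

(on the FSM command line)   mkdir /home/USERID/power
(from the workstation)      scp 01AF773_016_016* USERID@<FSM IP address>:/home/USERID/power/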
107. flow through the CMM s network external 1 Gb connection The HMC can manage a Power compute node from anywhere in the network if the IP address of the FSP can be reached However for reasons of security and fault tolerance for example it is recommended that the HMC open network connection be connected to the same switch as the CMM s 1 Gb network connection HMC network adapter configuration This section describes network configuration settings that are available for the HMC To open the Change Network Setting window select HMC Management gt Change Network Settings from the navigation and work pane areas to open the Customize Network Settings window IBM Flex System p270 Compute Node Planning and Implementation Guide Identification HMC identification provides information that is needed to identify the HMC in the network as shown in Figure 7 91 u Customize Network Settings i Identification LAM Adapters Mame Services Routing Use the following information to identify your console on the network Specify host name damain name and a short description of this computer Console name localhost Domain name localdomain O Figure 7 91 Identification tab The Identification tab of the Customize Network Settings window see Figure 7 91 includes the following information gt Console name HMC name that identifies the console to other consoles in the network This console name is
108. following attributes gt Submits the problems to IBM through the network gt Is disabled by default as shown in Figure 7 169 Integrated Virtualization Manager Welcome padmin itsovios6A Edit my profile Help Log out Partition Management Electronic Service Agent View Modify Partitions The Electronic Service Agent application automatically monitors and collects hardware problem information and sends this information to IBM e View Modify System Properties support It also can collect hardware software system configuration and performance management information which may help IBM support View Modify Shared Memory Pool assist in diagnosing problems 1 0 Adapter Management View Modify Virtual Ethernet W Electronic Service Agent has not been activated on the managed system To activate Electronic Service Agent open a terminal session and View Modify Physical Adapters execute the cfgassist command e View Virtual Fibre Channel Virtual Storage Management e View Modify Virtual Storage IVM Management View Modify User Accounts e View Modify TCP IP Settings e Guided Setup e Enter PowerVM Edition Key Service Management e Electronic Service Agent Service Focal Point e Manage Serviceable Events Service Utilities Create Serviceable Event e Manage Dumps e Collect VPD Information e Updates Backup Restore e Application Logs e Monitor Tasks e Hardware Inventory Figure 7 169 IV
109. for internal and external FC traffic. This installation can consist of SAN switch modules that provide integrated switching capabilities, or pass-through modules that act as an FC access gateway to make internal compute node ports available to the outside. All switch-capable I/O modules can be set to Access Gateway mode, if required, to act as such.

To verify compatibility with the storage infrastructure that you want to connect the FC I/O module to, check the System Storage Interoperation Center (SSIC), which is available at this website:

http://ibm.com/systems/support/storage/ssic/interoperability.wss

Ensure that the external interface ports of the switches or pass-through modules that are selected are compatible with the physical cabling types that are to be used in your data center. Also ensure that the features and functions that are required in the SAN are supported by the proposed switch modules or pass-through modules. For more information about these modules, see Chapter 3 in IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984.

The available switch and pass-through options are listed in Table 5-5.

Table 5-5 SAN switch options for the chassis (feature code and description)
- 3591: IBM Flex System FC3171 8Gb SAN Pass-thru
- 3595: IBM Flex System FC3171 8Gb SAN Switch
- 3770: IBM Flex System FC5022 16Gb SAN Scalable Switch
110. for VIOS and client LPAR terminals, as shown in Figure 7-150. Enter the padmin password. The terminal applet states that to access the terminal you must first authenticate with IVM, and it prompts for the Hostname (the VIOS/IVM IP address), the User ID (padmin), and the Password.

Figure 7-150 IVM virtual terminal to an LPAR

4. The terminal session authenticates with IVM and logs you in to the VIOS command line, as shown in Figure 7-151. After the credentials are accepted, the applet reports that the connection is successful and displays the last login information.

Figure 7-151 IVM virtual terminal to the VIOS

When the terminal that is opened connects to a client LPAR, you are prompted for the operating system level user ID and password credentials before access to the command line is granted.

Opening a virtual terminal by using the VIOS command line

By using the command line, you can open a virtual terminal only for VIO clients, not for the VIOS. Complete the following steps to open the virtual terminal for VIO clients:

1. Use Telnet or SSH to connect to the Virtual I/O Server.
2. Run the mkvt -id <partition ID> command to open the virtual t
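For example (the partition ID of 2 is illustrative), the following commands open a console to a client partition and, if the session is ever left hung, force it closed from another VIOS login:

mkvt -id 2
rmvt -id 2

The mkvt session is typically ended with the same tilde and period (~.) key sequence that is used for vtmenu sessions.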
111. for the Automatic IPL option the system is initially loaded automatically After the IPL completes the Sign On display is shown and the new PTFs are active Otherwise if you entered an N No for the Automatic IPL option the display shows the licensed programs for which PTFs are loaded and marked to be temporarily applied upon the next unattended IPL When this procedure completes the Program Temporary Fix display is shown 4 Ifthe Program Temporary Fix display is shown end all jobs on the system and perform a normal mode IPL to the B IPL source After the IPL completes the Sign On display is shown and the new PTFs are active 11 7 5 Verifying fix installation It is recommended that you develop the habit of verifying whether you were successful in installing your fixes In general if fixes did not install determination of whether the failure occurred during the load or apply phase of the installation is important If the system did not initially load it is possible the failure occurred during the load phase Click Help on the failure message and then press F10 Display messages in the job log Look for all escape messages that might identify the problem You should fix these errors and then try your request again After verification if the cover letter includes any post installation special instructions follow those instructions If the system initially loaded successfully but the PTFs did not apply complete the following st
113. four ports on I O expansion cards on each compute node Two separate communication paths to I O modules through dual midplane connections Two I O module bays per dual port for device redundancy For a sample connection topology between I O adapters and I O modules see Chapter 3 of IBM PureFlex System and IBM Flex System Products and Technology SG24 7984 Chapter 5 Planning 141 Implement technologies that provide automatic failover in the case of any failure This implementation can be done by using certain feature protocols that are supported by network devices with server side software Consider implementing the following technologies which can help you to achieve a higher level of availability in an IBM Flex System network solution depending on your network architecture Spanning Tree Protocol Layer 2 failover also known as Trunk Failover Virtual Link Aggregation Groups VLAG Virtual Router Redundancy Protocol VRRP Routing protocol such as RIP or OSPF Redundant network topologies The IBM Flex System Enterprise Chassis can be connected to the enterprise network in several ways as shown in Figure 5 1 on page 143 142 IBM Flex System p270 Compute Node Planning and Implementation Guide Topology 1 Enterprise Switch 1 seit Enterprise Chassis Switch 2 Trunk Compute node NO Topology 2 Enterprise Switch 1 emia Enterprise Chassis Enterprise Swit
114. if Servers Units Memory GB 9 42 171 37 T 1 Ty custom Groups a 9 42 171 37 Failed Authentication 0 0 Incorrect LDAP password Max Page Size Total 1 Filtered 1 Selected 0 i System Plans 500 AB HMC Management t Service Management cI Updates i asks Servers Connections Figure 7 103 Managed system add failing password authentication 282 IBM Flex System p270 Compute Node Planning and Implementation Guide To enter a new password complete the following steps 1 In the work pane area select the wanted server click the task selection then click Update Password or click Update Password from the Tasks options in the lower half of the work pane Hardware Management Console F a l hscroot Help Logoff Systems Management Servers View 4 ARA E Welcome w e 9 E C Fiter Tasks v Views E i systems Management Available PPRS a Select Name Status Processing a Reference Code B Servers Units Memory GB 9 42171 37 LF Custom Groups 9 42171 372 O Incorrect LDAP password Update Password Operations ib Syst PI Filtered 1 Selected 0 em Plans Configuration Connections a HMC Management Hardware Information Updates Serviceability t i Service Management v ee vV Y m Updates asks 9 42 171 37 l8 Update Password Connections Updates Operations amp Configuration
115. image catalog entry Add an entry in the image catalog for each media object that you imported or transferred from Fix Central You should add images in the same order as though you were installing them if they are part of a set as shown in the following example ADDIMGCLGE IMGCLG PTFCATALOGUE FROMFILE path iptfxxxx_x bin TOFILE iptfxxx_x bin Load the image catalog This step associates the virtual optical device to the image catalog Only one image catalog can be associated with a specific virtual optical device at any time Enter the following command to load the image catalog LODIMGCLG IMGCLG ptfcatalogue DEV OPTVRTO1 OPTION LOAD Verify that the images are in the correct order by using the following command VFYIMGCLG IMGCLG ptfcatalogue TYPE PTF SORT YES The system puts the images in the correct order By default the volume with the lowest index is mounted all the other volumes are loaded Use the Work with Catalog Entries WRKIMGCLGE command to see the order of the images 3 Enter GO PTF and press Enter to see to the PTF menu 4 Select Option 8 Install program temporary fix package and press Enter The Install Options for Program Temporary Fixes window opens as shown in Figure 11 35 on page 540 The window features the following selections For Device enter your optical or virtual optical device type which has the loaded fix media lf you want to automatically initially load yo
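In standard CL notation, the image catalog commands that are shown earlier in this procedure take the following form; the catalog name, device name, and file names are the ones used in this example and should be replaced with your own values:

ADDIMGCLGE IMGCLG(PTFCATALOGUE) FROMFILE('/path/iptfxxxx_x.bin') TOFILE(iptfxxxx_x.bin)
LODIMGCLG IMGCLG(PTFCATALOGUE) DEV(OPTVRT01) OPTION(*LOAD)
VFYIMGCLG IMGCLG(PTFCATALOGUE) TYPE(*PTF) SORT(*YES)
WRKIMGCLGE IMGCLG(PTFCATALOGUE)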
116. images The media device can be a physical drive that is attached to the front USB port of the Power compute node and assigned to the wanted virtual server or partition The physical optical device and physical media can be virtualized by the VIOS and presented to a virtual server or partition Images of optical media can be stored in a VIOS media library that is assigned to the virtual server or partition as a virtual optical device All of the supported systems that are listed in 5 1 2 Software planning on page 132 are available through DVD or CD media installation 462 IBM Flex System p270 Compute Node Planning and Implementation Guide Note IBM i installation can be performed from optical media The IBM i process is different from what is described here for AIX and Linux For more information about IBM i installation see Chapter 11 Installing IBM i on page 497 To perform a physical optical media installation a powered external USB optical drive is required Such a drive is not provided as standard with the chassis or the Power Systems compute node The optical drive is attached to the external USB port of the compute node 9 5 1 Preparing for a physical optical device With the physical device plugged into the front panel USB port it must be assigned to the wanted virtual server or partition FSM managed compute node When you are creating any type of virtual server with the FSM by using the virtual server wizard sel
117. in Figure 7 40 Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources E l Hosts Server 954 244 SHLO7 7 S2B B Server 7954 245 SHN107 782 g eee eee Performance Summary Search the table Search Operssngre ys terse Select Name Part Id Access State aa Blitsovroses 1 on Started O Bylitsoarxa 2 ox Stopped l Figure 7 40 Displaying single host virtual servers As shown in Figure 7 39 on page 231 clicking the server name in the content area list opens a new main tab that is labeled Resource Explorer as shown in Figure 7 41 This view shows the virtual servers that are associated with physical server or host It also lists other resources that are part of the physical server such as virtual Ethernet switches Resource Explorer Server 7954 245 5H107782B Computer System Search the table Search Select Mame Access Compliance Problems LED Status lt Communi ETHERNETO IBM 7954 24 Mork P or P or P or Communicatig O Eliso on ok ok P or Communicatig O Blitsoviosea on P or P or P or Communicatig Figure 7 41 Resource Explorer view of a Server object 232 IBM Flex System p270 Compute Node Planning and Implementation Guide Virtual Servers All virtual servers that are created under each individual host can be shown in a single table in the content area by clicking the Virtual Servers option in the navig
118. in Figure 8 82 on page 429 Instead click Next to proceed to the Load Source and Console settings 428 IBM Flex System p270 Compute Node Planning and Implementation Guide e Virtual Server Server 7S95 424 SH1058008 ASNNSNSN o gt Physical 1 0 Adapters Name r Memory Select one or more physical adapters from the list of available physical adapters Note Virtual servers that are assigned pH Processor Ethernet Fi Display only adapters that are currently available Storage selection ae a Select Location Code ae Description Virtual Storage USAF 001 FIREBIR P1 Ti Pcl to PCl bridge Adapters UFSAF 001 FIREBIR P1 c34 L1 Ethernet controller physical 1 0 ee ee ee Load source console LUYSAF 001 FIREBIR P1 T2 PCI E S45 Controller UFSAF 001 FIREBIR P1 c36 L1 Ethernet controller UFSAF 001 FIREBIR P1 c36 L2 Ethernet controller g Adapters that are currently assigned to other running virtual servers may not be available when this virtual server is star Summary UFSAF 001 FIREBIR P1i c34 L2 Ethernet controller OVO N N N Figure 8 82 IBM i virtual server physical adapter settings 10 In the Load Source and Console settings window choose the virtual SCSI as the Load Source as shown in Figure 8 83 If you are planning to perform an operating system installation set the type of virtual adapter that is planned in the Alternate restart resource list This can be vSCSI for optical or vFC for tape Clic
119. in fabric and speed With the ability to use Ethernet InfiniBand FC FCoE RoCE and iSCSI the Enterprise Chassis is uniquely positioned to meet the growing I O needs of the IT industry Figure 1 2 shows the IBM Flex System Enterprise Chassis Figure 1 2 The IBM Flex System Enterprise Chassis 1 4 2 Management IBM Flex System Manager The IBM Flex System Manager FSM is designed to optimize the physical and virtual resources of the IBM Flex System infrastructure while simplifying and automating repetitive tasks From easy system set up procedures with wizards and built in expertise to consolidated monitoring for all of your resources compute storage networking virtualization and energy the FSM provides core management functionality and automation The FSM is an ideal solution to reduce administrative expense and focus freed up resource on business innovation 6 IBM Flex System p270 Compute Node Planning and Implementation Guide The following features are available from a single user interface gt Intelligent automation Resource pooling Improved resource usage Complete management integration Simplified setup YY vV Yy The FSM is a high performance scalable systems management appliance with a preinstalled software stack As an appliance the FSM software runs on a dedicated compute node and is designed to provide a specific purpose configure monitor and manage IBM Flex System resources in multiple IBM Flex Sy
120. initial setup with a locally attached workstation A new CMM or a CMM that is reset via the pinhole has the following default settings gt IP address DHCP if no response then 192 168 70 100 gt Subnet 255 255 255 0 gt User ID USERID all capital letters gt Password PASSWORD all capital letters with a zero instead of the letter O and requires changing on the first use IBM PureFlex System defaults For PureFlex System configurations the following default settings are used gt Static IP address DHCP off gt IP address 192 168 93 100 gt Subnet 255 255 252 0 gt User ID USERID all capital letters gt Password PASSWORD all capital letters with a zero instead of the letter O and requires changing on the first use A pinhole reset of a CMM in a PureFlex configuration reverts the CMM to the non PureFlex defaults 7 2 4 CMM requirements At least one CMM is required for each chassis for control and management a second CMM is optional but recommended for redundancy reasons The CMM and all service processors on compute nodes FSP and IMMv2 storage nodes IMMv2 or I O modules are required to be on the same subnet For more information about the CMM when it is used to manage a Power based compute node see 7 7 Management by using a CMM on page 204 190 IBM Flex System p270 Compute Node Planning and Implementation Guide 7 3 IBM Flex System Manager This section gives a brie
121. isda CL Installation Profile Select use driver isk No Sasori swab epee sets Minimal Installs the smallest set of packages that allows the system to start up and to perform basic tasks The disk Usage is kept to a minimum You can install additional packages in the future using the standard method provided by each Linux distribution Minimal with X Installs all the packages included in Minimal but also installs the Window System a graphical environment that runs on ina Chis option is useful for Figure 12 12 Installation settings for the target system Chapter 12 Installing Linux 565 17 As shown in Figure 12 13 select the available workloads to install depending upon your requirements and click Next IBM Installation Toolkit for PowerLinux Workloads to be installed Available workloads Guit Prev Mest Workload File and Print Server LAMP Server Mail Server Network Infrastructure Description This workload prepares your server to work optimally as a file and print server setting up the best configuration options for Samba and Common Unix Printing System CUPS This workload prepares your server to work optimally as a LAMP server setting up the best configuration options for Apache MySOL and PHP This workload prepares and optimizes your server with a high performance configuration for sending and receiving e mails using Postfix and Dovecot or Cyrus This workload prepares
122. logical partition and the I O Server VIOS The storage virtualization is accomplished by pairing two adapters a virtual SCSI server adapter on the VIOS and a virtual SCSI client adapter on IBM i Linux or AIX partitions The combination of Virtual SCSI and VIOS provides the opportunity to share physical disk adapters ina flexible and reliable manner Virtual Fibre Channel A virtual Fibre Channel adapter is a virtual adapter that provides client logical partitions with a Fibre Channel connection to a storage area network SAN through the VIOS logical partition The VIOS logical partition provides the connection between the virtual Fibre Channel adapters on the VIOS logical partition and the physical Fibre Channel adapters on the managed system N_Port ID virtualization NPIV is a standard technology for Fibre Channel networks You can use NPIV to connect multiple logical partitions to one physical port of a physical Fibre Channel adapter Each logical partition is identified by a unique worldwide port name WWPN which means that you can connect each logical partition to independent physical storage on a SAN Enabling NPIV To enable NPIV on a managed system you must have VIOS V2 1 or later NPIV is only supported on 8 Gb Fibre Channel and Converged Network adapters on a Power Systems compute node IBM Flex System p270 Compute Node Planning and Implementation Guide You can configure only virtual Fibre Channel adapters on client logical
123. methods that are described in Chapter 9 Operating system installation methods on page 437 IBM i not supported IBM i is not supported in a full system partition on Power Systems compute nodes IBM i must be in a virtual server or LPAR that is serviced by a VIOS 8 8 1 Creating a full system partition with the FSM UI 430 The process to create a full system partition is similar to the process that is described in Creating the virtual server on page 358 using the FSM GUI Complete the following steps 1 Complete the steps in Creating the virtual server on page 358 to reach the point that is shown in Figure 8 8 on page 359 The window that is shown in Figure 8 84 on page 431 opens IBM Flex System p270 Compute Node Planning and Implementation Guide Create Virtual Server Server 7954 244 SMHLO77E2B Name Name This wizard helps you create and assign resources to a virtual server Host name Server 7954 24H SHLO7 726 Wirtual server name full_sys_par Wirtual server ID 2 Environment Als Linus C Suspend capable Assign all resources to this virtual server Fi Enable virtual trusted platform module VTPM Warning The VTAM key is set to default kepy Figure 8 84 Assigning all resources to a full system partition with FSM 2 Complete the fields that are shown in Figure 8 84 with the following information Virtual server name Assign a node a name such as full_sys_par
124. module Management of the Enterprise Chassis with the CMM and FSM provides the most comprehensive management over the chassis and all components Other functions such as VM Control Storage Management Update Manager and operating systems monitoring and management are also included in this combination Copyright IBM Corp 2013 All rights reserved 183 Management that uses the CMM with an HMC provides basic management of the chassis complete control of all PowerVM functionality and management of the Power based compute node These functions are available across all Power based compute nodes in the same chassis with the HMC managing up to 48 Power compute nodes Management with a CMM and IVM provides basic management of the chassis and control of most of the PowerVM functionality IVM can manage only a single Power based compute node therefore each node is independently managed Important Note These three methods of managing a Power based compute node are mutually exclusive only one platform manager type can manage a node at a time An FSM managed chassis that contains Power nodes cannot use any other platform manager to manage Power nodes in the same chassis This chapter includes the following topics 7 1 Management network on page 185 7 2 Chassis Management Module on page 187 7 3 IBM Flex System Manager on page 191 7 4 IBM HMC on page 196 7 5 IBM IVM on page 199 7 6 Comparing F
125. node is selected before the function can be applied 216 IBM Flex System p270 Compute Node Planning and Implementation Guide Clicking one of the names in the Device Name column opens a window with details about that server as shown in Figure 7 22 Events General Boot Mode Severity Source Informational Node_04 Informational Node_04 Informational Node_04 Informational Node_04 Informational Node_04 Informational Node_04 1 6 of 6 items 4 OK Cancel Hardware Firmware Sequence 00000524 00000523 00000522 00000521 00000520 0000051e x Power Environmentals IO Connectivity SOL status Boot Sequence LEDs Date EventID Message Node SN Y032BG1AV01W messag Mar 9 2012 06 00 AM 77777701 blade mgmt subsystem health Lov present more Node SN Y032BG1AV01W messag Mar 9 2012 06 00 AM 77777701 Power power off more Node SN Y032BG1AV01W messag Mar 9 2012 06 00 AM TTTTTTO1 Performance Mode disabled ma Node SN Y032BG1AV01W system Mar 9 2012 00 00 A4 00216002 reset Persistent events will be rege i The device SN Y032BG1AV01W ha Mar 9 2012 06 00 AM 0e002104 nodebay04 more Mar 9 2012 05 54 AM 0e002004 Hardware inserted in nodebay04 10 25 50 100 All mm p Figure 7 22 Compute Nodes tab Serial Over LAN Serial Over LAN SOL provides a virtual console session to the first partition or virtual server of a Power compute node IVM requires the
126. of 4 This value change can be changed if needed Set the Port Virtual Ethernet PVID option to 4094 Do not select the IEEE 802 1Q capable adapter or Use this adapter for Ethernet bridging options Click OK 366 IBM Flex System p270 Compute Node Planning and Implementation Guide Virtual Ethernet Create Adapter Specify an adapter ID and virtual Ethernet for this adapter Adapter Id 4 PBort Virtual Ethernet 4094 VSI Type Id VSI Type Version VSI Manager Id IEEE Settings Select this option to allow additional virtual LAN IDs for the adapter E IEEE 802 19 compatible adapter Maximum number of VLANs 20 Additional VLAN IDs 2 20 48 Shared Ethernet Settings Select Ethernet bridging to link bridge the virtual Ethernet to a physical network F Use this adapter for Ethernet bridging Priority filor2 p Advanced virtual ethernet configuration Figure 8 16 Create Adapter window 6 Review the virtual Ethernet adapters that were modified or added as shown in Figure 8 17 on page 368 Click Next to save the settings and move on to the Virtual Storage Adapters window Chapter 8 Virtualization 367 Ethernet Configure the virtual network adapters for the virtual server Physical I O network adapters can be selected later in the Physical I O page of this wizard Two virtual Ethernet adapters will be created by default however you can add edit or remove adapters to suite your needs
127. partitions that run the following operating systems gt AIX V6 1 Technology Level 2 or later AIX 5L V5 3 Technology Level 9 or later IBM i V6 1 1 V7 1 or later SUSE Linux Enterprise Server 11 or later RHEL 5 5 6 or later YY vV Yy Systems that are managed by the Integrated Virtualization Manager a Systems Director Management Console or IBM Flex System Manager can dynamically add and remove virtual Fibre Channel adapters from logical partitions Figure 8 2 shows the connections between the client partition virtual Fibre Channel adapters and external storage Client logical Client logical partition 2 partition 1 Client virtual Client virtual fibre channel fibre channel adapter adapter Virtual I O Server ja arver virtual fibre i channel adapter i Server virtual fibre channel adapter 7 Storage Area Network Figure 8 2 Connectivity between virtual Fibre Channel adapters and external SAN devices Chapter 8 Virtualization 345 Virtual serial adapters TTY console Virtual serial adapters provide a point to point connection from one logical partition to another or the IBM Flex System Manager to each logical partition on the managed system Virtual serial adapters are used primarily to establish terminal or console connections to logical partitions Each partition must have access to a system console Tasks such as operating system installation network setup and certain problem an
128. planning and implementation of a converged Fibre Channel and Ethernet network that uses FCoE see Chapter 6 Converged networking on page 163 5 5 Configuring redundancy Your environment might require continuous access to your network services and applications Providing highly available network resources is a complex task that involves the integration of multiple hardware and software components This availability is required for network and SAN connectivity 5 5 1 Network redundancy Network infrastructure availability can be achieved by implementing certain techniques and technologies Most of these items are widely used standards but several are specific to the IBM Flex System Enterprise Chassis This section describes the most common technologies that can be implemented in an IBM Flex System environment to provide a highly available network infrastructure A typical LAN infrastructure consists of server NICs client NICs and network devices such as Ethernet switches and the cables that connect them The potential failures in a network include port failures on switches and servers cable failures and network device failures The following guidelines should be followed to provide high availability and redundancy gt Avoid or minimize single points of failure that is provide redundancy for network equipment and communication links The IBM Flex System Enterprise Chassis has the following built in redundancy Two or
129. savings when the partition is under used Chapter 4 Product information and technology 123 The maximum achievable clock speed in this situation can vary because of factors such as available power to the compute node and cooling capability in the chassis If DPS infringes upon power or cooling capability to the compute node clock speed is dynamically throttled back to stay within the confines of such capabilities DPS mode is mutually exclusive with Static Low Power mode Only one of these modes can be enabled at a time 4 11 3 Energy consumption estimation An estimation of the energy consumption for a certain configuration can be calculated by using the IBM Power Configuration for Flex system tool which is available at this website http ibm com systems bladecenter resources powerconfig html In this tool select the type and model for the system enter several details of the configuration and a wanted CPU usage result The tool shows the estimated energy consumption the waste heat at idle the wanted usage and the full usage 4 12 Anchor card As shown in Figure 4 30 on page 125 the anchor card also known as a management card in the product publication contains the smart vital product data chip that stores system specific information The pluggable anchor card provides a means for this information to be transferable from a faulty system board to the replacement system board Before the service processor knows what system i
130. selected rows in the table of physical adapters represent the adapters currently assigned to the partition. All unselected rows represent adapters that have not been assigned. You can change the adapter assignments for the partition by deselecting existing items or selecting items that are not currently assigned. The table lists the physical location code and description of each adapter; in this example, U78AE.001.W2Z500R2-P1-T1 is the USB Enhanced Host Controller.
Figure 9-31 Using IVM to assign the USB port to an existing partition configuration
9.5.2 Preparing for a physical optical device virtualized by the VIOS
The VIOS can virtualize a physical optical device to another virtual server or partition that it services. The VIOS must own the USB device, and a virtual SCSI connection is required between the VIOS and client virtual server or partition.
Dual VIOS: The VIOS cannot virtualize an optical device to another VIOS virtual server or partition.
This connection requires a partner pair of virtual SCSI adapters: one for the client partition and one for the VIOS partition. The virtual SCSI adapters are used to attach disks to the client virtual server or partition. The VIOS side of this pair is represented by a vhostx device. The vhost device points to, or is associated with, a client virtual server or partition. The association can be determined by using the lsmap -all command.
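As a hedged illustration only (the device names, location codes, and LUN values here are hypothetical and not taken from the example environment in this book), the VIOS lsmap command reports which client partition a given vhost adapter serves and which backing device, such as the optical drive cd0, is exported through it:

$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U7954.24X.107782B-V1-C102                    0x00000002

VTD                   vtopt0
Status                Available
LUN                   0x8100000000000000
Backing device        cd0

Running lsmap -all produces this mapping information for every vhost adapter on the VIOS, which is how the association that is described above can be confirmed.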
131. speed options to determine what processor configuration most closely matches your needs IBM provides measurements for each operating system Relative Performance rperf for AIX and spec_int2006 for SLES Linux on Power Compute Nodes that can be used to compare the relative performance of Power Systems in absolute values The charts can be found at this website http www ibm com systems power hardware reports system perf html IBM i Commercial Processing Workload CPW performance metrics charts can be found at this website http www 03 ibm com systems power software i management performanc e resources html gt Optical media The IBM Flex System Enterprise Chassis and the Enterprise Chassis do not provide CD ROM or DVD ROM devices as the BladeCenter chassis do If you require a local optical drive use an external USB drive Ensure that any optical device is low power usage or has its own external power source because the USB port might not provide sufficient power for all devices gt Interoperability For interoperability of Flex System components see the Flex System Interoperability Guide which can be found at this website http www redbooks ibm com fsig Chapter 5 Planning 131 5 1 2 Software planning 132 Determine the primary uses for your Power Systems compute node and how it is set up Will you be using full system partition or a virtualized environment that includes virtual servers formerly named logical par
132. ssic/interoperability.wss
The Storage selection page of the Create Virtual Server wizard explains that virtual storage allows client partitions to share physical devices that are used to access block storage. To ease storage management, the console can automatically manage the virtual storage adapters that are required for the virtual server, or you can individually customize the virtual storage adapters. The page asks whether you want the virtual storage adapters to be automatically managed by the console (No, I want to manage the virtual storage adapters for this Virtual Server, or Yes, Automatically manage the virtual storage adapters for this Virtual Server) and which type of storage to use (Virtual Disks, Physical Volumes, or Fibre Channel).
Figure 11-7 Creating a Virtual Server: Storage selection
7. In our example, we are using physical volumes that we mapped to the VIOS host. We selected Physical Volumes and then clicked Next.
8. As shown in Figure 11-8, in the Storage page select the hdisks that you want the IBM i client to use and click Next. You can select any number of storage devices that are not currently assigned to a virtual server.
133. system_name is the host name or IP address of the FSM node:
https://system_name
A login window opens, as shown in Figure 8-4 on page 355.
Figure 8-4 IBM Flex System Manager login window
2. Enter a valid FSM user ID and password and click Log in. The Welcome window opens.
3. Click Home and the main window opens, as shown in Figure 8-5. The Home page provides tabs for performing initial setup tasks, viewing or activating plug-ins, performing administration tasks, and accessing additional information, including: Check and Update Flex System Manager (obtain and install updates for IBM Flex System Manager, which requires a restart of the FSM); Select Chassis to be Managed (view all chassis and Flex System Managers in your environment and select which to manage); Configure Chassis Components (configure basic settings for chassis components, including compute nodes, storage nodes, and I/O modules); and Deploy Compute Node Images (for Red Hat Enterprise Linux 6.2-6.4, Red Hat Enterprise Linux 6.2-6.4 with Kernel-based Virtual Machine (KVM), and VMware vSphere 5.1 with IBM Customization, you can deploy the ima
134. tape devices: None; Physical adapters: None.
Figure 8-72 IVM Create Partition: Summary window
The View/Modify Partitions view that is shown in Figure 8-73 is updated with the new partition. The new partition is now ready to be activated and installed. The System Overview section of the view reports the total system memory (32 GB), total processing units (24), memory available (22.62 GB), processing units available (21.2), reserved firmware memory (1.38 GB), processor pool utilization (0.05 / 0.2), and system attention LED (Inactive); the Partition Details section lists the existing itsoVIOS6A partition (Running) and the new itsolpar2 partition.
Figure 8-73 Updated IVM View/Modify Partitions view
8.7 Creating an IBM i virtual server
You can install the IBM i operating system in a client virtual server of a VIOS. Begin by completing the steps that are described in 8.5, "Creating a VIOS virtual server" on page 349 to create the VIOS. For more information about installing IBM i in a virtual server, see the topic "Getting started with IBM i on a PureFlex Power node", which is available at this website:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM%20i%20Technology%20Updates/page/IBM%20i%20on%20a%20Flex%20Compute%20N
135. the environment for future commands to always be to the same blade slot number and then issues the console command When the console command is run the virtual terminal session to the first LPAR is opened No other authentication is required to open the console however depending on the operational state of the LPAR an operating system prompt might request login credentials If the env command was used the prompt changes to indicate the target blade slot number as shown in Figure 7 147 To revert to the system prompt use the env command with no other parameters system gt env T blade 10 OK system blade 10 gt system blade 10 gt env OK system gt Figure 7 147 Setting the environment to a blade slot for additional CMM commands IBM Flex System p270 Compute Node Planning and Implementation Guide If SOL is not enabled at the node and globally for the chassis the message that is shown in Figure 7 148 is displayed when you are attempting the console command by using either of the two options system gt env T blade 10 OK system blade 10 gt console SOL on blade is not enabled system blade 10 gt env OK system gt console T blade 10 SOL on blade is not enabled system gt Figure 7 148 SOL console command failure when SOL is not enabled Press ESC then Shift 9 to exit the SOL console session and return to the CMM prompt Opening a virtual console terminal for IVM LPARs You can open a virtual termina
136. the method to use to acquire the updates. If the IBM Flex System Manager server does not have an Internet connection, select the Import updates from the file system option to download and import the updates manually. The window offers two choices: Check for updates (Internet connection required) and Import updates from the file system. After you download the updates, type the path to the directory or archive file and then click OK to import the updates. The updates to import must reside on the management server (for example, /home/USERID/power). Clicking OK launches or schedules an import task that copies the updates from the given path to the update library.
Figure 7-65 Importing the update
3. When the OK button is clicked, the job scheduler opens and asks whether to run the job now or schedule it for the future. The option to display the running job is shown. For import jobs, it is good practice to verify that an update was processed and the job completed without errors, as shown in Figure 7-66. In this example, the Active and Scheduled Jobs properties for the Import Updates job (June 20, 2013 2:19:03 PM EDT) show that the job instance from 6/20/13 at 2:19 PM has a status of Complete.
137. the node For virtual storage access virtual SCSI or NPIV can be used Virtual SCSI adapters are configured in a client server relationship with the client adapter in the client virtual server that is configured to refer to the server adapter that is configured in the VIOS The server adapter in the VIOS can be configured to refer to one client adapter or allow any client to connect NPIV configuration differs in that the VIOS serves as a pass through module for a virtual Fibre Channel adapter in the client virtual server The SAN administrator assigns LUNs to the virtual Fibre Channel adapters in the virtual servers as they do for a real Fibre Channel adapter The WWPNs are generated when the virtual Fibre Channel adapter is defined for the client This configuration can be provided to the SAN administrator to ensure the LUNs are correctly mapped in the SAN For more information about planning and configuring virtualized environments including configuring for availability see the following publications gt IBM PowerVM Virtualization Introduction and Configuration SG24 7940 348 IBM Flex System p270 Compute Node Planning and Implementation Guide gt IBM PowerVM Best Practices SG24 8062 gt IBM PowerVM Virtualization Managing and Monitoring SG24 7590 8 5 Creating a VIOS virtual server In this section we describe creating a VIOS virtual server Only an AIX or Linux virtual server can be created on a compute node but the number of
138. therefore does not impose the same limitation. LP or VLP DIMMs can be used with SSDs to provide all available memory options.
4.8.2 Local storage and cover options
Local storage options are shown in Table 4-10. None of the available drives are hot-swappable. If you use local drives, you must order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed. As you see in Figure 4-16 on page 99, the local drives (HDD or SSD) are mounted to the top cover of the system. When you are ordering your Power Systems compute nodes, choose which cover is appropriate for your system: SSD, HDD, or no drives.
Table 4-10 Local storage options
4.8.3 Local drive connection
On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in more detail in Figure 4-17.
Figure 4-17 Connector on drive interposer card mounted to server cover
On the system board, the connection for the cover's drive interposer is shown in Figure 4-18.
Figure 4-1
139. to the p270 Compute Node We introduce the related Flex System offerings and describe the compute node in detail We then describe planning and implementation steps including converged networking management virtualization and operating system installation This book is for customers IBM Business Partners and IBM technical specialists who want to understand the new offerings and plan and implement an IBM Flex System installation that involves the Power Systems compute nodes Copyright IBM Corp 2013 All rights reserved xiii Authors This book was produced by a team of specialists from around the world working at the International Technical Support Organization Raleigh Center David Watts is a Consulting IT Specialist at the IBM ITSO Center in Raleigh He manages residencies and produces IBM Redbooks publications about hardware and software topics that are related to IBM Flex System IBM System x and BladeCenter servers and associated client platforms He has authored over 250 books papers and Product Guides He holds a Bachelor of Engineering degree from the University of Queensland Australia and has worked for IBM in the United States and Australia since 1989 David is an IBM Certified IT Specialist and a member of the IT Specialist Certification Review Board Kerry Anders is a Consultant for IBM POWER systems and IBM PowerVMg in IBM Lab Services that is based in Austin Texas He is part of the Lab Service core te
140. two ports per adapter are enabled. Figure 4-23 shows the IBM Flex System CN4058 8-port 10Gb Converged Adapter.
Figure 4-23 IBM Flex System CN4058 8-port 10Gb Converged Adapter
For more information about this adapter, see the IBM Redbooks Product Guide that is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0909.html?Open
4.9.8 IBM Flex System EN4132 2-port 10Gb RoCE Adapter
The IBM Flex System EN4132 2-port 10Gb RoCE Adapter provides high-bandwidth RDMA over Converged Ethernet (RoCE) for low-latency application requirements. Applications such as clustered DB2 and high-frequency trading applications can achieve significant throughput and latency improvements, which results in faster access and real-time response. By using Data Center Bridging (DCB) capabilities, RoCE provides efficient, low-latency RDMA services over Layer 2 Ethernet.
The IBM Flex System EN4132 2-port 10Gb RoCE Adapter has the following features and specifications:
> Based on Mellanox ConnectX-2 technology with a single ASIC
> CPU offload of transport operations
> CORE-Direct and GPUDirect application offload
> End-to-end QoS and congestion control
> Hardware-based I/O virtualization
> Ethernet encapsulation
Figure 4-24 shows the IBM Flex System EN4132 2-port 10Gb RoCE Adapter.
Figure 4-24 IBM Flex System EN4132 2-port 10Gb RoCE Adapter
141. user interfaces  195
7.3.3 FSM requirements  196
7.4 IBM HMC  196
7.4.1 HMC overview  197
7.4.2 HMC user interfaces  197
7.4.3 HMC requirements  198
7.5 IBM IVM  199
7.5.1 IVM overview  199
7.5.2 IVM user interfaces  200
7.5.3 IVM requirements  201
7.6 Comparing FSM, HMC, and IVM management  202
7.7 Management by using a CMM  204
7.7.1 Accessing the CMM  204
7.7.2 Connecting a Power compute node to the CMM  208
7.7.3 Power compute node management  209
7.7.4 Service and Support option  220
7.8 Management by using FSM  224
7.8.1 Accessing the FSM  224
7.8.2 Connecting a Power compute node to the FSM  226
7.8.3 Manage Power Systems Resources navigation basics  229
7.8.4 Managing Power compute node basics  238
7.8.5 Service and Support Manager  255
7.9 Management by using an HMC  265
7.9.1 Accessing an HMC  265
7.9.2 Connectin
142. The HMC Virtual Adapters window now lists four adapters for the partition: Ethernet 2 (Required: Yes), Ethernet 3 (Required: Yes), Server Serial 0 (Any Partition), and Server Serial 1 (Any Partition).
Figure 8-39 HMC Virtual Adapters window updated, showing second virtual Ethernet adapter
6. Repeat steps 1 and 2 and use the following values, as shown in Figure 8-40 on page 390:
Accept the default Adapter ID of 4. This value can be changed if needed.
Set the Port Virtual Ethernet (also referred to as PVID) option to 4094.
Select the This adapter is required for virtual server activation option.
Clear the IEEE 802.1Q capable adapter option to allow future dynamic adds of VLANs.
Clear the Use this adapter for Ethernet bridging option.
Figure 8-40 Create Virtual Ethernet Adapter window on the HMC (Adapter ID 4, VSwitch ETHERNET0(Default), Port Virtual Ethernet (VLAN ID) 4094, activation requirement, IEEE 802.1Q setting, and Ethernet bridging setting)
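The same kind of adapter definition can also be scripted from the HMC command line. The following sketch is illustrative only: the managed system and partition names are placeholders, and the attribute names and values should be verified against the chhwres documentation for your HMC level. It dynamically adds a virtual Ethernet adapter in slot 4 with PVID 4094 and no IEEE 802.1Q support to a running virtual server:

chhwres -m Server-7954-24X-SN107782B -r virtualio --rsubtype eth \
  -o a -p itsoVIOS1 -s 4 -a "port_vlan_id=4094,ieee_virtual_eth=0"

Because chhwres changes only the running configuration, the corresponding profile (the virtual_eth_adapters attribute) must also be updated, or the current configuration saved back to the profile, for the adapter to survive a profile-based restart.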
143. will not synchronize because the pending and current minimum or maximum values are not synchronized; restart your partition in order to complete the synchronization. The view also lists the latest commands that were run on the partition (in this example, a synchronization at 8:02:48 PM on 10/23/13 that completed successfully with return code 0).
Figure 8-63 IVM Resource Synchronization Details view
In this example, the node is restarted. When the management partition becomes active, the GUI can be used for more setup of the VIOS, such as shared Ethernet adapter (SEA) creation, other partition creation, and virtual storage configuration. (A hedged command-line sketch of SEA creation follows at the end of this section.)
8.6 Creating an AIX or Linux virtual server
Creating an AIX or Linux virtual server is similar to creating a VIOS virtual server. Use the same process that is described in 8.5, "Creating a VIOS virtual server" on page 349, but with some differences. The following differences are featured between creating a VIOS and an AIX or Linux virtual server:
> The Environment option in the initial window is set to AIX/Linux.
> Virtual Ethernet adapters are configured with Port VLAN values that match the Port VLAN values, or other VLANs, that are configured on the VIOS virtual Ethernet adapters.
> Virtual SCSI or virtual Fibre Channel (NPIV) adapters are configured to point to, or pair up with, the matching VIOS-side adapters by using the connecting adapter ID.
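As a hedged illustration of the SEA setup that is mentioned above (the adapter names ent0, ent1, ent4, and ent5 are hypothetical; confirm the physical and virtual trunk adapter names on your own VIOS with lsdev before running anything), a shared Ethernet adapter can also be created from the VIOS restricted shell. The first command optionally aggregates two physical ports for redundancy; the second bridges the resulting adapter to the virtual trunk adapter that carries PVID 4091:

$ mkvdev -lnagg ent0,ent1 -attr mode=8023ad      # link aggregation, creates (for example) ent5
$ mkvdev -sea ent5 -vadapter ent4 -default ent4 -defaultid 4091

The resulting SEA device can then be given an IP address with mktcpip, or left unconfigured if the VIOS management interface uses a separate adapter.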
144. without VIOS
5.9.2 Virtual servers with VIOS
Chapter 6. Converged networking
6.1 Introduction
6.1.1 Fibre Channel over Ethernet
6.1.2 FCoE protocol stack
6.1.3 Converged Network Adapters
6.1.4 Fibre Channel Forwarders
6.1.5 GO DOM IW DGS
6.2 Configuring an FCoE network with the CN4093
6.2.1 FCoE VLANs
6.2.2 Administration interface for the CN4093
6.2.3 Configuring for Fibre Channel Forwarding
6.2.4 Creating zoning on CN4093 with CLI
Chapter 7. Power node management
7.1 Management network
7.2 Chassis Management Module
7.2.1 CMM overview
7.2.2 CMM user interfaces
7.2.3 CMM default network information
145. 0 Compute Node Planning and Implementation Guide 3 6 Cooling The flow of air in the Enterprise Chassis follows a front to back cooling path where cool air is drawn in at the front of the chassis and warm air is exhausted to the rear Air movement is controlled by hot swappable fan modules in the rear of the chassis and a series of internal dampers The cooling is scaled up as required based on the number of nodes installed The number of cooling fan modules that is required for a number of nodes is shown in Table 3 6 on page 71 Chassis cooling is adaptive and node based rather than chassis based Inputs into the cooling algorithm are determined from the following factors gt Node configurations gt Power monitor circuits gt Component temperatures gt Ambient temperature With these inputs each fan module has greater independent granularity in fan speed control This results in lower airflow volume CFM and lower cooling energy that is spent at the chassis level for any configuration and workload Chapter 3 Introduction to IBM Flex System 69 Figure 3 7 shows the location of the fan modules Fan
146. At a minimum, the required information that is marked by an asterisk must be completed before you click Next. Figure 7-76 shows the request for the system location information. The System location page of the Getting Started with Electronic Service Agent wizard asks you to provide default information about the physical locations of your systems (country or region, street address, city, state or province, postal code, building, floor, room number, row, aisle, displaced height, altitude, and other information); this information can be overridden for specific systems by clicking Resource Explorer, selecting a system, and clicking Location under the Additional Properties heading.
Figure 7-76 Getting started with ESA: System location window
4. Enter the required information and click Next to continue to the Connection page, as shown in Figure 7-77. An Internet connection is required to use this function; the page asks you to specify how the Internet should be accessed and the settings for the Internet connectivity that IBM Flex System Manager uses to obtain updates, including the method to use to access the Internet.
147. 0 SS1 option 33 Portable App Solutions Environment 5770 DG1 IBM HTTP Server for i 5761 JV1 IBM Developer Kit for Java 5761 JV1 option 11 Java SE 6 32 bit 5 After all required LICPGMs are selected press Enter and the Confirm Install of Licensed Programs window that shows all LICPGMs that are selected opens Press Enter to confirm your choices 530 IBM Flex System p270 Compute Node Planning and Implementation Guide 6 The Install Options menu opens as shown in Figure 11 31 OPT0O1 is the default device description DEVD on a base operating system and must be changed if the DEVD was renamed Select Option 2 for Nonaccepted agreement otherwise LICPGM installations are skipped Leave Automatic IPL at the default value of N Install Options System E1277E3B Type choices press Enter Installation device OPIO1 Name Objects to install 1 1 Programs and language objects 2 Programs 3 Language objects Nonaccepted agreement 2 1 Do not install licensed program 2 Display software agreement Automatic IPL N Y Yes N No F3 Exit F12 Cancel Figure 11 31 Licensed Programs Install Options menu Chapter 11 Installing IBMi 531 7 Figure 11 32 shows the status of the licensed programs and language objects as they are installed on the system Installing Licensed Programs System E1277E3B Licensed programs processed 3 of 9 Licensed Program Option Description 5770SS1 3 Extended Base Directo
148. 430
8.8.1 Creating a full system partition with the FSM UI  430
8.8.2 Creating a full system partition with the HMC UI  432
Chapter 9. Operating system installation methods  437
9.1 Comparison of methods  438
9.2 Accessing System Management Services  438
9.3 Installios installation of the VIOS  440
9.3.1 Interactive installation  440
9.3.2 CLI installation  445
9.4 Network Installation Management method  446
9.5 Optical media installation  462
9.5.1 Preparing for a physical optical device  463
9.5.2 Preparing for a physical optical device virtualized by the VIOS  467
9.5.3 Using a VIOS media repository  468
9.5.4 Using the optical device as an installation source  472
9.6 TFTP network installation for Linux  478
9.6.1 SUSE Linux Enterprise Server 11  479
9.6.2 Red Hat Enterprise Linux 6  485
9.7 Cloning methods  487
Chapter 10. Installing VIOS and AIX  489
10.1 Installing VIOS  490
10.2 Installing AIX
149. 22
2.4.1 Available Express configurations  22
2.4.2 Chassis  26
2.4.3 Compute nodes  27
2.4.4 IBM FSM  27
2.4.5 PureFlex Express storage requirements and options  28
2.4.6 Video, keyboard, mouse option  32
2.4.7 Rack cabinet  33
2.4.8 Available software for Power Systems compute nodes  33
2.4.9 Available software for x86-based compute nodes  34
2.5 IBM PureFlex System Enterprise  35
2.5.1 Enterprise configurations  35
2.5.2 Chassis  39
2.5.3 Top-of-rack switches  40
2.5.4 Compute nodes  40
2.5.5 IBM FSM  41
2.5.6 PureFlex Enterprise storage options  41
2.5.7 Video, keyboard and mouse option  44
2.5.8 Rack cabinet  45
2.5.9 Available software for Power Systems compute node  46
2.5.10 Available software for x86-based compute nodes  46
2.6 Services for IBM PureFlex System Express and Enterprise  47
2.6.1 P
150. 1.1.1 IBM PureFlex System  3
1.2 Choosing an IBM PureFlex System or IBM Flex System  4
1.2.1 PureFlex System  4
1.3 IBM Flex System p270 Compute Node  5
1.4 Flex System components  5
1.4.1 IBM Flex System Enterprise Chassis  5
1.4.2 Management: IBM Flex System Manager  6
1.4.3 Power Systems virtualization management: FSM, HMC, and IVM  8
1.4.4 Chassis I/O modules  9
1.4.5 Compute nodes  10
1.4.6 Storage  11
1.4.7 Networking  11
1.4.8 Infrastructure  12
1.5 This book  13
Chapter 2. IBM PureFlex System  15
2.1 Introduction  16
2.2 Components  17
2.2.1 Configurator for IBM PureFlex System  19
2.3 PureFlex Solutions  20
2.3.1 PureFlex Solution for IBM i  20
2.3.2 PureFlex Solution for SmartCloud Desktop Infrastructure  21
2.4 IBM PureFlex System Express
151. 1 SUSE Linux Enterprise Server 11
Complete the following steps when you are using SLES 11:
1. Obtain the distribution ISO file and copy it to a work directory of the installation server. We configure a Network File System (NFS) server (this server can be the installation server or another server) and mount this shared directory from the target virtual server to unload the software.
2. On the installation server, install the tftp and the dhcpd server packages (we use dhcpd only to run bootp for a specific MAC address).
3. Copy into the tftpboot directory (the default for SUSE Linux Enterprise Server 11 is /tftpboot) the netboot image and the yaboot executable file from the DVD directory /sles11/suseboot. The following files are used:
The netboot image is named inst64.
The yaboot executable file is named yaboot.ibm.
4. Boot the target virtual server and access SMS (see Figure 9-49) to retrieve the MAC address of the Ethernet interface to use for the installation.
Version AF773_033
SMS 1.7 (c) Copyright IBM Corp. 2000, 2008 All rights reserved.
Main Menu
Select Language
Setup Remote IPL (Initial Program Load)
Change SCSI Settings
Select Console
Select Boot Options
Type menu item number and press Enter or select Navigation key: 2
Figure 9-49 Setup remote IPL selection
The MAC address that is shown in Figure 9-50 is the Hardware Address.
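The dhcpd side of this setup follows the same pattern as the Red Hat example that is shown later in Figure 9-60. The following /etc/dhcpd.conf sketch is illustrative only: the subnet, IP addresses, host name, and MAC address are hypothetical placeholders for your environment, and the file names assume the inst64 and yaboot.ibm files that were copied to /tftpboot in step 3:

allow bootp;
subnet 192.168.20.0 netmask 255.255.255.0 {
  host sles11-vs1 {
    fixed-address 192.168.20.13;           # IP address given to the target virtual server
    hardware ethernet xx:xx:xx:xx:xx:xx;   # MAC address retrieved from SMS in step 4
    next-server 192.168.20.11;             # TFTP/installation server
    filename "yaboot.ibm";                 # boot loader served from /tftpboot
  }
}

After dhcpd and the TFTP server are restarted, the virtual server can be booted from the network through SMS; yaboot (driven by its yaboot.conf file) then loads the inst64 netboot image, and the SLES installation can continue from the NFS-exported directory.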
152. 10.1.9.16, along with the node's IPv6 addresses (fe80::...).
Figure 7-18 Reviewing the current node IP configuration with the view option
To edit or configure the IPv4 and IPv6 addresses, click the entry in the Device Name column (as shown in Figure 7-18 on page 213) of the wanted node, and then the appropriate tab in the configuration window, as shown in Figure 7-19. Enter the wanted network configuration information and click Apply. In this example, the IP Address Configuration window for Node 06 (p270) shows the current IPv4 configuration for network interface eth1 (configuration method: Use Static IP Address; IP address 10.1.9.16; subnet mask 255.255.255.0; gateway address 10.1.9.1) and provides fields for a new static IP address, subnet mask, and gateway address.
Figure 7-19 Configuring IPv4 information for an FSP
Figure 7-20 shows the confirmation message. Click Close on the confirmation message and Close on the configuration window to return to the Component IP Configuration page. The message reads: The IPv4 address information was successfully updated. The new IP will take effect in a few minutes.
Figure 7-20 IP configuration change confirmation
The configuration changes take several moments to occur, and the Component IP Configuration view must be manually refreshed to update the Vi
153. 106 4 9 6 IBM Flex System EN4054 4 port 10Gb Ethernet Adapter 108 4 9 7 IBM Flex System CN4058 8 port 10Gb Converged Adapter 110 4 9 8 IBM Flex System EN4132 2 port 10Gb RoCE Adapter 112 4 9 9 IBM Flex System IB6132 2 port QDR InfiniBand Adapter 113 4 9 10 IBM Flex System FC3172 2 port 8Gb FC Adapter 114 4 9 11 IBM Flex System FC5052 2 port 16Gb FC Adapter 116 4 9 12 IBM Flex System FC5054 4 port 16Gb FC Adapter 117 4 10 System management 00 0 cee ees 118 4 10 1 Flexible Support Processor 0 000 c cee eee 119 4 10 2 Serial OVERrLAN 264 enis6 oo 29h weet ee ie se BE BSectoAy tesa 119 4 11 IBM ERGY SCale 03a cc idke tL eatiwtad tata eid ee wetad oes 119 4 11 1 IBM EnergyScale technology 000 eee eee 120 4 11 2 Power Capping and Power Saving options and capabilities 122 4 11 3 Energy consumption estimation 0 0000 eee eee 124 As 12 ANCHO CA anran cnc tone is ea Elon ma Nee eee GS 124 4 13 External USB device support 0 0 0 eee 125 4 13 1 Supported IBM USB devices 0 0 00 eee ee 125 4 13 2 Supported non IBM USB devices 000008s 127 4 14 Operating system Support 0 0 eee 127 4 15 Warranty and maintenance agreements 0000 eee ae 128 4 16 Software support and remote technical Support 128 Chapter 5 Planning wise en can caches oe esa Eee See eee et aake
154. 12 Ordering part number and feature code 1763 EN2024 4 port 1Gb Ethernet Adapter The IBM Flex System EN2024 4 port 1Gb Ethernet Adapter has the following features gt Dual Broadcom BCM5718 ASICs gt Connection to 1OOOBASE X environments by using Ethernet switches gt Compliance with US and international safety and emissions standards 106 IBM Flex System p270 Compute Node Planning and Implementation Guide gt Full duplex FDX capability enabling simultaneous transmission and reception of data on the Ethernet local area network LAN gt Preboot Execution Environment PXE support gt Wake on LAN support gt MSI and MSI X capabilities gt Receive Side Scaling RSS support gt NVRAM a programmable 4 MB flash module gt Host data transfer PCle Gen 2 one lane Figure 4 21 shows the IBM Flex System EN2024 4 port 1Gb Ethernet Adapter Figure 4 21 The EN2024 4 port 1Gb Ethernet Adapter for IBM Flex System For more information about this adapter see the IBM Redbooks Product Guide at this website http www redbooks ibm com abstracts tips0845 html 0pen Chapter 4 Product information and technology 107 4 9 6 IBM Flex System EN4054 4 port 10Gb Ethernet Adapter The IBM Flex System EN4054 4 port 10Gb Ethernet Adapter from Emulex enables the installation of four 10 Gb ports of high speed Ethernet into an IBM Power Systems compute node These ports connect to chassis switches or pass through modul
155. 129 5 1 Planning your system An overview 0 00 c eee eee 130 5 1 1 Hardware planning 0 cece ee eee eee 130 Contents V 5 12 SoftWare PIANMING ns baraenn od a a Base aad ee aad aha RR 5 2 Network connectivity 0 0 0 ce ee eee 5 2 1 Ethernet switch module connectivity 0000 e eae 52 2 Vinul EAN S gt 2 55 todas ent ook oe ae Bee Reed Bree ee ond eee ate 5 3 SAN COMMCCHVINY sen 2 wat ieee achat eo Rhee Oca tale edad aoa Wanna 5 4 Converged networking 0 00 cee eee ee eens 5 5 Configuring redundancy 000 c eee ee eens 5 5 1 Network redundancy 0 00 eee eee eee eee 5 5 2 SAN and Fibre Channel redundancy 200005 9 6 DUal VWIOSn sn chon bhai bhi Brat ie els wea eee UY deo Wee 5 6 1 Dual VIOS on Power Systems compute nodes Sf POWEF DIGNNING 3 220 oe Beatie sate t CRU Bee Been eee eek es 5 7 1 Power supply features 0 00 ees 5 7 2 PDU and UPS planning cstern ererat ira C4 ee Roe ee 5 7 3 Chassis power supplies 2 0 0c cc eee 5 7 4 Power limiting and capping policies 0000 eae 5 7 5 Chassis power requirements 0 0 00 eee eee ee es De GOONG yaaran aa a ode athe eae Mth Bong thine areeh Ba ake eet 5 8 1 Enterprise Chassis fan population 2 0000 e ee 5 8 2 Supported environment 2 0 00 cc eee 5 9 Planning for virtualization 1 0 0 cc eee 5 9 1 Virtual servers
156. 2 Editing and adding virtual Ethernet adapters for a VIOS
Complete the following steps:
1. Check the wanted adapter number and click Edit. The Modify Adapter window that is shown in Figure 8-13 on page 364 opens. In this window, you can edit the virtual adapter's attributes.
2. Enter or accept the following characteristics for the bridging virtual Ethernet adapter:
Accept the default Adapter ID of 2. This value can be changed if needed.
Set the Port Virtual Ethernet (PVID) option to 4091.
Select IEEE 802.1Q capable adapter to allow future dynamic adds of other VLANs.
Select Use this adapter for Ethernet bridging and set the Priority value. In a dual VIOS environment that intends to use one of the high availability modes, the corresponding adapters on each VIOS with the same Port Virtual Ethernet value must have a unique priority.
Click OK.
Figure 8-13 Modify Adapter window (Adapter ID, Port Virtual Ethernet 4091, VSI settings, IEEE 802.1Q option with maximum number of VLANs and additional VLAN IDs, and Ethernet bridging with Priority)
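After the VIOS is running, the trunk settings that are selected here can be double-checked from the VIOS command line. The following sketch is illustrative only: the adapter name ent4 is hypothetical, and the exact field names in the output can vary by VIOS level, so treat the shown lines as an example rather than authoritative output:

$ entstat -all ent4 | grep -E "Trunk|Priority|Port VLAN"
Trunk Adapter: True
  Priority: 1  Active: True
Port VLAN ID:  4091

A priority of 1 on this VIOS and a priority of 2 on the corresponding trunk adapter of the second VIOS keeps the bridged VLAN loop-free while still allowing failover between the two Virtual I/O Servers.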
157. 2 or 4
a. Available in IBM Flex System only; not supported in PureFlex System configurations.
The default chassis configuration ships with two 2500 W supplies, but it is possible to specify installation of two 2100 W supplies if required. See Table 3-4 on page 65 for information about how to meet your power requirements.
As shown in Table 3-3, the 2100 W power supplies are rated at 2100 W output at 200 - 240 VAC, with oversubscription to 2895 W for a short duration. The 2100 W supplies have two independently powered dual 40 mm cooling fans, included within the power supply assembly, that draw power from the midplane. The 2500 W power supplies are rated at 2500 W output at 200 VAC, with oversubscription to 3538 W output at 200 VAC.
Both power supply types have a C20 socket that is provided for connection to a power cable, such as a C19-C20. They also have two independently powered 40 mm cooling fans that are integrated into the power supply assembly, which draw power from the midplane.
Table 3-3 Power supplies comparison
Power supply    Operating voltages    Oversubscription
2500 W          200 - 240 V           3538 W
2100 W          200 - 240 V           2895 W
Table 3-4 shows the maximum number of configurable Power compute nodes for the power supplies that are installed in the chassis. The following color codes are used in the table:
> Green: No restriction t
158. The Physical I/O Adapters page of the Create Virtual Server wizard prompts you to select one or more physical adapters from the list of available physical adapters. Note: virtual servers that are assigned physical adapters cannot be relocated. The list shows the location code and description of each adapter (in this example, a PCI-E SAS controller, a PCI-to-PCI bridge, and two Ethernet controller ports), and a Display only adapters that are currently available option controls which adapters are listed.
Figure 8-20 Physical adapter selections on VIOS virtual server
The default view of Physical I/O Adapters is to show only available adapters, or adapters that are not assigned to another virtual server. This view can be altered by clearing the Display only adapters that are currently available option.
2. Click Next to proceed to the Summary window.
Virtual server summary
The definitions and options that are selected in the wizard can be reviewed on one page, as shown in Figure 8-21. The Summary page states: The following is a summary of your virtual server settings. You can select Back to make changes. You can also use the virtual server properties task to make changes after the virtual server is created.
159. 255.255.0 {
  host rh6l-vs1 {
    fixed-address 192.168.20.12;
    hardware ethernet XX:XX:XX:XX:XX:XX;
    next-server 192.168.20.11;
    filename "yaboot-rh6";
  }
}
Figure 9-60 The dhcpd.conf file for Red Hat Enterprise Linux 6
9.7 Cloning methods
Two cloning methods are available for an AIX installation. The most common method of cloning is to create a mksysb image on one machine and restore it in the cloned machine. This method clones all of your operating system (rootvg), but no non-rootvg volume groups or file systems. This method is a fast way of cloning your AIX installation, and it can be performed by using tape devices, DVD media, or a NIM installation. Ensure that the IP address is not cloned in this process. If you are using NIM to restore the mksysb, the IP address that is given to the client during the network boot overrides the IP address on the interface that is used by NIM.
It is also important to determine whether all device drivers that are needed to support the hardware on the target system are in the mksysb. This task can be accomplished by installing the necessary device drivers in the image before the mksysb is created or, when you are using NIM to restore the mksysb, by ensuring that an lpp_source is specified that contains the needed drivers.
You can also use the ALT_DISK_INSTALL method, but this method works only if you have SAN disks attached or r
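As a hedged illustration of the mksysb-based cloning that is described above (the file system path and the NIM object names are hypothetical and must match the resources defined in your own NIM environment), the backup can be created on the source system and then restored through NIM:

# On the source AIX system: regenerate /image.data and write the rootvg backup to a file
mksysb -i /backup/source_rootvg.mksysb

# On the NIM master: register the backup as a mksysb resource, then start a BOS installation
nim -o define -t mksysb -a server=master -a location=/backup/source_rootvg.mksysb source_mksysb
nim -o bos_inst -a source=mksysb -a mksysb=source_mksysb -a spot=spot_71 -a lpp_source=lpp_71 target_client

The lpp_source that is allocated in the last command is what supplies any device drivers that are missing from the original image, as noted above.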
160. 3 10 24 22 PM EDT Display Properties Close Message k Welcome Flex System Manager Version Power Systems Resources E l Hosts H semer rosa 29x hiner mene Summary Search the table Search Virtual Servers Select Name Access State Referenc Le Operating Systems j E Z oS 7 EH i Server 7954 244 SN1O77S OK Power Units Figure 7 55 Inventory job start notification Clicking Display Properties opens the window that is shown in Figure 7 56 The job properties window has several tabs that can be used to review other job details The General tab that is shown indicates that the inventory collection job completed without errors Active and Scheduled Jobs Active and Scheduled Jobs Properties Mame Collect Inventory August 6 2013 10 24 22 PM EDT General Targets History Logs Status Complete Progress 100 Last Run Status Complete view log Description Run once on 8 6 13 at 10 24 PM Next Run Last Run amp gria at 10 24 PM Task Collect Inventory Created By USERIC Edit Figure 7 56 Inventory job status The Active and Scheduled Jobs tab and the View and Collect Inventory tabs near the top of the window can be closed IBM Flex System p270 Compute Node Planning and Implementation Guide With access and inventory collection complete the FSM can manage the compute node Opening a virtual terminal console with the FSM GUI One virtual terminal console for each virtual se
161. 34 IBM Flex System p270 Compute Node Planning and Implementation Guide Features and technologies Function provided by Hypervisor VIOS Integrated Virtualization Manager IVM Host Ethernet Adapter HEA Hypervisor a Some other documents might call it as N_Port ID Virtualization NPIV b Supported only by mid tier and large tier POWER7 Systems or later including Power 770 780 and 795 c HEA is a hardware based Ethernet virtualization technology that is used in IBM POWER6 and early POWER processor based servers Future hardware based virtualization technologies will be based on Single Root I O Virtualization SR IOV For this reason we do not describe HEA configuration in this publication The technologies in Table 8 2 also are frequently mentioned with PowerVM Table 8 2 Complementary technologies Features and technologies Function provided by POWER processor compatibility modes Simultaneous Multithreading Hardware AIX Active Memory Expansion Hardware AIX Chapter 8 Virtualization 9335 Features and technologies Function provided by a Only available on POWER7 Systems and later b Only available on AIX version 6 1 or later 8 2 1 PowerVM editions This section provides information about the virtualization capabilities of PowerVM The are three versions of PowerVM which are suited for the following purposes gt PowerVM Express Edition PowerVM Express Edition is designed for customers looking f
162. 4 Enter 5 to select option Select Boot Options as shown in Figure 12 4 PowerPC Firmware Version AF 73 021 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Main Menu Select Language Setup Remote IPL Initial Program Load Change SCSI Settings Select Console Select Boot Options Type menu item number and press Enter or select Navigation key Open Completed Figure 12 4 SMS menu Chapter 12 Installing Linux 557 5 Enter 1 to select option Select Install Boot Device as shown in Figure 12 5 Version AF773 021 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Multiboot 1 Select Install Boot Device 2 Configure Boot Device Order 3 Multiboot Startup lt OFF gt 4 SAN Zoning Support 5 Management Module Boot List Synchronization Navigation keys M return to Main Menu ESC key return to previous screen X eXit System Management Services Type menu item number and press Enter or select Navigation key 1 Figure 12 5 Install Boot Device 6 Enter 3 to select option CD DVD as shown in Figure 12 6 Version AF 73 021 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Select Device Type Diskette Tape CD DVD IDE Hard Drive Network List all Devices Navigation keys M return to Main Menu ESC key return to previous screen Type menu item number and press Enter or select Navigation key 3 Figure 12 6 CD DVD 558 IBM Flex System p270 Compute Node Planning an
163. The Summary panel lists the settings for the new IBM i virtual server, including its environment (IBM i), memory (22.0 GB), dedicated processors, and the physical volumes (such as hdisk9 and I_Base_01_RSG_R710 from the ITSO VIOS) and optical resources that are assigned to it.
Figure 11-11 Creating a Virtual Server: Summary panel
The virtual server is now created and should be visible in the Virtual Servers panel in the Manage Power Systems Resources tab.
Tip: Return to the profile and verify that the memory and processor values are what you require. The defaults tend to be set high.
As the IBM i virtual server is created by using the FSM, any created adapters were created dynamically on the VIO server. You must ensure that the adapters are also added to the profile of the VIO Server or servers. A simple method to do this is to right-click the VIOS virtual server and select System Configuration > Save current configuration. This saves all dynamically assigned adapters to the profile to which you select to save the active profile. The partition can now be activated and is ready for operating system installation.
11.3 Configuring an IBM i console connection
IBM i requires a 5250 emulator client to be used as the console for the operating system. IBM System i Access has an emulator option that can be used, or you can use IBM Personal Communications. A trial version of Personal Communications is available at this website:
http://ibm.com/software/products/us/en/pcomm
After you
164. 512 1 Open an SSH session to the FSM and log in with a valid user ID and password At the command prompt use the vtmenu command 290 IBM Flex System p270 Compute Node Planning and Implementation Guide 2 The vtmenu initially shows all the Power compute nodes under management control of the FSM as shown in Figure 7 115 1 Server 7954 24X SN107782B 2 Server 7954 24X SN1077E3B Enter Number of Managed System q to quit 2 Figure 7 115 Vtmenu initial window 3 Choose the Managed System server 7954 24X SN107782B as shown in Figure 7 115 4 A list of partitions that are running on the compute node is displayed as shown in Figure 7 116 Choose the partition for example for itsoAIX1 choose 1 Partitions On Managed System Server 7954 24X SN1077E3B 0S 400 Partitions not listed 1 itsoAIX1 Open Firmware 2 itsoVIOS6A Running Enter Number of Running Partition q to quit 1 Figure 7 116 Vtmenu Partitions 5 When the partition is chosen the virtual terminal session starts The Enter key might need to be pressed to update the sessions and display the current output 6 To exit the virtual terminal session press tilde then a period to return to the partition selection menu Updating system firmware The HMC updates system firmware on a Power compute node through communication with the FSP The updates can be retrieved from the IBM service website by the HMC removable media such as a DVD or USB fla
165. 77 IBM i virtual server settings for virtual Ethernet
Important: These steps are critical because the IBM i virtual server must be defined to use only virtual resources through a VIOS. At the least, a virtual Ethernet and a virtual SCSI adapter must be defined in the IBM i virtual server. The virtual SCSI adapter is also used to virtualize optical devices. Optionally, a virtual Fibre Channel drive can be used for disk or tape media library access.
5. In the Virtual Storage definitions window, indicate that you do not want automatic virtual storage definition; configure the adapters manually, as shown in Figure 8-78. Click Next. On the Storage page of the Create Virtual Server wizard, select No, I want to manage the virtual storage adapters for this Virtual Server (rather than Yes, Automatically manage the virtual storage adapters for this Virtual Server).
Figure 8-78 I
166. 8 the left side navigation options are used to directly access the following components:
> Hosts (servers)
> Virtual servers (LPARs)
> Operating Systems (separate discovery process)
> Power Units (not used)
Figure 7-38 Power Systems Resources navigation
Selecting these navigation options displays objects in a table inside the content area. Each object has informational and operational options available by a left-click or right-click. We introduce each of these in the following subsections.
Hosts
All known servers in all chassis that are managed by an FSM are listed under the Hosts option. Clicking Hosts displays the physical hosts, or servers, in the content area on the right side of the window, as shown in Figure 7-39; in this example, Server 7954-24X*SN107782B is listed with its access state and a state of Started.
Figure 7-39 Host list in content area
167. 8-31 on page 381. The Memory Settings page of the HMC Create LPAR wizard shows the physical memory that is installed in the system (32768 MB), the memory that is currently available for partition usage (31360 MB), and fields for the desired and maximum memory values.
Figure 8-31 HMC Memory Settings window
Assigning physical I/O resources
In this section, we describe the process that is used to assign physical I/O resources to the LPAR in the I/O window, as shown in Figure 8-32 on page 382. Any virtual server can be assigned installed physical I/O adapters from one of the following sources on the p270:
> Expansion cards
> Integrated SAS storage controller
> SAS storage controller (also known as the dual VIOS adapter)
> Integrated PCI-to-PCI bridge
> USB port
Complete the following steps to assign the physical I/O resources:
1. Assign the desired physical I/O resources by selecting one of the following resources:
Required: Represents the I/O resource that is required to make the partition active. A Required I/O resource cannot be dynamically (DLPAR) removed from the partition.
Desired: If, during the partition startup, the desired I/O resource is not assigned to any other running partitions, it is assigned to that partit
168. 8 IBM Flex System p270 Compute Node Planning and Implementation Guide 3 Enter the company name and contact details for configuring the ESA as shown in Figure 7 172 Then press Enter to confirm the configuration Configuring Electronic Service Agent Type or select values in entry fields Press Enter AFTER making all desired changes TOP Entry Fields Company name IBM ITSO Service contact Name of the contact person Ben Author Telephone number of the contact person 5555551234 Email address myuserid mycompany com bauthor ibmisto com Country or region of contact person UNITED STATES IBM ID System location Telephone number where the system is located 5555555678 Country or region where the system is located UNITED STATES Street address where the system is located 123 Redbooks Drive MORE 7 F1 Help F2 Refresh F3 Cancel F4 List F5 Reset F6 Command F7 Edit F8 Image F9 Shel FIO Exit Enter Do Figure 7 172 ESA contact configuration Chapter 7 Power node management 329 4 The initial configuration process adds and starts ESA as shown in Figure 7 173 on page 330 In this particular example the outbound connectivity test to IBM Service failed Internal firewalls in the ITSO facility prevent outbound communications However the starting of the ESA is not dependent on the connectivity test COMMAND STATUS Command OK Stdout yes stderr no Before command completion additional i
169. 8 L1 T1 None r Figure 8 68 IVM Create Partition Ethernet window Chapter 8 Virtualization 417 7 The Storage Type window that is shown in Figure 8 69 allows for the creation of a virtual disk assignment of an existing virtual disk logical volume or physical volume SAN LUN or physical drive or to not make any assignment Complete the following steps in the Storage Type window a Select a storage type In our example Assign existing virtual disks and physical volumes was selected The Create virtual disk option branches the wizard to a series of windows that guide the creation of a virtual disk b Click Next to open the Storage window Create Partition Storage Type Storage Type You may create a new virtual disk or assign existing virtual disks and physical volumes which are not curre assigned to a partition You will be able to assign optical devices such as a CD ROM regardless of which chg make pevereeres Figure 8 69 IVM Create Partition Storage Type window 418 IBM Flex System p270 Compute Node Planning and Implementation Guide 8 As shown in Figure 8 70 the Storage window that is shown lists all of the available virtual disks and physical volumes SAN LUNs and physical drives Complete the following steps on the Storage window a Select an available storage volume In our example the virtual disk lpar2rootvg was selected b Click Next to open the Optical Tape window
170. Figure 5-6 Example power cabling: 32 A at 380-415 V three-phase (international). The maximum number of Enterprise Chassis that can be installed in a 42U rack is four, so this configuration requires a total of four 32 A 3-phase wye feeds into the rack to provide a fully redundant N+N configuration. Power cabling for 60 A at 208 V 3-phase (North America): In North America, this configuration requires four 60 A 3-phase delta supplies at 200-208 VAC; an optimized 3-phase configuration is shown in Figure 5-7. Figure 5-7 Example power cabling: 60 A at 208 V 3-phase configuration (IEC320 16 A C19/C20 3 m power cables to a 1U 9xC19/3xC13 Switched and Monitored DPI PDU, 46M4003, which includes a fixed IEC60309 3P+G 60 A line cord). 5.7.3 Chassis power supplies: For more information about chassis power supply options and features, see 3.5, "Power supplies" on page 63. The number of power supplies that are required depends on the number of nodes that are installed within a chassis and the level of redundancy that is required. As more nodes are installed, the power supplies are installed starting at the bottom of the chassis. A maximum of six power supplies can be installed in the IBM Flex System Enterprise Chassis. The power supplies are 80 PLUS Platinum certified and are 2500 W output, which is rated at 200 VAC.
171. 19. Select the IP protocol version (IPv4 or IPv6), as shown in Figure 9-17. For our example, we select IPv4. Figure 9-17 Internet Protocol version selection (the SMS menu, firmware version AF773_033, shows the IPv4 and IPv6 address formats). 20. Select option 1, BOOTP, as the network service to use for the installation, as shown in Figure 9-18. Figure 9-18 Select a network service (options 1. BOOTP and 2. ISCSI). 21. Set up your IP address and the IP address of the NIM server for the installation. To do so, select option 1, IP Parameters, as shown in Figure 9-19. Figure 9-19 Network Parameters menu for the Interpartition Logical LAN (U7954.24X.1077E3B-V5-C4-T1), with options for IP Parameters, Adapter Configuration, Ping Test, and Advanced Setup: BOOTP.
172. AIX, which is available at this website: http://publib.boulder.ibm.com/infocenter/aix/v7r1/topic/com.ibm.aix.doc/doc/base/eicbd_aix.pdf Chapter 8. Virtualization. If you create virtual servers, also known as logical partitions (LPARs), on your Power Systems compute node, you can consolidate your workload to deliver cost savings and improve infrastructure responsiveness. As you look for ways to maximize the return on your IT infrastructure investments, consolidating workloads and increasing server use becomes an attractive proposition. The chapter includes the following topics: 8.1, "Introduction" on page 334; 8.2, "PowerVM" on page 334; 8.3, "POWER Hypervisor" on page 340; 8.4, "Planning for a virtual server environment" on page 346; 8.5, "Creating a VIOS virtual server" on page 349; 8.6, "Creating an AIX or Linux virtual server" on page 413; 8.7, "Creating an IBM i virtual server" on page 422; and 8.8, "Creating a full system partition" on page 430. Copyright IBM Corp. 2013. All rights reserved. 8.1 Introduction: IBM Power Systems, combined with PowerVM technology, are designed to help you consolidate and simplify your IT environment and include the following key capabilities: improve server use by consolidating diverse sets of applications; share processor, memory, and I/O resources to reduce the total co
173. RAM. Components: 2,100,000,000 components (transistors), which offers the equivalent function of 2,700,000,000 (for more information, see "On-chip L3 intelligent cache" on page 90). Max execution threads per core / per chip: 4 / 32. L2 cache per core / per chip: 256 KB / 2 MB. On-chip L3 cache per core / per chip: 10 MB / 80 MB. Compatibility: compatible with prior generations of the POWER processor. POWER7 processor core: Each POWER7 processor core implements aggressive out-of-order (OoO) instruction execution to drive high efficiency in the use of available execution paths. The POWER7 processor has an Instruction Sequence Unit that can dispatch up to six instructions per cycle to a set of queues. Up to eight instructions per cycle can be issued to the instruction execution units. The POWER7 has the following set of 12 execution units: two fixed-point units, two load/store units, four double-precision floating-point units, one vector unit, one branch unit, one condition register unit, and one decimal floating-point unit. The following caches are tightly coupled to each POWER7 processor core: instruction cache, 32 KB; data cache, 32 KB; L2 cache, 256 KB, which is implemented in fast SRAM; and L3 cache, 10 MB eDRAM. Simultaneous multithreading: The POWER7 processor supports Simultaneous Multi-Threading (SMT) mode four, known as SMT4, which enables up
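Where AIX is already running on the node, the active SMT mode can be checked and changed from the operating system. A minimal sketch, assuming a standard AIX 7.1 root shell (output format varies by level); smtctl is the standard AIX command for this:
    # Display the SMT capability of the processors and the number of active threads per core
    smtctl
    # Switch the partition to 4 threads per core (takes effect immediately)
    smtctl -t 4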
174. Figure 11-34 Licensed Program Software Agreement window (function keys: F3=Exit, F6=Print, F12=Cancel, F13=Select available language, F14=Accept, F16=Decline, F17=Top, F18=Bottom; the status line reports "Started processing 588 objects, completed 500 objects"). 9. Perform one of the following tasks: read the agreement and press F14 to accept it and allow the licensed program to continue installing, or read the agreement and press F16 to decline it and end the installation of that licensed program. Note: It is vital for normal operating system functionality to read and accept the agreements for the default preselected licensed programs. 10. You are returned to the Work with Licensed Programs menu when the installation process is completed. One of the following messages appears at the bottom of the Work with Licensed Programs display. "Work with licensed programs function has completed": this message means that all licensed programs installed successfully. "Work with licensed programs function not complete": this message means that an agreement was not accepted or there was an installation issue; troubleshoot the issue by following the instructions that are described next for LICPGMs that are not COMPATIBLE or INSTALLED. After the installation process completes, use LICPGM menu option 10, Display licensed programs, to see the release and installed status values of the installed licensed programs.
175. Figure 7-27 Selecting the CMM Service and Support Settings option. 2. Read and acknowledge any licensing information that is presented to continue. 3. Complete the mandatory contact information, as shown in Figure 7-28. Figure 7-28 Required information to enable the CMM phone home capability (the Service and Support page prompts for the country of the IBM Service Support Center and for contact details: company name, contact name, phone and extension, email, and address; a note reminds you that valid DNS settings are required for the CMM to call home to IBM Support successfully).
176. After you exit to normal boot, a window opens that shows the network parameters for BOOTP, as shown in Figure 9-25 on page 462. f. The VIOS installation windows are presented after the BOOTP process. The selection options are identical to the AIX installation that is described in 10.2, "Installing AIX" on page 491. Install by using optical media: This method is supported by FSM, HMC, and IVM. Complete the following steps to install VIOS: a. Follow the setup procedure that is described in 9.5, "Optical media installation" on page 462. b. When the VIOS installation windows are presented, the selection options are the same as for the NIM installation of the VIOS. 10.2 Installing AIX. The following methods are available to install AIX on your Power Systems compute node: NIM installation with an lpp_source; NIM installation with a mksysb image; optical media installation; and VIOS media library installation, which uses a virtual optical device with the AIX installation media ISO images as a backing device. To install AIX by using the NIM lpp_source or mksysb method, complete the following steps: 1. The first part of the process, setting up the environment for installation, is described in 9.4, "Network Installation Management method" on page 446. A machine resource is created with the AIX name, IP address, and so on. Installatio
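As a sketch of what the NIM master runs once the machine, lpp_source, and SPOT resources exist, a base (rte) installation can also be initiated from the NIM master command line. The resource and client names here are placeholders, not the ones used in this book:
    # Allocate resources and initiate a base (rte) install for client p270aix01
    nim -o bos_inst -a source=rte \
        -a lpp_source=aix71_lpp -a spot=aix71_spot \
        -a accept_licenses=yes -a no_client_boot=yes p270aix01
The no_client_boot attribute leaves the client powered off so that it can be network-booted manually from SMS, as described earlier in this chapter.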
177. B. Specify the location of the LIC repository: the IBM service web site, removable media, an FTP site, or the hard drive. Figure 7-120 Choosing a LIC repository. 5. The FTP option requires specifying a directory on the FTP server. Click Change Directory, as shown in Figure 7-121. Figure 7-121 FTP server information (the FTP Site Access Information window for server 7954-24X SN107782B prompts for the FTP site address, user ID, and password, and shows the current directory, /opt/ccfw/data). 6. The Change FTP Directory window is shown in Figure 7-122. Enter the full path on the FTP server to the system firmware update, then click OK. Figure 7-122 Specifying the FTP directory (in this example, /tmp/fw; click Default Location to use the default management console hard drive location). 7. The previous operation returns you to the FTP Site Access Information window with the updated path information, as shown in Figure 7-123. Enter the FTP site IP address, user ID, and password information, then click OK.
178. IBM i virtual server manual virtual storage definition. 6. Because no virtual storage adapters exist, the Create Adapter option is displayed in the main Virtual Storage window, as shown in Figure 8-79. Any virtual storage adapters that were already created are shown. Click Create Adapter. Figure 8-79 IBM i virtual server: create virtual storage adapter (the Virtual Storage Adapters page of the Create Virtual Server wizard shows the maximum number of virtual adapters, reports that no adapters are configured, and notes that storage adapter configuration can be handled automatically if VIOS servers with an active RMC connection are available). 7. In the Create Virtual Adapter window, complete the fields as shown in Figure 8-80: choose an adapter ID, specify SCSI Client for the adapter type, and specify a virtual SCSI adapter on the VIOS as the connecting virtual server. In this example, adapter ID 13 and adapter type SCSI Client are used.
179. BM product program or service is not intended to state or imply that only that IBM product program or service may be used Any functionally equivalent product program or service that does not infringe any IBM intellectual property right may be used instead However it is the user s responsibility to evaluate and verify the operation of any non IBM product program or service IBM may have patents or pending patent applications covering subject matter described in this document The furnishing of this document does not grant you any license to these patents You can send license inquiries in writing to IBM Director of Licensing IBM Corporation North Castle Drive Armonk NY 10504 1785 U S A The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND EITHER EXPRESS OR IMPLIED INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF NON INFRINGEMENT MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE Some states do not allow disclaimer of express or implied warranties in certain transactions therefore this statement may not apply to you This information could include technical inaccuracies or typographical errors Changes are periodically made to the information herein these changes will be incorporated in new editions of the publication IBM may make
180. C. Perform and manage updates on your system, and view details of status and messages. The welcome page also provides a step-by-step process to configure your HMC; online versions of the Installing and Configuring the HMC, Managing the HMC, and Servicing the HMC guides for system administrators and system operators; hints and errata information about the HMC; and additional related online information. Figure 7-89 HMC workplace window. As shown in Figure 7-89, the HMC workplace window features the following components: 1. Banner: The banner across the top of the workplace window identifies the product and logo. It is optionally displayed and is set by using the Change User Interface Setting task. 2. Taskbar: The taskbar is below the banner. It displays the names of any tasks that are running, the user ID that you are logged in as, online help information, and the ability to log off or disconnect from the console. The taskbar provides the capability of an active task switcher; you can move between tasks that were started and are not yet closed. However, the task switcher does not pause or resume existing tasks. For example, when you run three tasks on the HMC, you can see the task names in the taskbar and click them to switch between tasks.
181. Figure 2-1 shows the connections, including the Fibre Channel and Ethernet data networks and the management network, that are presented to the access points within the PureFlex rack. The green box signifies the chassis and its components, with the inter-switch link between the two switches. Because this is an Express solution, it is an entry configuration. Figure 2-1 PureFlex Express with FCoE and external V7000 Storwize (the diagram legend distinguishes chassis elements from rack-mounted elements and shows 1 GbE management, 10 GbE and 40 GbE data, and 8 Gb FC connections across the chassis midplane). 2.4.2 Chassis: The IBM Flex System Enterprise Chassis contains all the components of the PureFlex Express configuration except for the IBM Storwize V7000 and any expansion enclosure. The chassis is installed in a 25U or 42U rack. The compute nodes, storage nodes, switch modules, and IBM FSM are installed in the chassis. When the V7000 Storage Node is chosen as the storage type, a no-rack option is also available. Table 2-5 lists the major components of the Enterprise Chassis, including the switches and options. Feature codes: The tables in this section do not list all featu
182. D enables you to access the service information transmitted to IBM by Electronic Service Agent If you do not have an IBM ID you can obtain one at http vwew ibm com registration Secure access to the Pi System service information is available wia the IBM Electronic Services Web site http vwow ibm com support electronic location Connection at any time regardless of your system status The My Systems link provides several functions aimed at saving you time and helping you solve problems more quickly Authorize o gt IBM IDs If you choose not to enter your IBM ID now you can enter it later using the Service and Support Manager Summary page Secondary IBM ID Figure 7 80 Getting started with ESA wizard Authorized IDs window The Authorized IDs page provides for a primary and secondary IBM ID to be listed and associated with the service information that is transmitted to IBM These IDs are optional and the wizard can continue without any values being entered Chapter 7 Power node management 261 7 Click Next to continue to the Summary page as shown in Figure 7 81 vf Welcome Uana ry TS The following settings will be established when you click Finish company contact Your company contact System pany i v location Contact name ai Company name y Connection Telephone number r Authorize IBM IDs Extension Fax number gt Summary Alternate fax number E mail Alternate e mail Help desk number E
183. (The General tab reports that the partition is DLPAR capable for memory, processing, and I/O adapters.) Figure 8-57 IVM Partition Properties: General tab, Physical Adapters. 9. Click the Memory tab, as shown in Figure 8-58, to change the minimum, assigned, and maximum memory values as wanted. Changes that lower the existing minimum or increase the maximum value require a restart of the node to synchronize. Figure 8-58 IVM Partition Properties: Memory tab (the partition uses dedicated memory; all memory values must be multiples of 64 MB; in this example, the current minimum is 1 GB, the assigned memory is 2 GB, and the maximum is 2 GB). 10. Click the Processing tab, as shown in Figure 8-59, to change the values for processing units (also known as entitlement), virtual processors, capping, and processor compatibility mode. As with memory, changes that lower the existing minimum or increase the maximum values require a restart of the node to synchronize. (The Processing tab of Partition Properties for itsoVIOS6A shows the General, Memory, Processing, Ethernet, and Physical Adapters tabs; settings are modified by changing the pending values.)
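The same limits can also be adjusted from the VIOS restricted shell by using the HMC-compatible commands that IVM supports. A hedged sketch only; the partition name is a placeholder, and the exact attribute list should be verified against the VIOS command reference for your level:
    # Show the current memory settings for the partition (values are in MB)
    lssyscfg -r prof --filter "lpar_names=itsolpar2" -F min_mem,desired_mem,max_mem
    # Change the desired memory to 4 GB
    chsyscfg -r prof -i "lpar_name=itsolpar2,desired_mem=4096"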
184. French Business Partners in Montpellier France where he gives presentations that are focused on building configurations with the e config configurator tool Thanks to the following people for their contributions to this project From IBM Development Debbie Anglin Roger Bullard Doug Evans Kaena Freitas David Drez Erich Hauptli Walter Lipp Jose Morales Hoa Nguyen Rob Ord Raymond Perry Mike Stys Lee Webber Kris Whitney Vvvvvvvrvrvrvrvvvvsiyv From IBM Marketing gt John Biebelhausen gt Richard Mancini gt Tim Martin gt Randi Wood From IBM Redbooks gt Tamikia Barrow gt Deana Coble gt Shari Deiana gt Ilya Krutov Preface XV IBMers from around the world gt Dave Ridley Fabiano Matassa Ricardo Marin Matinata Matt Slavin vy y Now you can become a published author too Here s an opportunity to spotlight your skills grow your career and become a published author all at the same time Join an ITSO residency project and help write a book in your area of expertise while honing your experience by using leading edge technologies Your efforts help to increase product acceptance and customer satisfaction as you expand your network of technical contacts and relationships Residencies run from two to six weeks in length and you can participate either in person or as a remote resident working from your home base Find out more about the residency program browse the residency index
185. HDD IBM Open Fabric Manager Optional FSM advanced adds VM Control Enterprise license YYYY Y Y 2 5 6 PureFlex Enterprise storage options Any PureFlex Enterprise configuration requires a SAN attached storage system The following storage options are available are the integrated storage node or the external Storwize unit gt IBM Storwize V7000 gt IBM Flex System V7000 Storage Node Chapter 2 IBM PureFlex System 41 The required numbers of drives depends on drive size and compute node type All storage is configured with RAID5 with a single Hot Spare that is included in the total number of drives The following configurations are available gt Power based nodes only 16 x 300 GB or 8 x 600 GB drives gt Hybrid both Power and x86 16 x 300 GB or 8 x 600 GB drives gt x86 based nodes only including SmartCloud Entry 8 x 300 GB or 8x 600 GB drives gt Hybrid both Power and x86 with SmartCloud Entry 16x 300 GB or 600 GB drives SSDs are optional however if they are added to the configuration they are normally used for the V7000 Easy Tier function to improve system performance IBM Storwize V7000 The IBM Storwize V7000 is one of the two storage options that is available in a PureFlex Enterprise configuration This option can be rack mounted in the same rack as the Enterprise chassis Other expansion units can be added in the same rack or a second rack depending on the quantity ordered The IBM Storwize V7000 c
186. (The firmware boot screen fills with repeated IBM logo text while the network boot proceeds, then reports the TFTP BOOT parameters: server IP 192.168.20.11, client IP 192.168.20.12, subnet mask 255.255.255.0, filename yaboot.ibm, TFTP retries 5, block size 512, final packet count 407, final file size 208348 bytes.) Figure 9-58 Netbooting the boot loader. For more information about the installation, see 12.3, "Installing SUSE Linux Enterprise Server" on page 592. 9.6.2 Red Hat Enterprise Linux 6: For Red Hat Enterprise Linux 6, we follow a procedure similar to the one that is described in "SUSE Linux Enterprise Server 11" on page 479. The following description shows the differences between the two procedures. Complete the following steps: 1. Obtain the ISO file of Red Hat Enterprise Linux 6 and copy it to an accessible directory of the inst
187. (The Storage panel lists the hdisk devices that are available from the VIOS, together with their physical location codes.) Figure 11-8 Creating a Virtual Server: Storage, selecting physical disks panel. As shown in Figure 11-9, in the Optical devices panel, if you plan to use an external optical device or the VIOS virtual media library, select the applicable device. Multiple ISO files can be selected for sequential access. In our example, we are selecting the base ISO for V7R1 TR6, which is the minimum supported V7R1 Technology Refresh for the p270. Click Next to continue. (The Optical devices panel assigns physical and virtual optical devices for this virtual server; no physical optical devices are currently configured, and the Virtual Optical Media list shows the media that is available from the VIOS shared storage.)
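The media that the wizard offers here comes from the VIOS virtual media repository. As a rough illustration (assuming you are logged in to the VIOS as padmin; both commands are standard VIOS CLI), you can confirm what is loaded before running the wizard:
    # List the ISO images that are stored in the VIOS virtual media repository
    lsrep
    # List the virtual optical devices and the media currently loaded in them
    lsvopt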
188. If an IBM PureFlex System configuration was ordered, the existing VIOS configuration can be edited as needed instead of installing a new one. Physical adapters: For the VIOS partitions, planning for physical adapter allocation is important because the VIOS provides virtualized access through the physical adapters to network or disk resources. For network adapters, a link aggregation or EtherChannel is a common method to improve availability and increase bandwidth. For storage adapters, a multipathing package (for example, an MPIO PCM or EMC PowerPath) is installed and configured in the VIOS after the operating system is installed. To further enhance availability in a virtualized configuration, implement two VIOS servers, both capable of providing the same network and storage access to the virtual servers on the Power Systems compute node. Identifying the I/O resource in the system manager configuration wizard or CLI commands is necessary for assigning the correct physical resources to the intended virtual servers. Figure 8-3 shows the physical location codes on a p270. The location codes that are shown in the configuration menus contain a prefix, as shown in the following example: Utttt.mmm.ssssss-Px-Cyy, where tttt = machine type, mmm = model, ssssss = 7-digit serial number, Px = planar number, and Cyy = physical slot number. For example, an EN4054 4-port 10Gb Ethernet Adapter in a p270 is r
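As a hedged illustration of the network side of this planning (the adapter names ent0, ent1, ent4, and ent5 and the 802.3ad mode are assumptions for this sketch, and the exact mkvdev syntax should be verified against the VIOS command reference for your level), a link aggregation is typically built from the padmin shell and then bridged to the virtual network:
    # Aggregate two physical ports into a single EtherChannel device (creates, for example, ent5)
    mkvdev -lnagg ent0,ent1 -attr mode=8023ad
    # Bridge the aggregated device to the virtual Ethernet as a Shared Ethernet Adapter
    mkvdev -sea ent5 -vadapter ent4 -default ent4 -defaultid 1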
189. Install Resources, as shown in Figure 9-9. A list of available machines opens. Figure 9-9 Select Allocate Network Install Resources (the SMIT Manage Network Install Resource Allocation menu offers List Allocated, Allocate, and Deallocate Network Install Resources). 7. Choose the machine that you want to install; in this example, we use 7954AIXtest. A list of the available resources to assign to that machine opens, as shown in Figure 9-10. Figure 9-10 Machine selection for resource allocation (the selection list shows the NIM master plus the standalone machine objects that are defined on this NIM master, including 7954nimtest and 7954AIXtest). 8. Assign the lpp_source and spot resources. Press F7 to make multiple selections.
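The same allocation can be done without SMIT, directly from the NIM master command line. A small sketch, using the client 7954AIXtest from this example and placeholder resource names (aix71_lpp and aix71_spot) for your own lpp_source and SPOT:
    # Allocate the lpp_source and SPOT resources to the client machine
    nim -o allocate -a lpp_source=aix71_lpp -a spot=aix71_spot 7954AIXtest
    # Confirm what is now allocated to the client
    lsnim -l 7954AIXtest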
190. A Java-based virtual console that is started from the GUI, or the mkvt command from a command-line session with the VIOS, can be used for SMS access for other AIX or Linux partitions. Table 9-2 lists the different possibilities and the page reference in this book. Table 9-2 Starting virtual terminals (manager option and reference): FSM CLI (vtmenu), "Opening a virtual terminal console session with the FSM CLI" on page 246; FSM GUI, "Opening a virtual terminal console with the FSM GUI" on page 243; HMC CLI (vtmenu), "Opening a virtual terminal console session with the HMC CLI" on page 290; HMC GUI, "Opening a virtual terminal console session with the HMC GUI" on page 288; IVM VIOS CLI (mkvt), "Opening a virtual terminal by using the VIOS command line" on page 315; IVM GUI, "Opening a virtual terminal with the IVM user interface" on page 313; CMM CLI (SOL), "Opening a SOL terminal for the VIOS LPAR" on page 311. It might be preferable to start the virtual terminal session before a virtual server or partition is activated, because the window does not refresh information that was already written to the terminal output; however, pressing Esc often generates new window output. Figure 9-1 on page 440 shows a typical SMS main menu window, which is the same regardless of the virtual terminal access method that is used. (The SMS banner reads: Version AF773_033, SMS 1.7, (c) Copyright IBM Corp. 2000, 2008. All rig
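For the IVM/VIOS CLI row, a minimal hedged example follows (partition ID 2 is a placeholder; mkvt and rmvt are standard VIOS commands, and the tilde-period sequence is the usual escape to leave the console):
    # Open a virtual terminal to the partition with ID 2
    mkvt -id 2
    # ... interact with the console; type "~." to exit ...
    # Force-close a hung virtual terminal session for partition 2
    rmvt -id 2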
191. (Tail of the abbreviations and acronyms list: Link Aggregation Groups; virtual LAN; very low profile; virtual machine; Virtual Management Channel; Virtual Network Computing; vital product data; Virtual Protocol Interconnect; virtual router redundancy protocol; Virtual Service Providers; Workload Partition; world wide; World Wide Name; World Wide Port Name; Extensible Markup Language.) Related publications. The publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book. IBM Redbooks: The following IBM Redbooks publications provide more information about the topics in this document. Some publications that are referenced in this list might be available in softcopy only: Product Guide: IBM Flex System p270 Compute Node, TIPS1018; IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984; Product Guide: IBM Flex System p24L, p260 and p460 Compute Nodes, TIPS0880; IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989; IBM Power Systems HMC Implementation and Usage Guide, SG24-7491; IBM PowerVM Best Practices, SG24-8062; IBM PowerVM Virtualization Introduction and Configuration, SG24-7940; IBM PowerVM Virtualization Managing and Monitoring, SG24-7590; IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194; IBM System Storage N series Reporting With O
192. M Corp 2013 All rights reserved 129 5 1 Planning your system An overview One of the initial tasks for your team is to plan for the successful implementation of your Power Systems compute node This planning includes ensuring that the primary reasons for acquiring the server are effectively planned for Consider the overall uses for the server the planned growth of your applications and the operating systems in your environment Correct planning for these issues ensures that the server meets the needs of your organization This section includes the following topics gt 5 1 1 Hardware planning on page 130 gt 5 1 2 Software planning on page 132 5 1 1 Hardware planning The following important topics should be considered during your planning activities gt Network connectivity On Power Systems compute nodes several models of expansion cards are available as described in 4 9 I O adapters on page 102 Make sure that you choose the correct expansion cards for your environment and chassis switches to avoid compatibility issues or performance constraints Consider network resilience overall throughput and ToR compatibility in the decision process for what model chassis switches are required and any associated license upgrades of them gt Fibre Channel and storage area network SAN connectivity The same considerations that are described for the network connectivity decision process also apply to Fibre C
193. (The firmware boot screen fills with repeated IBM logo text, reports the elapsed time since release of the system processors, and then shows yaboot starting and loading the kernel.) Figure 12-51 Reboot and VNC automatic restart. The installation and configuration continues with a prompt where the root password must be entered. Other installation windows open; enter values as needed for your environment, as in a normal operating system installation. 7. After the installation is complete, the Installation Completed screen opens, as shown in Figure 12-52. Click Finish. (The installer's progress list shows Welcome, System Analysis, Time Zone, and Installation complete, and the panel reports that the installation has completed successfully.)
194. (IVM showing ESA not activated.) Access to ESA from the IVM navigation area is done by clicking Electronic Service Agent under the Service Management category. New installations of VIOS/IVM require that ESA is activated. To activate the ESA feature, complete the following steps: 1. Configure and start ESA by logging in to the padmin user ID of the VIOS and running the cfgassist command. Select Electronic Service Agent, as shown in Figure 7-170, and press Enter. Figure 7-170 ESA configuration by using the cfgassist command (the Config Assist for VIOS menu also offers Set Date and TimeZone, Change Passwords, Set System Security, VIOS TCP/IP Configuration, Install and Update Software, Storage Management, Devices, Performance, Role Based Access Control (RBAC), and Shared Storage Pools). 2. Select Configure Electronic Service Agent, as shown in Figure 7-171, and press Enter (the Electronic Service Agent menu also offers Configure Service Connectivity, Start, Stop, and Verify Electronic Service Agent Connectivity). Figure 7-171 ESA configure option.
195. MA routing Zoning and other FC services As shown in Table 6 1 on page 169 the lower layers of FC are changed in FCoE but the upper layers are intact For example the forwarding of FCoE frames between a compute node and an IBM Flex System V7000 Storage Node are contained within the IBM Flex System Enterprise Chassis with the CN4093 10Gb Converged Scalable Switch providing the FCF switching functionality The CN4093 10Gb Converged Scalable Switch with its FCF function and FC ports can connect to external FC SANs In this case the CN4093 switch provides a gateway device function between FCoE and FC which transmits frames between the two types of networks and handles the encapsulation and de encapsulation process As shown in Figure 6 3 on page 168 the V7000 Storage Node can manage external storage controllers by using this capability to attach to FC SAN fabrics 6 1 5 FCoE port types 170 In an FCoE network virtual links are used across the lossless Ethernet network in place of the physical links in the FC network The host negotiates a connection to the FCF device across the Ethernet network by using the FIP The host end of this connection is called a VN_Port The FCF end is called the VF_Port Two FCFs can also negotiate an Inter Switch Link ISL across the Ethernet network in which case the virtual ISL has VE_Ports at both ends FCoE Initialization Protocol and snooping bridges In traditional FC networks with point to point
196. MM You can access the CMM by using SSH or a browser The browser method is described here Complete the following steps 1 Open a browser and point it to the following URL where system_name is the host name or IP address of the CMM The protocol to use is https not http https system_name 204 IBM Flex System p270 Compute Node Planning and Implementation Guide The window that is shown in Figure 7 8 opens Inactive session timeout ho timeout Licensed Materials Property of IBM Corp IBM Coporation and others 2011 IBM is a registered trademark of the IBM Corporation in the United States other countries or both Figure 7 8 CMM login window 2 Log in with your user ID and password The System Status window of the CMM opens as shown in Figure 7 9 on page 206 with the Chassis tab active If not click System Status from the menu bar at the top of the window Chapter 7 Power node management 205 IBM Chassis Management Module USERID Settings Log Out Help Vv System Status Multi Chassis Monitor Events Service and Support Chassis Management Mat Module Management ZA Systen nas nit vents Service and Suppi assis agem gt Module gem Fri 21 Jun 2013 17 52 mm ay mb SENT A Chassis Change chassis name System Information Chassis Graphical View Chassis Table View Active Events Figure 7 9 CMM opening view System Status The CMM web interface has a navigation menu
197. (Tail of the preceding comparison table: rows for Partition Mobility, Dynamic LPAR (DLPAR), and Manage Virtual Server.) Table 7-3 compares the capabilities of the different management devices. Although the CMM is technically not a Power-based compute node management device, it does have some unique capabilities in terms of power management that are not found on the other managers. Table 7-3 Power compute node platform manager comparison (the table compares FSM, HMC, IVM, and CMM across capabilities such as the number of compute nodes managed; powering the node or server on, off, and restarting it; managing servers and LPARs; dual VIOS support; and Live Partition Mobility). Table notes: a. HMC-compatible commands. b. BladeCenter AMM-compatible commands. c. Power off/restart only. d. Cannot start the VIOS LPAR; can stop or restart only the entire server. e. FSM-to-HMC or HMC-to-FSM supported. f. IVM to IVM only. g. Command line. h. With Inventory Scout. i. When used with IBM Systems Director and VMControl. j. Limited to setting Static power savings only. 7.7 Management by using a CMM: This section describes the basic steps of managing a Power-based compute node from the CMM. 7.7.1 Accessing the CMM: Before you begin, you need the IP address of the C
198. N2092 10 Gb Scalable Ethernet switch modules Two IBM Flex System 16 Gb FC5022 chassis SAN scalable switches One IBM Flex System V7000 Storage node Chapter 2 IBM PureFlex System 49 This service does not include the following features gt External SAN integration gt FCoE configuration changes gt Other chassis or switches Services descriptions The services descriptions that are described in this section including the number of service days do not form a contracted deliverable They are shown for guidance only In all cases contact an IBM Lab Services or your chosen Business Partner to define a formal statement of work 2 6 3 Software and hardware maintenance The following service and support offerings can be selected to enhance the standard support that is available with IBM PureFlex System gt Service and Support Software maintenance 1 year 9x5 9 hours per day 5 days per week Hardware maintenance 3 year 9x5 Next Business Day service 24x7 Warranty Service Upgrade gt Maintenance and Technical Support MTS Three years with one microcode analysis per year 2 7 IBM SmartCloud Entry for Flex system IBM SmartCloud Entry is an easy to deploy simple to use software offering that features a self service portal for workload provisioning virtualized image management and monitoring It is an innovative cost effective approach that also includes security automation basic metering and integrated p
199. Note that some compute nodes may not be allowed to power on if doing so would exceed the policy power limit. Power Module Redundancy with Compute Node Throttling Allowed: very similar to Power Module Redundancy; this policy allows for a higher power limit, but capable compute nodes may be allowed to throttle down if one power module fails. Basic Power Management: the maximum power limit is higher than in the other policies and is limited only by the nameplate power of all the power modules combined. This is the least conservative approach because it does not provide any protection against power source or power module failure; if any single power supply fails, compute node and/or chassis operation may be affected. (The policy table also notes that the failure limit column is the maximum number of power supplies that can fail while still guaranteeing the operation of the selected policy, and that the estimated utilization is based on the maximum power limit allowed in the policy and the current aggregated power in use by all components in the chassis.) Figure 3-6 Power Management Policies in CMM. In addition to the redundancy settings, a power limiting and capping policy can be enabled by the CMM to limit the total amount of power that a chassis requires. For more information about power supplies, see IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984, which is available at this website: http://www.redbooks.ibm.com/abstracts/sg247984.html
200. POWER architecture that allows the processor frequency to be varied to reduce power requirements. Figure 3-6 shows the available power management policies in the CMM (for each policy, the CMM lists the power supply failure limit, the maximum power limit in watts, and the estimated utilization). Power Source Redundancy (in the example shown: failure limit 3, maximum power limit 7515 W, estimated utilization 23%): intended for dual power sources into the chassis. Maximum power is limited to the capacity of half the number of installed power modules. This is the most conservative approach and is recommended when all power modules are installed. When the chassis is correctly wired with dual power sources, one power source can fail without affecting compute node operation. Note that some compute nodes may not be allowed to power on if doing so would exceed the policy power limit. Power Source Redundancy with Compute Node Throttling Allowed: very similar to Power Source Redundancy; this policy allows for a higher power limit, but capable compute nodes may be allowed to throttle down if one power source fails. Power Module Redundancy: intended for a single power source into the chassis, where each power module is on its own dedicated circuit. Maximum power is limited to one less than the number of power modules when more than one power module is present. One power module can fail without affecting compute node operation; multiple power module failures can cause the chassis to power off.
201. (The IVM main view shows the partition list with its uptime, memory, processor, entitled and utilized processing unit, and reference code columns; the partition task buttons, such as Create Partition, Activate, Shutdown, and More Tasks; and the navigation area with the IVM Management, Service Management, and Monitor Tasks categories.) Figure 7-7 IVM main view. Because IVM is a software solution that is running on the VIOS, it uses an enhanced VIOS command-line structure; HMC-compatible commands are run directly from the protected shell (padmin) of the VIOS. For more information, see Virtual I/O Server and Integrated Virtualization Manager commands, which is available at this website: http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/topic/p7hcg/p7hcg.pdf 7.5.3 IVM requirements: IVM is an integrated part of VIOS. Any supported version of VIOS on a Power Systems compute node can provide the IVM function. Because one of the goals of IVM is simplification of management, the following implicit rules apply to configuration and setup: The designat
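As a small, hedged illustration of that command-line structure (run from the padmin shell; the partition name is a placeholder), the same information that the IVM GUI shows can be queried and acted on with HMC-compatible commands:
    # List partitions with their current state
    lssyscfg -r lpar -F name,state
    # Shut down a client partition gracefully through its operating system
    chsysstate -r lpar -o osshutdown -n itsolpar2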
202. (The Sign On display prompts for the user, in this case QSECOFR, plus optional program/procedure, menu, and current library fields, and carries the standard U.S. Government Users Restricted Rights and (C) Copyright IBM Corp. 1980, 2009 notices.) Figure 11-25 Installation Console Sign On window. 15. The IPL Options window opens, as shown in Figure 11-26. The power-down abnormal status message is to be expected on an installation of the operating system and can be ignored. (The IPL Options display prompts for the system date (07/02/13), system time (09:33:00), and system time zone (Q0000UTC), and for Y/N choices to clear job queues, clear output queues, clear incomplete job logs, start print writers, start the system to restricted state, set major system options, and define or change the system at IPL; it also reports that the last power-down operation was ABNORMAL.) Figure 11-26 Installation IPL Options menu. 16. If you need to change system values, you can do so now. An example of a system value that you might change is the security level (QSECURITY) system value, to meet your s
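System values can also be reviewed or changed later from the IBM i command line. A hedged example follows; the value 40 is only an illustration, not a recommendation from this book:
    WRKSYSVAL SYSVAL(QSECURITY)                 /* Review the current security level            */
    CHGSYSVAL SYSVAL(QSECURITY) VALUE('40')     /* Change it; the change takes effect at next IPL */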
203. S such as the PureApplication System and PureData System and Infrastructure as a Service laaS which can be enabled with IBM PureFlex System This chapter includes the following topics 2 1 Introduction on page 16 2 2 Components on page 17 2 3 PureFlex solutions on page 20 2 4 IBM PureFlex System Express on page 22 2 5 IBM PureFlex System Enterprise on page 35 2 6 Services for IBM PureFlex System Express and Enterprise on page 47 2 7 IBM SmartCloud Entry for Flex system on page 50 YYYY V YV Yy Copyright IBM Corp 2013 All rights reserved 15 2 1 Introduction IBM PureFlex System provides an integrated computing system that combines servers enterprise storage networking virtualization and management into a single structure You can use its built in expertise to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management PureFlex System includes the following features gt Configurations that ease acquisition experience and match your needs gt Optimized to align with targeted workloads and environments gt Designed for cloud with the SmartCloud Entry option gt Choice of architecture operating system and virtualization engine gt Designed for simplicity with integrated single system management across physical and virtual resources gt Shipped as a single integrated entity directly to you gt Inc
204. (The diagnostics initial window, (C) 1982, 2012, all rights reserved, explains that these programs contain diagnostics, service aids, and tasks for the system; that the procedures should be used whenever problems occur that have not been corrected by any available software application procedures; that in general the procedures run automatically, although you are sometimes required to select options, tell the system when to continue, and do simple tasks; and that Enter continues the procedure or performs an action, Backspace corrects keying errors, the cursor keys select an option, and F3 exits.) Figure 7-159 Diagnostics initial window. 2. The function selection window that is shown in Figure 7-160 displays several options that are available in diagnostics. By using the down arrow key, move to Task Selection and press Enter. (The FUNCTION SELECTION (801002) menu offers Diagnostic Routines, which test the machine hardware without using wrap plugs or other advanced functions; Advanced Diagnostics Routines, which test the hardware by using wrap plugs and other advanced functions; and Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, and so on), which lists the tasks that are supported by these procedures.) Once
205. SC key return to previous screen Type menu item number and press Enter or select Navigation key 11 Figure 9 19 Network parameters configuration Chapter 9 Operating system installation methods 457 22 Perform system checks for example ping or adapter speed to verify your selections as shown in Figure 9 20 Version AF773_ 033 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved IP Parameters Interpartition Logical LAN U7954 24X 1077E3B V5 C4 T1 1 Client IP Address 9 27 20 216 2 Server IP Address 9 42 241 191 3 Gateway IP Address 9 27 20 1 4 Subnet Mask 255 255 252 0 Navigation keys M return to Main Menu ESC key return to previous screen X eXit System Management Services Type menu item number and press Enter or select Navigation key Figure 9 20 IP configuration sample 23 Press M to return to the SMS main menu see Figure 9 15 on page 455 458 IBM Flex System p270 Compute Node Planning and Implementation Guide 24 Select option 5 Select boot options to display the Multiboot screen Select option 1 Select Install Boot Device as shown in Figure 9 21 Version AF773_ 033 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Multiboot Select Install Boot Device Configure Boot Device Order Multiboot Startup lt OFF gt SAN Zoning Support Management Module Boot List Synchronization Navigation keys M return to Main Menu ESC key return to previous screen Ty
206. SM HMC and IVM management on page 202 7 7 Management by using a CMM on page 204 7 8 Management by using FSM on page 224 7 9 Management by using an HMC on page 265 7 10 Management by using IVM on page 299 Vvvvvvrvvrvyvyiyv 184 IBM Flex System p270 Compute Node Planning and Implementation Guide 7 1 Management network The IBM Flex System Enterprise Chassis is designed to provide separate management and data networks The management network is a private and secure Gigabit Ethernet network that is used to perform management related functions throughout the chassis including management tasks on compute nodes switches and the chassis The data network normally is used for operating system administrative and user access and applications The management network connection is externalized only through the CMM s network connection The data network is externalized through the external switch ports of the switch I O modules These switches and switch ports can be configured by using traditional methods The management network is shown in Figure 7 1 on page 186 blue lines It connects the CMM to the compute nodes the switches in the I O bays and the FSM The FSM connection to the management network is through a special Broadcom 5718 based management network adapter EthO The management networks in multiple chassis are connected through the external ports of the CMMs in each chassis via a GbE top of rack swi
207. SW7 IBM Flex System Fabric EN4093R 10Gb Scalable Switch EB28 IBM SFP SR Transceiver EB29 IBM SFP RJ45 Transceiver 3286 IBM 8 Gb SFP Software Optical Transceiver 3771 IBM Flex System FC5022 24 port 16Gb ESB SAN Scalable Switch 5370 Brocade 8 Gb SFP Software Optical Transceiver 9039 Base Chassis Management Module 3592 Other Chassis Management Module Chapter 2 IBM PureFlex System 39 2 5 3 Top of rack switches The PureFlex Enterprise configuration can consist of a compliment of six TOR switches two IBM System Networking RackSwitch G8052 two IBM System Networking RackSwitch G8264 and two IBM System Storage SAN24B 4 Express switches These switches are required in a multi chassis configuration and are optional in a single chassis configuration The TOR switch infrastructure is in place for aggregation purposes which consolidate the integration point of a multi chassis system to core networks Table 2 13 lists the switch components Table 2 13 Components of the Top of Rack Ethernet switches AAS feature XCC feature Description code code 1455 48E 7309 G52 IBM System Networking RackSwitch G8052R 1455 64C 7309 HC3 IBM System Networking RackSwitch G8264R 2498 B24 2498 24E IBM System Storage SAN24B 4 Express 2 5 4 Compute nodes The PureFlex System Enterprise requires one or more of the following compute nodes gt IBM Flex System p24L p260 p270 or p460 Compute Nodes IBM POWER or POWER7 based see Table 2 14 g
208. Select Option 3, Change TCP/IP attributes, and then press Enter. 3. At the IP datagram forwarding prompt, enter YES and then press Enter. 11.9.3 Configuring an interface: You must configure an IPv4 interface by assigning an IPv4 address to your network adapter. To configure a TCP/IP interface, complete the following steps: 1. From the CFGTCP menu, select Option 1, Work with TCP/IP interfaces, and then press Enter. 2. In the Work with TCP/IP Interfaces menu, type Option 1, Add, at the Opt prompt and press Enter to access the Add TCP/IP Interface menu. 3. At the Internet address prompt, specify a valid IPv4 address that you want to represent your system. 4. At the Line description prompt, specify the line name that you defined earlier. 5. At the Subnet mask prompt, specify a valid IPv4 address for the subnet mask and press Enter. 6. To start the interface, select Option 9, Start, on the Work with TCP/IP Interfaces menu for the interface that you configured, and press Enter. 11.9.4 Configuring a default route: Because your network can consist of many interconnected networks, you must define at least one route for your system to communicate with a remote system on another network. You must also add routing entries to enable TCP/IP clients that are attempting to reach your system from a remote network to function correctly.
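The same attribute, interface, and route definitions can be entered directly as CL commands rather than through the CFGTCP menus. A hedged sketch only; the address, the line description name ETHLINE, and the gateway are placeholders for the values planned for your network:
    CHGTCPA   IPDTGFWD(*YES)                                       /* Enable IP datagram forwarding */
    ADDTCPIFC INTNETADR('192.168.20.12') LIND(ETHLINE) +
              SUBNETMASK('255.255.255.0')                          /* Define the IPv4 interface     */
    STRTCPIFC INTNETADR('192.168.20.12')                           /* Start the interface           */
    ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('192.168.20.1') +
              SUBNETMASK(*NONE)                                    /* Add a default route           */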
209. Select a virtual optical device in the table to assign it to the new partition; clear the selection for a device if you do not want to assign it to the partition. Click Modify to change the media for a specific optical device, or click Create Device to add a new optical device for the partition. Figure 8-71 IVM Create Partition: Optical/Tape window. 10. Click Next to open the Summary window. 11. As shown in Figure 8-72, the Summary window lists all the options and actions that were selected in the previous windows. If any changes are wanted, click Back to move to the wanted window. In the Summary window, click Finish to complete the Partition Creation wizard and return to the View/Modify view of IVM. (The summary in this example shows system name Server-7954-24X-SN107782B, partition ID 2, partition name itsolpar2, environment AIX or Linux, dedicated memory mode with 4 GB (4096 MB) of memory, 4 virtual processors, 1 virtual Ethernet, no Host Ethernet Adapter ports, storage of 1 GB (1024 MB) on lpar2rootvg, and no optical devices; the partition can be modified by using the partition properties task after the wizard completes.) Physical
210. (The Servers table shows the managed server Server-7954-24X-SN107782B with its reference code and a state of Standby.) Figure 7-108 HMC managed server Power On status messages. Powering off a running server is started the same way as the Power On process, from the task button or task list that is presented by selecting a server, as shown in Figure 7-109. Click Operations → Power Off. (The Systems Management → Servers view shows the server's available processing units and memory, its reference code, and the task categories Properties, Operations (including Power Off, Power Management, LED Status, Schedule Operations, Launch Advanced System Management, Utilization Data, and Rebuild), Configuration, Connections, Hardware Information, Updates, Serviceability, and Capacity On Demand (CoD).) Figure 7-109 HMC managed server Power Off. Figure 7-110 shows the Power Off server window that opens
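The same power operations are available from the HMC command line. A brief hedged sketch; the managed-system name is a placeholder, and lssyscfg and chsysstate are standard HMC commands:
    # Check the current state of the managed server
    lssyscfg -r sys -m Server-7954-24X-SN107782B -F name,state
    # Power the managed server on, or power it off
    chsysstate -r sys -m Server-7954-24X-SN107782B -o on
    chsysstate -r sys -m Server-7954-24X-SN107782B -o off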
211. System and PureFlex IBM i Solution PureFlex FCoE Customization Service and PureFlex Services for IBM i 2 6 1 PureFlex FCoE Customization Service This new services customization is one day in length and provides the following features gt vy y Design a new FCoE solution to meet customer requirements Change FCoE VLAN from default Modify internal FCoE Ports Change FCoE modes and Zoning The prerequisite for the FCoE customization service is PureFlex Intro Virtualized or Cloud Service and that FCoE is on the system Limited two pre configured switches in the single chassis no External SAN configurations other chassis or switches are included 2 6 2 PureFlex Services for IBM i This package offers five days of support the IBM i PureFlex Solution IBM performs the following PureFlex Virtualized services for a single Power node gt Provisioning of a virtual server through VMControl basic provisioning for the Power node Prepare capture and deploy an IBM i virtual server Perform System Health and Monitoring with basic Automation Plans Review Security and roles based access Services on a single x86 node Verify VMware ESXi installation create a virtual machine VM and install a Windows Server operating system on the VM Install and configure vCenter on the VM This service includes the following prerequisites gt gt One p460 Power compute node Two IBM Flex System Fabric E
212. The IBM FSM 7955-01M includes the following features:
► Intel Xeon E5-2650 8C 2.0 GHz, 20 MB, 1600 MHz, 95 W
► 32 GB of 1333 MHz RDIMM memory
► Two 200 GB 1.8-inch SATA MLC SSDs in a RAID 1 configuration
► 1 TB 2.5-inch SATA 7.2 K RPM hot-swap 6 Gbps HDD
► IBM Open Fabric Manager
► Optional FSM advanced, which adds the VMControl Enterprise license

2.4.5 PureFlex Express storage requirements and options

The PureFlex Express configuration requires a SAN-attached storage system. The following storage options are available:
► IBM Storwize V7000
► IBM Flex System V7000 Storage Node

The required number of drives depends on drive size and compute node type. All storage is configured with RAID 5 with a single hot spare that is included in the total number of drives. The following configurations are available:
► Power Systems compute nodes only: 16 x 300 GB or 8 x 600 GB drives
► Hybrid (Power and x86): 16 x 300 GB or 8 x 600 GB drives
► Multi-chassis configurations require 24 x 300 GB drives

SmartCloud Entry is optional with Express; if it is selected, the following drives are available:
► x86-based nodes only, including SmartCloud Entry: 8 x 300 GB or 8 x 600 GB drives
► Hybrid (both Power and x86) with SmartCloud Entry: 16 x 300 GB or 600 GB drives

Solid-state drives (SSDs) are optional. However, if they are added to the configuration, they are normally used for the V7000 Easy Tier function, which improves system performance.
213. The value that is specified here is the wanted value. Minimum and maximum values can be edited after the virtual servers are created, as described in 8.5.3, "Modifying the VIOS profile" on page 399.

2. Click Next to proceed to the processor settings. The window that is shown in Figure 8-11 opens.

Figure 8-11 Setting the processor characteristics for the VIOS virtual server

We choose to allocate four dedicated processors for itsoVIOS6A. Select the Dedicated option and enter the value.

Specifying processor units: When a shared processor from a processor pool is used, you cannot specify processing units (entitlement), either uncapped, capped, or weight. These values can be edited after the virtual servers are created, as described in 8.5.3, "Modifying the VIOS profile" on page 399.

No memory or processing resources are committed. In this step, and in the rest of the steps for defining the virtual server, we are defining only the resources that are
214. These solutions which can be selected within the IBM configurators for ease of ordering are integrated at the IBM factory before they are delivered to the client Services are also available to complement these PureFlex Solutions offerings 2 3 1 PureFlex Solution for IBM i The IBM PureFlex System Solution for IBM i is a combination of IBM i and an IBM PureFlex System with POWER and x86 processor based compute nodes that provide an integrated business system By consolidating their IBM i and x86 based applications onto a single platform the solution offers an attractive alternative for small and mid size clients who want to reduce IT costs and complexity in a mixed environment The PureFlex Solution for IBM i is based on the PureFlex Express offering and includes the following features gt Complete integrated hardware and software solution Simple one button ordering fully enabled in configurator All hardware is pre configured integrated and cabled Software preinstall of IBM i OS PowerVM Flex System Manager and V7000 Storage software gt Reliability and redundancy IBM i clients demand Redundant switches and I O Pre configured Dual VIOS servers Internal storage with pre configured drives RAID and Mirrored gt Optimally sized to get started quickly p260 compute node that is configured for IBM i x86 compute node that is configured for x86 workloads deal for infrastructure consolida
215. VIOS and click Actions → System Configuration → Manage Profiles, as shown in Figure 8-47.

Figure 8-47 Manage VIOS profiles to change settings from FSM

A window opens and shows all of the profiles that are available for the selected virtual server. Select the profile to edit and click Actions → Edit, or click the profile name. Click the Processors tab to access the processor settings that were made by the Virtual Server Creation wizard. The window that is shown in Figure 8-48 opens. Options can be changed in this window to the values that are planned for the VIOS virtual server. Change the minimum, desired, and maximum values as needed.
216. Figure 5-11 Eight 80 mm fan modules support 7-14 nodes

5.8.2 Supported environment

The p270 and the Enterprise Chassis comply with ASHRAE Class A3 specifications. The supported operating environment includes the following specifications:
► 5 - 40°C (41 - 104°F) at 0 - 914 m (0 - 3,000 ft)
► 5 - 28°C (41 - 82°F) at 914 - 3,050 m (3,000 - 10,000 ft)
► Relative humidity: 8 - 85%
► Maximum altitude: 3,050 m (10,000 ft)

5.9 Planning for virtualization

Power Systems compute nodes provide features that are available in high-end POWER servers, such as virtualization, when connected to the IBM Flex System Manager or an HMC. You can use virtualization to create and manage partitions and make full use of the PowerVM virtualization features, such as IBM Micro-Partitioning, Active Memory Sharing (AMS), N_Port ID Virtualization (NPIV), and Live Partition Mobility (LPM).

To partition your Power Systems compute node, it must be attached to the IBM Flex System Manager, HMC, or IVM. The process that is used to connect your Power Systems compute node to both nodes is described in 8.4, "Planning for a virtual server environment" on page 346. The key element for planning your partitioning is knowing the hardware that you have in your Power Systems compute node, because that hardware is the only limit that you have for your partitions. Adding VIOS to the equation solves many of t
217. WER7 gt SLES v11SP2 2 4 9 Available software for x86 based compute nodes x86 based compute nodes can be ordered with VMware ESXi 5 1 hypervisor preinstalled to an internal USB key Operating systems that are ordered with x86 based nodes are not preinstalled The following operating systems are available for x86 based nodes Microsoft Windows Server 2008 Release 2 Microsoft Windows Server Standard 2012 Microsoft Windows Server Datacenter 2012 Microsoft Windows Server Storage 2012 RHEL SLES YYYY Y Y 34 IBM Flex System p270 Compute Node Planning and Implementation Guide 2 5 IBM PureFlex System Enterprise The tables in this section show the hardware software and services that make up IBM PureFlex System Enterprise We describe the following items 2 5 1 Enterprise configurations 2 5 2 Chassis on page 39 2 5 3 Top of rack switches on page 40 2 5 4 Compute nodes on page 40 2 5 5 IBM FSM on page 41 2 5 6 PureFlex Enterprise storage options on page 41 2 5 7 Video keyboard and mouse option on page 44 2 5 8 Rack cabinet on page 45 2 5 9 Available software for Power Systems compute node on page 46 2 5 10 Available software for x86 based compute nodes on page 46 Vvvvvvrovrvyvvvy Y To specify IBM PureFlex System Enterprise in the IBM ordering system specify the indicator feature code that is listed in Table 2 10 for each machine type Table 2 10 Enter
218. X-SN107782B --filter "lpar_names=itsoVIOS6A"

HMC CLI method

The following sections show an example of the use of the HMC CLI to create a virtual server for a VIOS.

Accessing the HMC

To access the HMC, you must know the IP address or host name of the HMC and have a valid user ID and password. You must start an SSH session with the HMC and log in.

Creating the VIOS virtual server by using the CLI

The HMC uses the same command syntax and options as the FSM. The command that is used in this example is the same as used on the FSM; the only difference is the removal of the smcli prefix.

HMC usage: The HMC command lssyscfg -r sys -F name can be used to display a list of all managed systems on the HMC.

To create a VIO Server by using a single command, run the following command:

mksyscfg -r lpar -m Server-7954-24X-SN107782B -i "name=itsoVIOS6A, profile_name=itsoVIOS6A_new, lpar_env=vioserver, lpar_id=1, min_mem=2048, desired_mem=8192, max_mem=10240, proc_mode=ded, min_procs=2, desired_procs=4, max_procs=6, sharing_mode=share_idle_procs_active, auto_start=0, lpar_io_pool_ids=1,2, \"io_slots=2101021A/none/1,21010218/none/1,21010238/none/1,21010219/none/0\", max_virtual_slots=300, \"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1\", \"virtual_scsi_adapters=5/server/2//102/0\", \"virtual_eth_adapters=2/1/4091//1/1/ETHERNET0//all/none,3/1/4092//1/1/ETHERNET0//all/none
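After the command completes, the result can be checked from the same SSH session. This is a minimal sketch; the managed system name is the one used in the example above, and the -F attribute list is only one possible selection:

   # Confirm that the partition was created and check its basic attributes
   lssyscfg -r lpar -m Server-7954-24X-SN107782B -F name,lpar_id,lpar_env,state

   # Display the profile that was created with the partition
   lssyscfg -r prof -m Server-7954-24X-SN107782B --filter "lpar_names=itsoVIOS6A"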
219. XT12 type fc
Jun 20 13:31:42 fd8c:215d:178e:c0de:7699:75ff:fe70:42ef NOTICE lldp: LLDP TX & RX are disabled on port EXT11
Jun 20 13:31:42 fd8c:215d:178e:c0de:7699:75ff:fe70:42ef NOTICE lldp: LLDP TX & RX are disabled on port EXT12
Router(config)# vlan 1002
VLAN 1002 is created.
Router(config-vlan)# member INTA13,INTA14,INTA8
Port INTA8 is an UNTAGGED port and its PVID is changed from 1 to 1002
Port INTA13 is an UNTAGGED port and its PVID is changed from 1 to 1002
Port INTA14 is an UNTAGGED port and its PVID is changed from 1 to 1002
Router(config-vlan)# member EXT11,EXT12
Router(config-vlan)#

Example 6-2 uses the show vlan command, which shows all ports were successfully added to VLAN 1002 with the VLAN enabled.

Example 6-2 Display VLAN and membership
VLAN  Name          Status  MGT  Ports
1     Default VLAN  ena     dis  INTA1-INTB14 EXT1-EXT16
1002  VLAN 1002     ena     dis  INTA8 INTA13 INTA14 EXT11 EXT12
4095  Mgmt VLAN     ena     ena  EXTM MGT1

The next step is to enable FCF. Example 6-3 on page 178 shows the fcf enable ISCLI command run; on completion, FCoE connections are established.

Example 6-3 Enabling FCF
Router(config)# fcf enable
Router(config)#
Jun 20 17:11:03 fd8c:215d:178e:c0de:7699:75ff:fe70:42ef NOTICE fcoe: FCOE connection between VN_PORT 0e:fc:00:01:0c:00 and FCF 74:99:75:70:41:c3 has been established
Jun 20 17:11:08 fd8c:215d:178e:c0de:7699:75ff:fe70:42ef NOTICE fcoe: FCOE connectio
220. a slightly different composition of software defaults than Express, which are summarized in Table 2-2.

Table 2-2 PureFlex software defaults overview
► Storage: Storwize V7000 or Flex System V7000; Base; Real Time Compression optional
► Flex System Manager (FSM): Express - FSM Standard, upgradeable to Advanced; Enterprise - FSM Advanced, selectable to Standard
► IBM Virtualization: Express - PowerVM Standard, upgradeable to Enterprise; Enterprise - PowerVM Enterprise, selectable to Standard
► Virtualization (customer installed): VMware, Microsoft Hyper-V, KVM (Red Hat and SUSE Linux)
► Operating systems: AIX Standard V6 and V7; IBM i 7.1, 6.1; RHEL 6; SUSE SLES 11; customer installed: Windows Server, RHEL, SLES
► Security: PowerSC Standard (AIX only)
► Tivoli Provisioning Manager (x86 only): Standard, one year, upgradeable to three years

a. Advanced is required for Power Systems

2.2.1 Configurator for IBM PureFlex System

For the latest Express and Enterprise PureFlex System offerings, the IBM Configurator for e-business (e-config) tool must be used. Configurations that are composed of x86 and Power Systems compute nodes are configurable. The e-config configurator is available at this website:
http://ibm.com/services/econfig/announce
221. ab view is displayed, as shown in Figure 7-33 on page 226. All functions of the FSM can be accessed from this view with the following second row of tabs:
► Initial Setup
► Additional Setup
► Plug-ins
► Administration
► Applications
► Learn

The Initial Setup tab lists the initial setup tasks for IBM Flex System Manager, such as Check and Update Flex System Manager, Select Chassis to be Managed, Configure Chassis Components, and Deploy Compute Node Images.
222. Interfaces and servers with AUTOSTART(*YES) are started with the STRTCP command. The basic installation process is now complete for your IBM i virtual server.

Chapter 12. Installing Linux

In this chapter, we describe how to install SUSE Linux Enterprise Server and Red Hat Enterprise Linux on the IBM Flex System p270 Compute Node. The following topics are included in this chapter:
► 12.1, "IBM Installation Toolkit for PowerLinux" on page 554
► 12.2, "Installing Red Hat Enterprise Linux" on page 581
► 12.3, "Installing SUSE Linux Enterprise Server" on page 592

12.1 IBM Installation Toolkit for PowerLinux

To use all of the capabilities of the p270 and IBM PowerVM virtualization, some software RPM packages must be added to the standard Linux distribution software. This set of RPM packages is called Service and Productivity Tools for PowerLinux Servers. These packages can be downloaded and installed manually, but they vary with the distribution (SUSE Linux Enterprise Server or Red Hat Enterprise Linux) and with the version of the distribution, and they are regularly updated. Figure 12-1 shows an example of an issue that is caused by missing packages: in the Change virtual server (DLPAR) panel in FSM, some daemons are missing, RMC is not available, and DLPAR operations
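As a rough illustration of the manual route on a Red Hat Enterprise Linux virtual server, the following sketch shows how the tools are commonly added after the repository package is downloaded from the IBM Service and Productivity Tools website. The package names and paths are assumptions and vary by distribution and release:

   # Install the IBM Tools repository package (downloaded beforehand) and enable it
   rpm -ivh ibm-power-repo-*.noarch.rpm
   /opt/ibm/lop/configure

   # Install the service aids that RMC and dynamic LPAR operations depend on
   yum install -y src rsct.core rsct.core.utils devices.chrp.base.ServiceRM DynamicRM

   # Verify that the RSCT (RMC) subsystems are active
   lssrc -a | grep rsct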
223. add the system.

Figure 7-101 Managed system add confirmation

5. The work pane is updated with the added server, as shown in Figure 7-102.

Figure 7-102 Work pane that is updated with the new managed system

If the password that is entered is incorrect, you see a Failed Authentication message in the Status column and Incorrect LDAP password in the Reference column, as shown in Figure 7-103.
224. affic remains internal to the IBM Flex System Enterprise Chassis it does not have to rely on any external SAN equipment for its switching or redirection 174 IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 6 5 shows VLAN 1002 which was created and includes external ports EXT11 and EXT12 with internal ports INTA13 and INTA14 from the V7000 Storage Node The storage node is in node bays 11 14 in the IBM Flex System Enterprise Chassis so INTA11 INTA14 are available for this VLAN of which INTA13 and INTA14 were selected The port from the Compute Node 8 INTA8 also was included in the Fibre Channel VLAN VT000 Mode Canister 2 Compute Node bay amp VTOO0 Mode Canister 1 O INTAS INTAIA O VLAN 1002 106Gb FCoE 8Gb Fibre Channel Extemal SAN connectivity via Omni Ports EXT11 EXT12 Figure 6 5 FCoE VLAN 1002 configuration with internal and external members With this VLAN created FCoE zones can be configured to map compute node 8 to the V7000 Storage Node via internal ports INTA13 and INTA14 and to external storage devices via EXT11 or EXT12 The connectivity between compute node 8 and the V7000 is FCoE as the internal physical layers are Ethernet based Any connection that is outbound to external storage via EXT11 or EXT 12 traffic is de encapsulated by using FCF as the Omni ports in this VLAN are set to Fibre Channel Any inbound FC traffic that is going to compute node 8 is encap
225. age You can use the storage capabilities of IBM Flex System to gain advanced functionality with the IBM Flex System V7000 Storage Node or the IBM V7000 Storwize in your system while making use of your existing storage infrastructure through advanced virtualization IBM Flex System simplifies storage administration by using a single user interface for all your storage through a management console that is integrated with the comprehensive management system You can use these management and storage capabilities to virtualize third party storage with nondisruptive migration of the current storage infrastructure You can also make use of intelligent tiering so you can balance performance and cost for your storage needs The solution also supports local and remote replication and snapshots for flexible business continuity and disaster recovery capabilities 1 4 7 Networking With a range of available adapters and switches to support key network protocols you can configure IBM Flex System to fit in your infrastructure while still being ready for the future The networking resources in IBM Flex System are standards based flexible and fully integrated into the system so you get no compromise networking for your solution Network resources are virtualized and managed by workload These capabilities are automated and optimized to make your network more reliable and simpler to manage Chapter 1 Introduction 11 The following key capabilities are inc
226. The following tasks are basic system administration actions that are required to perform basic management of a Power compute node.

Hardware power on or off

A Power compute node can be in a powered-off state while in the chassis. However, the FSP is always active and ready to accept instructions from a platform manager (the CMM) or from the ASMI user interface to the FSP directly. With IVM-managed systems, the platform manager is not active unless the VIOS is running. The powering on of a Power compute node can be done only by the CMM or ASMI interface with IVM-managed systems.

CMM method

Complete the following steps to use the CMM method:
1. On the CMM, a Power compute node can be powered up from the System Status window by clicking the wanted node and then clicking Power On from the Actions menu, as shown in Figure 7-134. The Actions menu for the node also includes Power Off, Shutdown OS and Power Off, and Restart Immediately.

Figure 7-134 CMM System Status compute node actions

An alternative wa
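The same action can also be scripted through the CMM command-line interface over SSH. The following is a minimal sketch; the bay number (blade[6] for the p270 in bay 6 of the example chassis) is an assumption that must match your own chassis layout:

   # List the chassis components and their bay numbers
   system> list -l 2

   # Power on the compute node in bay 6
   system> power -on -T blade[6]

   # Query the power state of the same node
   system> power -state -T blade[6]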
227. maint - perform software maintenance
reset - reset an object's NIM state
fix_query - perform queries on installed fixes
check - check the status of a NIM object
reboot - reboot specified machines
maint_boot - enable a machine to boot in maintenance mode
showlog - display a log in the NIM environment
lppchk - verify installed filesets
restvg - perform a restvg operation

Figure 9-12 Operation on machine selection

14. Confirm your machine selection and option selection in the next window and select other options to further customize your installation, as shown in Figure 9-13.

Perform a Network Install
Type or select values in entry fields. Press Enter AFTER making all desired changes.
   Target Name: 7954AIXtest
   Source for BOS Runtime Files: rte
   installp Flags: -agX
   Fileset Names:
   Remain NIM client after install: yes
   Initiate Boot Operation on Client: yes
   Set Boot List if Boot not Initiated on Client: no
   Force Unattended Installation Enablement: no
   ACCEPT new license agreements: yes

Figure 9-13 Base Operating System (BOS) install
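For reference, the same BOS installation can be started from the NIM master command line instead of SMIT. This is a hedged sketch: the lpp_source and SPOT resource names are assumptions and must be replaced with the resources that are defined in your own NIM environment:

   # Allocate resources and start an rte BOS installation for the client
   nim -o bos_inst -a source=rte \
       -a lpp_source=lpp_source_71 \
       -a spot=spot_71 \
       -a accept_licenses=yes \
       -a boot_client=no \
       7954AIXtest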
228. al should be on an FTP server that can be accessed by the HMC during the update process.

Installing the system firmware update

Complete the following steps to install the system firmware update:
1. Click Servers in the navigation pane, then select the wanted server from the work area.
2. Click Now Visible and then click Updates → Change Licensed Internal Code for the current release, as shown in Figure 7-118. This option updates the system firmware to a new service pack within the same release. The Upgrade Licensed Internal Code to a new release option is used, for example, when moving from 01AF773_xxx to 01AF776_xxx.
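Before and after the service pack is applied, the installed firmware levels can be checked from the HMC command line. A minimal sketch, assuming the managed system name used elsewhere in this book:

   # Show the Licensed Internal Code (firmware) levels for the managed system
   lslic -m Server-7954-24X-SN107782B -t sys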
229. allation method: VIOS, RHEL, SLES, IBM i
► Optical (physical or VIOS virtual optical drive)
► NIM (TFTP or BOOTP)
► Cloning (alt_disk_copy or alt_disk_mksysb in AIX)
► Installios (HMC and FSM only)

Table notes:
a. FSM and HMC: two VIOS supported. IVM: only one VIOS supported.
b. Only physical optical drives are supported.
c. With the additional toolset in the IBM Installation Toolkit for PowerLinux. For more information, see 12.1, "IBM Installation Toolkit for PowerLinux" on page 554.

9.2 Accessing System Management Services

In this section, we describe how to access the System Management Services (SMS) menu for installation tasks for the VIOS, AIX, and PowerLinux operating systems. The IBM i operating system does not use the SMS menu and has a separate console system.

The SMS menu system is run by the Flexible Service Processor (FSP) in the server hardware. The SMS is used to view information about the system or partition and to perform tasks such as changing the boot list and setting network parameters. Access to SMS from the FSM or Hardware Management Console (HMC) is through a Java-based virtual terminal console that is started from the GUI, or a secure shell (SSH) session by using the vtmenu command. Integrated Virtualization Manager (IVM) managed systems use Serial over LAN (SOL) through the Chassis Management Module (CMM) to access the SMS for the VIOS partition.
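As a hedged illustration of the SSH route, the following sketch opens a virtual terminal from an HMC command-line session (vtmenu is also available on the FSM CLI); the managed system and partition names are the ones used in the examples in this book and must be adjusted for your environment:

   # Interactive menu: pick the managed system, then the partition
   vtmenu

   # Or open a console for a specific partition directly
   mkvterm -m Server-7954-24X-SN107782B -p itsoVIOS6A

   # Close a hung console session for that partition
   rmvterm -m Server-7954-24X-SN107782B -p itsoVIOS6A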
230. allation server.

2. On the installation server, install the tftp and the dhcpd server packages (we use dhcpd to run bootp on a specific MAC address).

3. Copy the yaboot executable file from the DVD directory /ppc/chrp to the tftpboot directory on the installation server (/var/lib/tftpboot).

Tip: The yaboot executable file is named yaboot. We can rename it (for example, yaboot.rh6x) to avoid conflicts in the tftpboot directory.

4. The netboot image is larger than 65,500 512-byte blocks and cannot be used because of a limitation of tftpd. We must boot the vmlinuz kernel and use the ramdisk image. Copy the two files from the /ppc/ppc64 directory of the DVD to the tftpboot directory of the installation server.

5. On the installation server, create a directory named tftpboot/etc and create a file named 00-XX-XX-XX-XX-XX-XX, replacing all characters except the 00 with the target virtual server MAC address, as shown in Figure 9-59.

default=rh61
timeout=100

image=vmlinuz
	initrd=ramdisk.image.gz
	label=rh61

Figure 9-59 00-XX-XX-XX-XX-XX-XX file

6. The dhcpd.conf file is shown in Figure 9-60, and it is similar to the SLES version. Change the network addresses, MAC address, and the IP configuration to your environment settings.

allow bootp;
deny unknown-clients;
not authoritative;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;

subnet 192.168.20.0 netmask 255
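After the files are in place, the two services must be running on the installation server. The following is a minimal sketch for a RHEL 6 based installation server (the tftp server is started through xinetd after setting "disable = no" in /etc/xinetd.d/tftp); adjust the commands for your own distribution:

   # Enable and restart the tftp service through xinetd
   chkconfig xinetd on
   service xinetd restart

   # Enable and start the DHCP/bootp service with the configuration shown above
   chkconfig dhcpd on
   service dhcpd restart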
231. allocated to this virtual server after it is activated 3 Click Next to move to the virtual adapter definitions Virtual Ethernet In this task the process is repeated for each virtual adapter to be defined on the VIOS but the characteristics differ from each adapter type The order in which the adapters are created does not matter 362 IBM Flex System p270 Compute Node Planning and Implementation Guide Be sure to double check your planning documentation to ensure that you are specifying the correct VLAN IDs for the virtual Ethernet adapters that the virtual SCSI client and server adapters are correctly linked and that the WWPN of the virtual Fibre Channel adapters is noted and provided to the SAN administrators If you performed the steps that are described in Memory and processor settings on page 361 you should see the window that is shown in Figure 8 12 Two virtual Ethernet adapters are created by default The adapters can be edited deleted or more can be added In this example we edit the two default adapters and add a third onfigure the virtual network adapters for the virtual server Physical I O network adapters can be selected later in the Wsical I O page of this wizard Two virtual Ethernet adapters will be created by default however you can add edit or move adapters to suite your needs irtual Ethernet Select Adapter ial Port VLAN ID Bridge Priority a E og F No Figure 8 1
232. alues. The changes will be applied; updating the current and pending values might take some time.

Figure 8-59 IVM Partition Properties: Processing tab

11. Click the Ethernet tab, as shown in Figure 8-60, to view the existing virtual Ethernet adapters. IVM creates four adapters by default. More virtual Ethernet adapters can be created from this tab, if needed.

Figure 8-60 IVM Partition Properties: Ethernet tab

IVM limitation: The first four default virtual Ethernet adapters cannot be deleted or modified. New virtual Ethernet adapters can be created only with a Virtual Ethernet ID (PVID) value by using the GUI. More VLANs
233. alysis activities require a dedicated system console The POWER Hypervisor provides the virtual console by using a virtual TTY or serial adapter and a set of Hypervisor calls to operate on it Virtual TTY does not require the purchase of any other features or software such as the PowerVM Edition features For Power Systems compute nodes the operating system console can be accessed from IBM Flex System Manager 8 4 Planning for a virtual server environment 346 The IBM Flex System Manager FSM HMC or IVM can be used to create virtual servers or LPARs on Power Systems compute nodes It is presumed that FSM or HMC is set up so that it can manage the compute nodes on which the virtual servers or LPARs are created Because IVM is integral with the Power Systems compute node installation of VIOS IVM is always the first step when this system manager is used Any experience that uses the IVM HMC FSM or the Systems Director Management Console to create LPARs or virtual servers on Power system should easily transfer when any of these platform managers are used The PowerVM concepts are always the same regardless of the manager however the user interface varies how they are presented Removing an existing configuration IBM Flex System configurations typically are delivered with a full system single partition that is defined for AIX This LPAR or virtual server can be deleted when the initial configuration of the node is done for PowerVM
234. am that implements IBM PureFlex System solutions and supports clients in implementing IBM Power Systems blades that use VIOS IVM and AIX He was the Systems Integration Test Team Lead for the IBM BladeCenter JS21 blade with IBM SAN storage that uses AIX and Linux Kerry began his career with IBM supporting NASA at the Johnson Space Center as a Systems Engineer He transferred to Austin in 1993 Kerry has authored five other IBM Redbooks publications Simon Casey is an IT specialist working in the Power Systems and Flex Systems team for IBM UK based in Hursley With over a decade of Power Systems client experience in the Financial Services sector he is now part of IBM s core team that implements Flex System and PureFlex System solutions including proof of concepts for clients He specializes in IBM i for Flex enterprise storage PowerHA and datacenter migrations Xiv IBM Flex System p270 Compute Node Planning and Implementation Guide Fabien Willmann is an IT Specialist working for IBM Techline Europe in France After teaching hardware courses on Power Systems servers he joined ITS in 2006 as an AIX consultant where he developed his competencies in AIX Hardware Management Console management and PowerVM virtualization His expertise today is building new Power Systems configurations and upgrades for Systems and Technology Group presales including BladeCenter and PureSystems He participates as a speaker to the Symposium for
235. ample that is shown in Figure 7-84.

Figure 7-84 Test connection to IBM event log entry

7.9 Management by using an HMC

This section describes the basic management of a Power compute node by using an HMC. The assumption is that the HMC is operational and is ready to configure an Ethernet adapter for communication on the same network as the CMM.

7.9.1 Accessing an HMC

This section describes how to access and perform basic navigation on an HMC web-based user interface to complete tasks on Power compute nodes. The HMC web interface supports the following browsers:
► Internet Explorer 6.0, 7.0, 8.0, and 9.0
► Firefox 4, 5, 6, 7, 8, 9, and 10

Starting the HMC

Start the HMC by setting the display and system units to the On position. When the HMC completes the boot process, you see the Welcome window on the local console, as shown in Figure 7-85. This page includes the link to log on, to view the online help, and the summarized HMC status information. Th
236. and apply online at http www ibm com redbooks residencies html Comments welcome Your comments are important to us We want our books to be as helpful as possible Send us your comments about this book or other IBM Redbooks publications in one of the following ways gt Use the online Contact us review Redbooks form found at http www ibm com redbooks gt Send your comments in an email to redbooks us ibm com gt Mail your comments to IBM Corporation International Technical Support Organization Dept HYTD Mail Station P099 2455 South Road Poughkeepsie NY 12601 5400 xvi IBM Flex System p270 Compute Node Planning and Implementation Guide Stay connected to IBM Redbooks gt Find us on Facebook http www facebook com IBMRedbooks gt Follow us on Twitter http twitter com ibmredbooks gt Look for us on LinkedIn http www 1inkedin com groups home amp gid 2130806 gt Explore new Redbooks publications residencies and workshops with the IBM Redbooks weekly newsletter https www redbooks ibm com Redbooks nsf subscribe 0penForm gt Stay current on recent Redbooks publications with RSS Feeds http www redbooks ibm com rss html Preface xvii xviii IBM Flex System p270 Compute Node Planning and Implementation Guide Introduction During the last 100 years information technology moved from a specialized tool to a pervasive influence on nearly every aspect of life From tabulating machines t
237. and IPv6 address information for the components below. The page lists the I/O modules, compute nodes, and storage nodes in the chassis with their bay numbers, whether IPv4 is enabled, and a View link for the IP address details (in this example, I/O Modules 1 - 4, x240 compute nodes in bays 2, 3, 4, and 7, the p270 in bay 6, and no storage nodes).

Figure 7-17 Component IP Configuration

From this view, the IP configuration information of the I/O modules, compute nodes, and storage nodes can be reviewed by clicking the View option of the wanted node, as shown in Figure 7-18.
238. Accessing the Integrated Virtualization Manager

The IVM command line is combined with the VIOS padmin user ID command line and cannot be accessed until after the VIOS is installed. To access the VIOS, you must know the IP address or host name of the VIOS and have a valid user ID and password. Telnet and SSH protocols are enabled by default for the VIOS session login. This example shows the creation of an AIX LPAR with virtual adapters from the CLI.

8.5.2 GUI methods

The FSM, HMC, and IVM all provide a GUI to create and manage resources. The following sections follow the same example that was previously created with the CLI interfaces. The following methods are described in this section:
► FSM GUI method
► HMC GUI method on page 373
► IVM GUI method on page 398

FSM GUI method

This section describes the sequence to create a virtual server or LPAR with the same resources used in "FSM CLI method" on page 351, but with the FSM GUI instead.

Accessing the IBM Flex System Manager

IBM Flex System Manager can be accessed in one of the following ways:
► Locally, with a keyboard, mouse, and monitor that are attached directly to the port at the front panel of the FSM through the Console Breakout Cable
► Through a web browser to the FSM web interface

We accessed the FSM remotely by using a browser. Complete the following steps:
1. Open a browser and enter the following URL, where
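As a hedged illustration of what such an IVM CLI example looks like (the actual command used in this guide may differ), an AIX or Linux LPAR can be created from the VIOS padmin shell with mksyscfg; the partition name, memory, and processor values below are placeholders only:

   mksyscfg -r lpar -i "name=itsoAIX6,lpar_env=aixlinux, \
     min_mem=1024,desired_mem=4096,max_mem=8192, \
     proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0, \
     min_procs=1,desired_procs=2,max_procs=4"

Unlike the HMC form of the command, the IVM version does not require the -m managed-system option because IVM manages only the single system on which it runs.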
239. Figure 11-2 Creating a Virtual Server

2. In the Name panel of the wizard, assign your partition a name and a partition ID, and choose any options that are applicable to your requirements. Ensure that the Environment drop-down menu is changed to IBM i, as shown in Figure 11-3. After all of the required options are selected, click Next.
240. asic management by the CMM without further configuration. IP address configuration of the individual nodes is required if management by the native interface of the node, or by an advanced manager such as an HMC or FSM, is wanted.

The Component IP Configuration option is used to configure the IP addresses for the I/O modules, compute nodes, and storage nodes. These IP addresses are required to be in the same subnet as the CMM. The switch function of the CMM provides the connectivity for each IMM, FSP, and service processor of the different node types from the chassis management network to an external network. This network traffic flows through the CMM's external 1 Gb Ethernet connection.

The ability of the FSM and HMC to manage a Power compute node is dependent on communicating with the FSP. Proper configuration of the FSP IP information is also required to access the FSP's web interface, or Advanced System Management (ASM) interface.

Configuration of these components is started by clicking Chassis Management → Component IP Configuration, which displays the page that is shown in Figure 7-17.
241. ass thru modules Quad and 14 data rate InfiniBand switches YYYY YV Yy Chapter 1 Introduction 9 Figure 1 4 shows the IBM Flex System Fabric EN4093R 10Gb Scalable Switch Figure 1 4 IBM Flex System Fabric EN4093R 10Gb Scalable Switch 1 4 5 Compute nodes Making use of the full capabilities of IBM POWER7 processors or Intel Xeon processors compute nodes are designed to offer the performance you need for your critical applications With support for a range of hypervisors operating systems and virtualization environments compute nodes provide the foundation for the following components Virtualization solutions Database applications Infrastructure support Line of business applications M vy y IBM Flex Systems offer compute nodes that vary in architecture dimension and capabilities The new no compromise nodes feature market leading designs for current and future workloads Optimized for efficiency density performance reliability and security the portfolio includes compute nodes that are based on the following processors gt IBM POWER single chip modules IBM POWER7 single chip modules IBM POWER7 dual chip modules Intel Xeon Processor E5 2400 family Intel Xeon Processor E5 2600 family YY vV Yy 10 IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 1 5 shows the IBM Flex System p270 Compute Node Figure 1 5 The IBM Flex System p270 Compute Node 1 4 6 Stor
242. atically occurs This process establishes communications between the compute node and the CMM and allows the CMM to collect VPD from the node During the chassis power up process or when a compute node is inserted the power indicator light on the node fast flashes until the discovery process completes When complete the power indicator light is in a slow flash mode until power on then it is on continuously The active chassis map that is shown on the CMM System Status status can also show the discovery mode when the mouse cursor is placed over the compute node image as shown in Figure 7 15 on page 209 Node IP configuration The CMM Component IP Configuration option under Chassis Management is used to configure the IP addresses for the I O modules compute nodes and storage nodes These IP addresses are required to be in the same subnet as the CMM For more information about how to configure a node see Component IP configuration on page 211 FSM chassis manage After an FSM completes the initial configuration the first task is to manage one or more chassis This process establishes communication between the FSM and the target chassis CMM During this process the FSM authenticates with the CMM and collects initial chassis component VPD It also requests access unlock to the service processors in the various nodes and I O modules in the chassis including the FSP in Power compute nodes Chapter 7 Power node management 227 Fi
243. ation area as shown in Figure 7 42 Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources E L Hosts N E Server 7954 24 5N107782 Performance Summary Search the table eee Select Mame Part Id Access State Operating Systems Fl Blitsoviosea 1 CER Started Ei Power Units D Blitsoarxs 2 E CH Stopped Figure 7 42 Displaying all virtual servers that are known by FSM Operating Systems Operating systems are separately discovered objects These objects are discovered by IP address Clicking Operating Systems in the navigation area displays operating systems that were discovered running on a Power based compute node as shown in Figure 7 43 Manage Power Systems Resources b Welcome Flex System Manager Version Power Systems Resources Cor E gt Hosts H Server 7954 24X SN107782 Search the table Search a Virtual Servers ce Select Name gt Access gt Problems Compliance Operating Systems m O Gitsovrosea Mok Box Mok ai Power Units Figure 7 43 Displaying discovered operating systems on Power compute nodes Chapter 7 Power node management 233 Content area columns Be default 12 columns of information are displayed in the content area Figure 7 44 shows the first four columns of the default order A slide bar at the bottom of this window can be used to show the remai
244. ation options The selection of options on the NIM machine is complete Continue the installation from the SMS menu on the compute node Chapter 9 Operating system installation methods 453 15 Reboot the server and during reboot press 1 to access SMS mode as shown in Figure 9 14 IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
245. ault Adapter ID of 5 This value can be changed is needed Leave the This adapter is required for partition activation option cleared if DLPAR operations and Live Partition Mobility are being considered Select Only selected client partition can connect For this example the assumption is that this LPAR for the VIOS is the first to be created on the managed systems Specify the client partition by the planned partition number Previously defined client LPARs are available in the drop down menu by name and number Chapter 8 Virtualization 391 392 Enter a Client adapter ID in the example we use 102 This value represents the virtual slot number on the client LPAR The server virtual SCSI adapter that is created in this step and the client virtual SCSI adapter that is created for a client LPAR are paired and must reference each other by the corresponding virtual adapter IDs Often these virtual adapter IDs match have the same value on the server and client side Different numbers were chosen here to show that they are independent values 9 After you enter all of the information select OK as shown in Figure 8 42 Create Virtual SCSI Adapter Server 7954 24x SN10 7732B Virtual SCSI adapter Adapter Type of adapter TEETER E This adapter is required for partition activation Any client partition can connect Only selected client partition can connect Client partition 5 Client adapt
246. autofs4 OK Starting automount OK Generating SSH1 RSA host key Generating SSH2 RSA host key Generating SSH2 DSA host key Starting sshd OK Starting postfix OK Starting abrt daemon OK Starting crond OK Starting atd OK Starting rhsmcertd 240 OK Red Hat Enterprise Linux Server release 6 1 Santiago Kernel 2 6 32 131 0 15 e16 ppc64 on an ppc64 ite bt 061 stglabs ibm com login Figure 12 45 First time login screen The basic installation is complete You might choose to install more RPMs from the IBM Service and Productivity Tools web page Chapter 12 Installing Linux 591 12 3 Installing SUSE Linux Enterprise Server In this section we describe the installation of SUSE Linux Enterprise Server 11 SLES 11 from a distribution image We recommend that first time users use the VNC graphical mode to aid with understanding the complex options that are available in the installation process Note This section describes the process of installing SLES from the ISO image as provided by SUSE Linux We also describe installing SLES by using the IBM Installation Toolkit for PowerLinux which also installs IBM specific RPMs for Power Systems compute nodes For more information see 12 1 IBM Installation Toolkit for PowerLinux on page 554 For brevity the initial SMS steps are not shown here because they are described in 12 2 Installing Red Hat Enterprise Linux on page 581 Follow ste
247. becomes active and the active port becomes standby This action is done quickly within a few seconds After restoring the failed link the teaming driver can perform a failback or can do nothing depending on the configuration Review topology 1 in Figure 5 1 on page 143 Assume that NIC Teaming is on the compute node NIC port that is connected to switch 1 is active and the other node is on standby If something goes wrong with the internal link to switch 1 the teaming driver detects the status of NIC port failure and performs a failover But what happens if external connections are lost that is the connection from chassis switch 1 to Enterprise Switch 1 is lost The answer is that nothing happens because the internal link is still on and the teaming driver does not detect any failure So the network service becomes unavailable To address this issue the Layer 2 Failover technique is used Layer 2 Failover can disable all internal ports on the switch module if there is an upstream links failure A disabled port means no link so the NIC Teaming driver performs a failover This special feature is supported on the IBM Flex System and BladeCenter switch modules Thus if Layer 2 Failover is enabled and you lose connectivity with Enterprise Switch 1 the NIC Teaming driver performs a failover and the service is available through Enterprise Switch 2 and chassis switch 2 Layer 2 Failover is used with NIC active or standby teaming Before NIC Teami
248. blade data Flex System configurations In a Flex System configuration that uses IVM or an HMC to manage the Power compute nodes both of these management devices can be configured to report problems directly to IBM service and support However these management devices do not report chassis issues such as cooling fan or power supply problems Therefore the CMM should also be configured to enable IBM support and report these types of problems directly to IBM service and support 220 IBM Flex System p270 Compute Node Planning and Implementation Guide PureFlex System configurations The FSM in a PureFlex System configuration can perform centralized reporting for all devices it manages including the chassis components Therefore it is not necessary to configure this feature on the CMM Enabling IBM Support IBM Support or the CMM call home feature is enabled and setup from the Settings options under the Service and Support menu bar option To Enable IBM Support on the CMM complete the following steps 1 Click Service and Support Settings from the menu bar option as shown in Figure 7 27 IBM Chassis Management Module USERID Service and Support Problems Problems addressed by IBM Support iF you have en Mgt Module Manage System Status Multi Chassis Monitor Events Chassis Management Servo Settings Configure your system to monitor and repan itso Flex1 Change chassis name Syste
249. ble with the physical cabling used or planned to be used in your data center Also make sure that the features and functions that are required in the network are supported by the proposed switch modules such as protocol speed and adapter function 136 IBM Flex System p270 Compute Node Planning and Implementation Guide For more information about I O module configuration see IBM PureFlex System and IBM Flex System Products and Technology SG24 7984 The available Ethernet switches and pass through modules are listed in Table 5 2 on page 137 Table 5 2 Available switch options for the chassis Table 5 3 lists the common selection considerations that might be useful when you are selecting an Ethernet switch module Table 5 3 Switch module selection criteria Suitable switch module requirement EN2092 S814093 EN4093R CN4093 1Gb Systems 10Gb 10Gb Ethernet Interconnect Scalable Converged Switch Module Switch Scalable Switch Basic Layer 2 switching peme OO e e e Os Advanced Layer 2 switching IEEE features STP Yes No Yes Yes QoS Layer 3 IPv4 switching forwarding routing ACL Yes filtering Chapter 5 Planning 137 Suitable switch module requirement EN2092 Sl4093 EN4093R CN4093 1Gb Systems 10Gb 10Gb Ethernet Interconnect Scalable Converged Switch Module Switch Scalable Switch Layer 3 IPv6 switching forwarding routing ACL Yes No Yes Yes filtering Pemma Nes e e 10 Gb Ethern
250. cal switches VLAN configuration of switches Integration with server management Per virtual machine network usage and performance statistics that are provided to VMControl Logical views of servers and network devices that are grouped by subnet and VLAN Storage management Discovery of physical and virtual storage devices Support for virtual images on local storage across multiple chassis Inventory of physical storage configuration Health status and alerts Storage pool configuration Disk sparing and redundancy management Virtual volume management Chapter 7 Power node management 193 Support for virtual volume discovery inventory creation modification and deletion gt Virtualization management base feature set Support for VMware Hyper V KVM and IBM PowerVM Create virtual servers Edit virtual servers Manage virtual servers Relocate virtual servers Discover virtual server storage and network resources and visualize the physical to virtual relationships gt Virtualization management advanced feature set Create image repositories for storing virtual appliances and discover existing image repositories in your environment Import external standards based virtual appliance packages into your image repositories as virtual appliances Capture a running virtual server that is configured the way that you want complete with guest operat
251. can replace any other component but only once N N means that there are N backup devices for N devices where N number of devices can fail and each has a backup 66 IBM Flex System p270 Compute Node Planning and Implementation Guide The redundancy options are configured from the CMM and can be changed nondisruptively The five policies are shown in Table 3 5 Table 3 5 Chassis power management policies Power management policy Function Basic Power Management Allows the chassis to fully use available power no N N or N 1 redundancy Power Module Redundancy Single power supply redundancy with no compute node throttling N 1 redundancy Single power supply redundancy Compute nodes can be throttled if required to stay within the available power This setting provides higher power availability over simple Power Module Redundancy N 1 setting Power Source Redundancy Maximum power available limited to one half of the installed number of power supplies N N setting Power Source Redundancy Maximum power available limited to one half of the with Compute Node Throttling installed number of power supplies Compute nodes allowed can be throttled if required to stay within available Power Module Redundancy with Compute Node Throttling allowed power This setting provides higher power availability compared with simple Power Source Redundancy N N setting Throttling Node throttling is an IBM EnergyScale feature of P
252. ch 2 F Switch 2 Figure 5 1 IBM Flex System redundant LAN integration topologies N Compute node Topology 1 in Figure 5 1 has each switch module in the chassis that is directly connected to one of the enterprise switches through aggregation links by using external ports on the switch The specific number of external ports that are used for link aggregation depends on your redundancy requirements performance considerations and real network environments This topology is the simplest way to integrate IBM Flex System into an existing network or to build a new one Topology 2 in Figure 5 1 has each switch module in the chassis with two direct connections to two enterprise switches This topology is more advanced and it has a higher level of redundancy but certain specific protocols such as Spanning Tree or Virtual Link Aggregation Groups must be implemented Otherwise network loops and broadcast storms can cause the problems in the network Chapter 5 Planning 148 Spanning Tree Protocol Spanning Tree Protocol is a 802 1D standard protocol that is used in Layer 2 redundant network topologies When multiple paths exist between two points on a network Spanning Tree Protocol or one of its enhanced variants can prevent broadcast loops and ensure that the switch uses only the most efficient network path Spanning Tree Protocol is also used to enable automatic network reconfiguration in case of failure For example enterprise s
253. ch as working with server related resources showing and installing updates submitting service requests and starting the remote access tools Remote console e Open video sessions and mount media such as DVDs with software updates to their servers from their local workstation e Remote KVM connections e Remote Virtual Media connections mount CD DVD ISO and USB media e Power operations against servers Power On Off and Restart Hardware detection and inventory creation Firmware compliance and updates Automatic detection of hardware failures e Provides alerts e Takes corrective action e Notifies IBM of problems to escalate problem determination Health status Such as processor usage on all hardware devices from a single chassis view Administrative capabilities such as setting up users within profile groups assigning security levels and security governance 7 3 2 FSM user interfaces The FSM supports a web based graphical user interface that provides access to all FSM management functions from a supported web browser You can also perform management functions through the FSM CLI The web based and CLI interfaces should be available through a network connection after the FSM setup wizard completes Chapter 7 Power node management 195 The default security setting is Secure so HTTPS or SSH is required to connect to the FSM 7 3 3 FSM requirements The FSM requires one open c
254. Figure 6-4   CN4093 Scalable Switch port layout

Table 6-3 shows the different types of ports on the CN4093 10Gb Converged Scalable Switch.

Table 6-3   CN4093 10Gb Converged Scalable Switch port types
- Ethernet ports (internal), INTA1-INTA14 (ports 1-14), INTB1-INTB14 (ports 15-28), and INTC1-INTC14 (ports 29-42): Standard 10 Gb SFP Ethernet ports that connect internally to the midplane and route to the node bays at the front of the chassis, which house compute nodes or V7000 Storage Nodes.
- Ethernet ports (external), EXT1 and EXT2 (ports 43-44): Standard 10 Gb SFP Ethernet ports that provide external connectivity.
- High-capacity Ethernet ports (external), EXT3-EXT10 (ports 45-52): 40 Gb QSFP Ethernet ports that can be configured as two 40 Gb Ethernet ports (EXT15 and EXT19) or broken out as four 10 Gb Ethernet ports each (EXT15-EXT18 and EXT19-EXT22).
- IBM Omni Ports (external), EXT11-EXT22 (ports 53-64): Hybrid 10 Gb SFP ports that can be configured to operate in Ethernet mode (the default) or in Fibre Channel mode to provide a direct connection to Fibre Channel switches or devices.

The Omni ports are all set to Ethernet mode by default and can carry FCoE and TCP traffic. The Omni ports can be configured to Fibre Channel mode; the ports are then attached to external Fibre Channel storage controllers or servers. The Omni ports are paired ports, so each concurrent block of two ports must be co
255. cific requirements IBM Flex System offers a broad range of x86 and POWER compute nodes in an innovative chassis design that goes beyond blade servers With advanced networking and system management it provides the capability to support extraordinary simplicity flexibility and upgradeability 1 2 1 PureFlex System PureFlex System offers the following configurations that include the p270 gt IBM PureFlex System Express Designed for small and medium businesses it is the most affordable entry point in the PureFlex Systems family IBM PureFlex System Enterprise Optimized for transactional and database systems with built in redundancy for highly reliable and resilient operation it supports your most critical workloads For more information about the PureFlex configurations and specific details and comparisons of the two offerings see Chapter 2 IBM PureFlex System on page 15 4 IBM Flex System p270 Compute Node Planning and Implementation Guide 1 3 IBM Flex System p270 Compute Node All compute nodes are installed in the Flex System Enterprise Chassis which provides power cooling and connectivity for the compute node As shown in Figure 1 1 the IBM Flex System p270 Compute Node 7954 24X is a standard width Power Systems compute node with 2 POWER7 processor sockets 16 memory slots 2 I O slots an expansion port and options for two internal drives to provide local storage Figure 1 1 IBM Flex System p270 Comp
256. cumulative package. A situation might exist where you received the latest cumulative package from IBM and the preventive service planning (PSP) information indicates that the package contains two defective PTFs. In this situation, you do not want to install the defective PTFs. To omit any PTFs, enter Y against Omit PTFs and enter the specified PTF IDs.

11.7.4  Completing fix installation

An IPL of the system is required to complete the installation of PTFs. If you are installing technology refresh PTFs at the same time that you are installing fixes with technology refresh requisite PTFs, you might be prompted to perform another normal IPL to permanently apply the technology refresh PTFs.

The other IPL might be required when a cumulative PTF package, a fix group (such as the HIPER group), or fixes that were downloaded electronically are installed. If another IPL is needed, PTF SI42445 was applied, and you are installing from a virtual optical device or save files (*SERVICE), the second IPL is performed automatically. If another IPL is needed and you are installing from a physical optical device or tape device, you must perform an IPL before you complete the PTF installation process.

To complete the fix installation, complete the following steps:
1. If the escape message CPF362E (IPL required to complete PTF install processing) is displayed, complete the following steps:
   a. End all jobs on the system and p
257. d Figure 8 22 Virtual server list for specified server 372 IBM Flex System p270 Compute Node Planning and Implementation Guide HMC GUI method This section describes the sequence to perform the same steps that are described in HMC CLI method on page 352 but with the HMC user interface instead Accessing the HMC HMC can be accessed in one of the following ways gt Locally from the HMC console FSM gt Through a web browser to the FSM web interface When you are accessing HMC remotely by using a browser complete the following steps 1 Open a browser and enter the following URL where system_name is the host name or IP address of the HMC node https system_name The HMC launch page opens as shown in Figure 8 23 This web server is hosting the Hardware Management Console application Click on the link below to begin Log on and launch the Hardware Management Console web application You can also view the online help for the Hardware Management Console System Status _ Status is good Ed Attention LEDS Status is good IY Serviceable Events One or more Serviceable Events Figure 8 23 The HMC launch page Chapter 8 Virtualization 373 2 Start the HMC user interface and login page by clicking Log on and launch the Hardware Management Console web application The request for login credentials opens as shown in Figure 8 24 Enter a valid Userid and password and click L
258. d blank If no value is shown for the LICPGM from menu option 11 a blank in the installed status column means that the product is not installed Chapter 11 Installing IBMi 535 11 6 IPL and Initialize System If you do not install the cumulative program temporary fix PTF package now you must perform an IPL and allow the Initialize System INZSYS process as complete Before you do set the IPL type of the virtual server to B from the FSM or the IPL type you use for everyday operation and set the IPL mode to Normal The installation process must be completed before the INZSYS process is automatically started This process is started during each IPL after you install the QUSRSYS library until the INZSYS process successfully completes The INZSYS process is not started during the IPL if the system is started in a restricted state If the INZSYS process is started during the IPL it runs in the SCPF system job If you want to perform PTF installation before system initialization see 11 7 Installing Program Temporary Fix packages on page 537 Note If you perform an IPL before you install a cumulative PTF package ensure that the INZSYS process completes before you start to install the PTF package The use of any PTF commands before the INZSYS process is completed after the first system IPL causes the INZSYS to fail The completion time for INZSYS varies Allow sufficient time for this process to complete Complete the followin
259. d Enterprise are summarized in Table 2 1 The base configuration of the two offerings is shown that can be further customized within the IBM configuration tools Table 2 1 PureFlex System hardware overview configurations Components PureFlex Rack Flex System Enterprise Chassis Chassis power supplies Minimum maximum Chassis Fans minimum maximum Flex System Manager Compute nodes one minimum POWER or x86 based VMware ESXi USB key Top of rack switches Integrated 1 GbE switch Integrated 10 GbE switch Integrated 16 Gb Fibre Channel Converged 10 GbE switch FCoE IBM Storwize V7000 or V7000 Storage Node Media enclosure 18 PureFlex Express Optional 42 U 25 U or no rack Required Single chassis only 2 6 4 8 Required 0260 p270 p460 x220 x222 x240 x440 Selectable on x86 nodes Optional Integrated by client Selectable redundant Selectable redundant Selectable redundant Selectable Redundant or non redundant Required and selectable Selectable DVD or DVD and tape PureFlex Enterprise Required 42 U Rack Required Multi chassis 1 2 or 3 chassis p260 p270 p460 x220 x222 x240 x440 Selectable redundant Required and selectable Selectable DVD or DVD and tape IBM Flex System p270 Compute Node Planning and Implementation Guide PureFlex System software can also be customized in a similar manner to the hardware components of the two offerings Enterprise has
260. 7. Enter 1 to select option SCSI, as shown in Figure 12-7.

Figure 12-7   SMS Select Media Type panel (options include SCSI, SSA, SAN, SAS, SATA, USB, IDE, ISA, and List All Devices)

8. Enter option 1 to select your optical drive, as shown in Figure 12-8 (the location code that you see is different from the code that is shown in the figure).

Figure 12-8   SMS Select Device panel (SCSI CD-ROM)

9. In the next panel, select 2, Normal boot (not shown).
10. In the next panel, select eXit the SMS (not shown).
11. The virtual server boots the virtual DVD. You then see the IBM Installation Toolkit for PowerLinux (Version 5.4) console panel that is shown in Figure 12-9.
261. d Implementation Guide For more information about the mksyscfg command see this website http pic dhe ibm com infocenter powersys v3rlm5 index jsp topic 2Fip hcx_p5 2Fmksyscfg htm FSM CLI method The following sections describe an example of the use of the FSM CLI to create a virtual server for a VIOS Accessing the IBM Flex System Manager To access the FSM you must know the IP address or host name of the FSM node and have a valid user ID and password You must start a Secure Shell SSH session with FSM and log in This process is similar to the process of accessing the SDMC or HMC command line Creating the VIOS virtual server by using the FSM CLI Creating the VIO Server can be done by using the FSM CLI To ensure that the correct I O devices are specified in the command understand and document the intended I O adapters Use the information that is described in Physical adapters on page 346 and the corresponding DRC Indexes that are shown in Table 8 5 on page 350 for this p270 example This example uses the mksyscfg command with the FSM required smcli_ prefix The r option specifies an LPAR as the type of resource to create The m option determines the managed system on which to create the resource FSM usage The FSM command smcli 1ssys can be used to display a list of endpoint objects in the FSM including compute nodes Run the following command to create a virtual server suitable for a VIOS smcli mksyscfg
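The exact attribute string depends on your planning values. The following command is a hedged sketch of the form that is typically used; the memory, processor, and virtual slot values are illustrative assumptions for this example, and the managed system name matches the p270 that is used in this chapter. Physical adapters are then typically added with the io_slots attribute by using the DRC indexes that are described earlier.

smcli mksyscfg -r lpar -m Server-7954-24X-SN1077E3B -i "name=VIOS1,profile_name=DefaultProfile,lpar_env=vioserver,min_mem=4096,desired_mem=8192,max_mem=8192,proc_mode=shared,min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=128,max_virtual_slots=200,boot_mode=norm"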
262. d Implementation Guide The December 2011 Service Pack enhances capabilities by enabling four systems to participate in a Shared Storage Pool configuration This configuration can improve efficiency agility scalability flexibility and availability Specifically the Service Pack enables the following functions Storage Mobility A function that allows data to be moved to new storage devices within Shared Storage Pools while the virtual servers remain active and available VM Storage Snapshots Rollback A new function that allows multiple point in time snapshots of individual virtual server storage These point in time copies can be used to quickly roll back a virtual server to a particular snapshot image This functionality can be used to capture a VM image for cloning purposes or before applying maintenance gt Thin provisioning VIOS 2 2 supports highly efficient storage provisioning where virtualized workloads in VMs can have storage resources from a shared storage pool that is dynamically added or released as required gt VIOS grouping Multiple VIOS 2 2 partitions can use a common shared storage pool to more efficiently use limited storage resources and simplify the management and integration of storage subsystems gt Network node balancing for redundant Shared Ethernet Adapters SEAs with the December 2011 Service Pack This feature is useful when multiple VLANs are being supported in a dual VIOS environment
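As an illustration of these functions, the following VIOS command-line sketch creates a Shared Storage Pool cluster and then a thin-provisioned backing device for a client virtual server. The cluster, pool, disk, and virtual adapter names are assumptions for this example and must match your environment.

$ cluster -create -clustername itso_cl -repopvs hdisk2 -spname itso_sp -sppvs hdisk3 hdisk4 -hostname vios1
$ mkbdsp -clustername itso_cl -sp itso_sp 20G -bd vm1_disk1 -vadapter vhost0

The mkbdsp command allocates the 20 GB device thinly, so blocks are drawn from the shared pool only as the client virtual server writes data.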
263. Figure 7-131   LIC update progress window (continued): activating updates and restarting the Flexible Service Processor

Figure 7-132   LIC update progress window (complete): Completed All Updates

16. When the Completed All Updates message is shown in the Status column, click OK to complete the Change LIC wizard and close the window. The HMC returns to the Server list view in the work pane.

7.10  Management by using IVM

This section describes the basic management of a Power compute node by using the IVM.

7.10.1  Installing IVM

IVM is part of the VIOS code base and does not require any other software or licensed program products (LPPs). However, the Power compute node must meet certain conditions before IVM is enabled during the VIOS installation process. For more information about these conditions, see 7.5.3, "IVM requirements" on page 201.

There are no op
264. dard width compute nodes or slots positions 2 and 4 for double wide compute nodes An example of I O Adapter to I O Module connectivity is shown in Figure 3 4 on page 59 58 IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 3 4 Connectivity between I O adapter slots and switch bays The following Ethernet modules were announced at the time of writing gt IBM Flex System Fabric EN4093R 10Gb Scalable Switch 42x internal ports 14x 10 Gb and 2x 40 Gb convertible to 8x 10 Gb uplinks Base switch 10x external 10 Gb uplinks 14x 10 Gb internal 10 Gb ports Upgrade 1 Adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports Upgrade 2 Adds 4x external 10 Gb uplinks 14x internal 10 Gb ports gt IBM Flex System EN2092 1Gb Ethernet Scalable Switch 28 Internal ports 20 x 1 Gb and 4 x 10 Gb uplinks Base 14 internal 1 Gb ports 10 external 1 Gb ports Upgrade 1 Adds 14 internal 1 Gb ports 10 external 1 Gb ports Uplinks upgrade Adds four external 10 Gb uplinks gt IBM Flex System EN4091 10Gb Ethernet Pass thru 14x 10 Gb internal server ports 14x 10 Gb external SFP ports Chapter 3 Introduction to IBM Flex System 59 gt EN6131 40Gb Ethernet Switch 14x 40 Gb internal ports 18x External 40 Gb QSFP ports gt CN4093 10Gb Converged Scalable Switch 42x internal ports 2x 10 Gb 2x 40 Gb and 12x Omni Ports Base 14x internal 10 Gb ports 2x
265. Figure 8-14   Create virtual Ethernet adapter: control channel for SEA failover (the dialog includes the IEEE 802.1Q compatible adapter setting with the maximum number of VLANs and additional VLAN IDs, and the Shared Ethernet setting that bridges the virtual Ethernet to a physical network)

4. When you return to the main virtual Ethernet window, click Add, as shown in Figure 8-15, to add a virtual Ethernet adapter. In a dual VIOS environment, a control channel is required that acts as a heartbeat; this new adapter serves that purpose.

Figure 8-15   Adding a virtual Ethernet adapter (two virtual Ethernet adapters are created by default; you can add, edit, or remove adapters to suit your needs)

5. Enter or accept the following characteristics for the new Ethernet adapter, as shown in Figure 8-16 on page 367. Accept the default Adapter
266. Figure 7-71   Update Summary window (lists the selected Power System Firmware update and the selected system, the 7954 compute node server)

When a job that has multiple steps is displayed, such as a system firmware update, another tab is created that shows the job steps and the progress of each, as shown in Figure 7-72.

Figure 7-72   Active update job showing Job Steps (the Install Updates job lists steps for downloading the updates, staging the updates to the server, and installing the updates, with the status, progress, and start and stop times of each step)

When the update job completes, verify that there were no errors from the General tab or the Logs tab in the active job window.
267. ded With the HMC a system administrator can perform logical partitioning functions service functions and various system management functions by using the web browser based user interface or the CLI The HMC uses its connection to one or more systems which are referred to as managed systems to perform the following functions Creating and maintaining logical partitions in a managed system Displaying managed system resources and status Opening a virtual terminal for each partition Displaying virtual operator panel values for each partition Powering managed systems on and off Performing dynamic LPAR DLPAR operation Managing virtualization features Managing platform firmware installation and upgrade Acting as a service focal point for all managed compute nodes Vvvvvvrvvyv Y 7 4 2 HMC user interfaces HMC Version 7 uses a web browser based user interface This interface uses a tree style navigation model that provides hierarchical views of system resources and tasks by using drill down and launch in context techniques to enable direct access to hardware resources and task management capabilities This version provides views of system resources and provides tasks for system administration The HMC supports a CLI user interface that provides access to HMC management functions Both the web based and CLI interfaces should be available through a network connection when the HMC is correctly configured on a network Remote access to
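Many of these functions are also available from the HMC CLI over SSH. The following commands are a brief sketch; the managed system name is an assumption that matches the p270 examples in this chapter.

lssyscfg -r sys -F name,state
chsysstate -r sys -m Server-7954-24X-SN1077E3B -o on
vtmenu

The first command lists the managed systems and their states, the second command powers on the specified managed system, and vtmenu opens a virtual terminal session to a partition.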
268. displayed shows both virtual and physical adapters In most cases a physical adapter is selected and often it is the first physical adapter Chapter 9 Operating system installation methods 443 3 When the adapter is entered a summary of the previous selections is displayed as shown in Figure 9 4 To proceed press Enter to cancel press Ctrl C Here are the values you entered managed system Server 7954 24X SN1077E3B virtual I 0 server partition VIOS1 profile DefaultProfile source dvdimage vl iso IP address 9 42 171 85 Subnet mask 255 255 254 0 gateway 9 4 270 1 Speed auto duplex auto configure network no install interface ethl ethernet adapters 00 00 c9 d1 65 84 Press enter to proceed or type Ctrl C to cancel Figure 9 4 Interactive installios selection summary A series of message follow that indicate the preparation and setup of the VIOS ISO images for the installation and other preparations that the installios command performs before the actual installation Installios activates the new VIOS virtual server configures the wanted IP information at the Open Firmware level and performs a test ping to the FSM as shown in Figure 9 5 messages not shown Connecting to itsoVIOS6A Connected Checking for power off Power off complete Power on itsoVIOS6A to Open Firmware Power on complete Client IP address is 9 42 171 85 Server IP address is 9 42 170 223 Gateway IP addres
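The same installation can be run non-interactively by supplying the values as installios flags. The following command is a hedged sketch that mirrors the interactive answers that are shown above; the media path and the gateway address are assumptions for this example, and the flags should be verified against the installios syntax of your HMC or FSM release.

installios -s Server-7954-24X-SN1077E3B -p VIOS1 -r DefaultProfile -i 9.42.171.85 -S 255.255.254.0 -g 9.42.170.1 -d /dev/cdrom -m 00:00:c9:d1:65:84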
269. ds on your system Sign in to the Simplified Setup Tool using the root user name and password which you specified during the IBM Installation Toolkit for Linus set up process Figure 12 29 IBM Installation Toolkit for PowerLinux GUI Chapter 12 Installing Linux 579 37 Review and agree to the license when prompted The Welcome page now appears as shown in Figure 12 30 oo https 9 42 171 69 6060 e P IBM Installation loolkit for PowerLinuxX Simplified Setup Toal Welcome Change Instructions The IBM Installation Toolkit for PowerLinux Simplified Setup Tool Simplified Setup Tool Use this page to select the open Al guides you through the process of quickly and easily configuring one or more open source workloads that you want to source workloads on your system configure to download updates orto restore 4 previous workload configuration Workloads Select one of the follawing workloads to configure on your system You can configure one or more workloads in any sequence Open source workloads are installed on your system by default If you selected one or more workloads to configure during the installation process of Contigur the IBM Installation Toolkit those workloads are highlighted on this no Web server LAMP After you complete the initial configuation of the workloads using the Simplified Setup Tool you will need to perform more advanced administrative tasks on your system to Implement solu
270. e w Physical I O Console cy Load source console Systems Director Figure 11 10 Creating a Virtual Server Load source and console panel For an IBM i installation you must designate an alternative restart adapter VSCSI for optical or vFC for tape media library via NPIV That is from where the operating system is loaded The load source adapter is to where it is loaded Leave the default console device as Systems Director which is the FSM that acts as the HMC for IBM i client partitions Click Next 510 IBM Flex System p270 Compute Node Planning and Implementation Guide 12 The summary panel as shown in Figure 11 11 is displayed so that you can review the properties that were selected in the Create Virtual Server wizard After the properties are verified click Finish and the virtual server is created yf Name yf Memory yf Processor wf Ethernet Storage v selection yf Storage yf Optical devices yw Physical 1 0 Load source console o gt Summary Summary The following is a summary of your virtual server settings You can select Back to make changes fou can also use the virtual server properties task to make changes after the virtual server is created Server Hame Virtual server name Wirtual server ID Environment Memory Processors Virtual Ethernets Virtual Adapters Storage capacity Storage devices Optical devices Virtual Optical devices Physical adapters Server 795
271. e FCF replies with a FLOGI Accept frame and then the login is complete The VN_Port to VF_Port link is now established The accept frame also provides a mechanism for the FCF to indicate to the device the MAC address to use for its VN_Port which is the FCoE equivalent of an FCID These virtual links can be established over arbitrary Ethernet networks and they must now be given security that is equivalent to the security in a point to point FC network This security is provided by having the CN4093 switch snoop the FIP frames that it forwards By using the information that the switch sees during the FIP login sequence the switch can determine which devices are connected by using a virtual link Then the switch dynamically creates narrowly tailored Access Control Lists ACLs that permit expected FCoE traffic to be exchanged between the appropriate devices and deny all other undesired FCoE or FIP traffic The CN4093 FIP snooping function allows the compute node to log in and establish the VN_Port to VF_Port virtual link For more information about FIP see the FC BB 5 standard at this website http fcoe com 09 056v5 pdf Note The current FCoE standard is FC BB 5 as agreed by the T11 technical committee The FC BB 6 standard is a work in progress and brings more flexibility and switch types Chapter 6 Converged networking 171 MAC addresses used by end devices End devices such as the compute nodes ENodes use virtual MAC addresse
272. e and Support is enabled.

Figure 7-31   IBM Support enabled on the CMM (to successfully call home to IBM Support, make sure that the Domain Name System (DNS) settings are valid)

7.8  Management by using FSM

This section describes the basic management of a Power compute node by the FSM. The assumption is that the initial FSM setup wizard was run and at least one chassis with a Power compute node was managed.

7.8.1  Accessing the FSM

Before you begin, you need the IP address of the FSM. You can access the FSM web interface by using a browser or the CLI from an SSH session. The browser method is described here. For more information about supported browsers for accessing the FSM and all devices in the Flex System or PureFlex System, see this website:

http://pic.dhe.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.pureflex.doc/p7eek_pwebbrowsers.html

Complete the following steps:
1. Open a browser and point it to the following URL, where system_name is the host name or IP address of the FSM:

   https://system_name

2. When the user login view displays, as shown in Figure 7-32, provide the proper User ID and password to complete the login process.

Figure 7-32   FSM web interface login

When the login process completes, the home t
273. e by clicking Show and Install Updates to open the Show and Install Updates window as shown in Figure 7 67 Acquire Updates Ajob was scheduled To monitor the progress of this jab click Display Properties in the preceding message When the job is complete click Show and Install Updates to view the updates needed by a system Show and Install Updates Figure 7 67 Show and Install Updates start option The Show and Install Updates window in Figure 7 68 displays the name of the server or object to which the updates that are listed in the table can be applied When the wanted package is selected the Install option is available and can be clicked When Install is clicked the update wizard starts Show and Install Updates This page shows the current updates that are needed for the selected systems Superseded or optional updates are not shawn To view superseded or optional updates that are installable an this system click the Show all installable updates link below Show all installable updates Updates needed for Server 7954 24 SHLOFFESB Install Search the table Search Select Name System Wersion Severity ill M4 Pageiofi FR 1 Selected 1 Total 1 Filtered 1 Figure 7 68 Show and Install Updates window Chapter 7 Power node management 251 The update wizard prompts you through a welcome page and then a Start Target Checks page As shown in Figure 7 69 this page q
274. e defined so that there is always an entry for at least one default route DFTROUTE If there is no match on any other entry in the routing table data is sent to the IP router that is specified by the first available default route entry To configure a default route complete the following steps 1 From the CFGTCP menu select Option 2 Work with TCP IP Routes and press Enter 2 Select Option 1 Add and press Enter to access the Add TCP IP Route ADDTCPRTE menu 3 Type DFTROUTE for the Route destination prompt and NONE for the Subnet mask prompt 4 Atthe Next hop prompt specify the IP address of the gateway on the route and then press Enter 11 9 5 Defining TCP IP domain After you specify the routing entries you must define the local domain and host names to allow communication within the network and then use a DNS server to associate the IP addresses with the host names The local domain and host name are the primary names that are associated with your system They are required when you set up other network applications such as email If you want to use easily remembered names rather than IP addresses you must use a DNS server a host table or both to resolve IP addresses You must configure the host name search priority to tell the system which method you prefer to use To define TCP IP domain complete the following steps 1 From the CFGTCP menu select Option 12 Change TCP IP domain information and th
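The same default route can also be added with a single CL command instead of the CFGTCP menu path. The gateway address in this sketch is a placeholder value.

ADDTCPRTE RTEDEST(*DFTROUTE) SUBNETMASK(*NONE) NEXTHOP('192.0.2.1')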
275. e following components e One processor e 56 GB or memory e SAN attached disks through the CN4058 8 port 10Gb Converged Adapter e One CN4058 8 port 10Gb Converged Adapter ASIC for networking e AIX operating system Important Configurations that are shown in the following samples are not the only configurations supported You can use several combinations of expansion cards and memory the limitations are disk and network access 5 9 2 Virtual servers with VIOS You can use the IBM Flex System Manager or HMC management to configure a dual VIOS environment as described in 5 6 Dual VIOS on page 149 Setting up a VIOS environment is the key to overcoming the hardware limitations you might have on your Power Systems compute node This environment supports up to 480 partitions on the p270 20 per core VIOS can solve many of the hardware limitations buses cards disk and memory you find when you are creating partitions on your Power Systems compute node For more information see Chapter 8 Virtualization on page 333 A sample configuration for a dual VIOS environment gt Sample Configuration 1 One IBM Flex System p270 Compute Node with one CN4058 8 port 10Gb Converged Adapter one FC5054 4 port 16Gb FC Adapter and 512 GB of memory For this sample you can create the following VIOS servers VIOS Server 1 consists of the following components e Two processor cores e 16GB of memory Chapter 5 Planning 161
276. e information provides any special instructions that you should be aware of before you install your cumulative PTF package The steps that follow step 1 within this section also are part of the letter They are provided here as an overview of some of the steps that you must perform To install cumulative PTF packages complete the following steps 1 Read the installation instructions thoroughly and follow the instructions that are contained in it 2 If you received your cumulative PTF package as an image complete the following steps to create an image catalog and virtual optical devices as required a Create a virtual optical device by using the following command CRTDEVOPT DEVD OPTVRTO1 RSRCNAME VRT ONLINE YES TEXT text description Verify that the virtual optical device was created by issuing the following command a device of type 632B should be listed WRKDEVD DEVD 0PT Check and if required vary on the device by pressing F14 and using option 1 to vary on the device 538 IBM Flex System p270 Compute Node Planning and Implementation Guide Create an image catalog Create an image catalog for the set of PTFs that you want to install The Create Image Catalog CRTIMGCLG command associates an image catalog with a target directory where the preinstalled images are loaded as shown in the following example CRTIMGCLG IMGCLG ptfcatalogue DIR MYCATALOGDIRECTORY CRIDIR YES TEXT text description Add an
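The following CL commands are a hedged sketch of how an image catalog is typically populated, loaded, and verified before the PTFs are installed; the catalog, directory, and file names are placeholder values, and the commands should be checked against your IBM i release.

ADDIMGCLGE IMGCLG(PTFCATALOGUE) FROMFILE('/MYCATALOGDIRECTORY/ptfimage1.bin') TOFILE(*FROMFILE)
LODIMGCLG IMGCLG(PTFCATALOGUE) DEV(OPTVRT01) OPTION(*LOAD)
VFYIMGCLG IMGCLG(PTFCATALOGUE) TYPE(*PTF) SORT(*YES)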
277. e p270 supports two expansion adapters to host dual VIOS on external based storage cards and retain adapter level resiliency VIOS should be allocated resources at an ASIC level to provide IP and FC traffic IBM Flex System Dual VIOS Adapter The Dual VIOS Adapter is only available with the p270 compute nodes and are not with the p260 p460 or p24L compute nodes 5 6 1 Dual VIOS on Power Systems compute nodes One of the capabilities that is available with Power Systems compute nodes that is managed by an FSM or an HMC is the ability to implement dual Virtual I O Servers Note IVM managed compute nodes cannot run more than one VIOS partition virtual server The VIOS IVM installs on partition 1 other partitions can be created by using IVM With IBM Flex System Manager the creation of partitions and the type of operating system environment that they support can occur before any operating system installation The only limitation from a dual VIOS perspective is the availability of disk and network physical resources Physical resource assignment to a partition is made at the level of the expansion card slot or controller slot physical location code Individual ports and internal disks cannot be individually assigned this can be done only at the SAS controller level if the optional SAS adapter is installed This type of assignment is not unique to Power Systems compute nodes and is a common practice for all Power platforms A dual VIOS en
278. e the implementation of FCoE connectivity for an IBM Flex System Enterprise Chassis the CN4093 10Gb Converged Scalable Switch and Power compute nodes with the CN4058 8 port 10Gb Converged Adapter installed There are other I O modules that can be used with FCoE networks such as the EN4093R 10Gb Scalable Switch Note FCoE over LAG is supported from I O Module firmware 7 7 and above FCoE over VLAG is planned for a future release To configure FCoE on the CN4093 10Gb Converged Scalable Switch it is necessary to understand the functions of and different port types within the switch IBM Flex System p270 Compute Node Planning and Implementation Guide The physical ports consist of internal and external types An example of internal port connectivity between all components is shown in Figure 6 3 on page 168 Internal ports on the switch module route to compute nodes or storage nodes within the chassis via the midplane and are fixed against node bay positions The IBM Omni external ports on the CN4093 10Gb Converged Scalable Switch can be cabled to external LAN or SAN network equipment depending on whether they are configured for Ethernet or FC mode Figure 6 4 shows the layout of port types on the CN4093 10Gb Converged Scalable Switch 2x 10 Gb ports 2x 40 Gb uplink ports 12x Omni Ports standard enabled with Upgrade 1 6 standard 6 with Upgrade 2 SFP ports QSFP ports SFP ports Switch release handle Management Switch one ea
279. e the server power is off The CMM and IVM together provide a simple but effective solution for a single partitioned server 7 5 2 IVM user interfaces Power compute node management administration tasks through IVM are done by a web interface with the VIOS acting as the web server Being integrated within the VIOS code IVM also handles all virtualization tasks that normally require VIOS commands to be run 200 IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 7 7 show the main IVM view and is the normal default after a login The interface has two main sections a navigation list on the left and a work area on the right The work area changes with each navigation option Integrated Virtualization Manager Edit my profile Help Log out View Modify Partitions Welcome padmin itsovios6A Partition Management view Modify Partitions To perform an action on a partition first select the partition or partitions and then select the task View Modify System Properties j i i 2 i e view Modify Shared Memory Sybe ena Pool I O Adapter Management Total system memory 32 GB Total processing units 24 Memory available 26 62 GB Processing units available 71 6 Wiew Modify Virtual Ethernet ry 2 ae View Modify Physical Adapters Reserved firmware memory 1 38 GB Processor pool utilization 0 035 0 1 View Virtual Fibre Channel System attention LED Inactive Virtual Storage Management
280. e zones and permit access from Compute Node 8 to the V7000 Storage Node that is in the first four bays of the chassis. ISCLI commands are used in the following steps. The output is shown in Example 6-1 on page 177.

1. Run the enable command to enter privileged mode.
2. Run the configure terminal command to enter the configuration terminal mode.
3. Run the cee enable command to enable CEE.
4. Run the fcoe fips enable command to enable FIP snooping.
5. Run the system port EXT11,EXT12 type fc command to set the Omni ports EXT11 and EXT12 (ports 53 and 54) to Fibre Channel mode.
6. Create the FCoE VLAN by running the vlan 1002 command:
   a. Assign ports to the FCoE VLAN by running member INTA13,INTA14,INTA8.
   b. Enable FCF by assigning the FC-mode Omni ports to the FCoE VLAN by running member EXT11,EXT12.

These steps must be completed in the order that they are listed so that the configuration is successful. In Example 6-1, the ISCLI commands show that the Omni ports EXT11-12 are changed from their default Ethernet mode to Fibre Channel after CEE and FIP snooping are enabled. The FCoE VLAN is created and the ports are assigned to the VLAN.

Example 6-1   Configuring basic FCoE VLAN

Router> enable
Enable privilege granted.
Router# configure terminal
Enter configuration commands, one per line.  End with Ctrl/Z.
Router(config)# cee enable
Router(config)# fcoe fips enable
Router(config)# system port EXT11,EXT12 type fc
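After the configuration is applied, the FIP snooping and VLAN state can be reviewed from the same ISCLI session. The following commands are a sketch only; the exact show command names should be verified against the Networking OS release that is running on your CN4093.

Router# show vlan
Router# show fcoe fips fcf
Router# show fcoe fips fcoe

The first command confirms the port membership of VLAN 1002, and the other two list the detected FCoE Forwarders and the FCoE sessions that the switch is snooping.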
281. ea command now includes a sharing option for the ha_mode attribute. The sharing option divides traffic across the dual VIOS environment based on VLANs. This function is negotiated in the dual VIOS environment automatically.

Figure 8-38   Virtual Ethernet values when used for a second SEA (the Create Virtual Ethernet Adapter dialog shows the adapter ID, the port virtual Ethernet VLAN ID, the IEEE 802.1Q compatible adapter setting with additional VLAN IDs, and the Ethernet bridging setting with a priority of 1 or 2)

5. Click OK when the values are specified. The wizard returns to the Virtual Adapters window, which shows an updated table, as shown in Figure 8-39 on page 389, with two virtual Ethernet adapters now defined.

Figure 8-39   Virtual Adapters window in the Create LPAR wizard (the Create Virtual Adapter menu offers Ethernet Adapter, Fibre Channel Adapter, SCSI Adapter, and Serial Adapter)
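For reference, an SEA with the sharing option is typically created from the VIOS command line with the mkvdev command. The following line is a sketch only; the device names (ent0 as the physical adapter, ent4 as the trunk virtual adapter, and ent5 as the control channel) and the default VLAN ID are assumptions for this example.

$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent5

Both Virtual I/O Servers must use the same ha_mode setting. With ha_mode=sharing, the two SEAs negotiate between themselves which VLANs each one bridges, which spreads the traffic across the dual VIOS environment.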
282. ect Navigation key 1.

Figure 9-55   Select Install Boot Device

11. Select option 1, Select Install Boot Device. The window that is shown in Figure 9-56 opens.

Figure 9-56   Select a network as the installation device (the SMS Select Device Type panel lists Diskette, Tape, CD/DVD, IDE, Hard Drive, Network, and List all Devices)

12. Select option 6, Network, as the boot device. The window that is shown in Figure 9-57 opens.

Figure 9-57   Select BOOTP as the boot protocol (the SMS Select Network Service panel lists BOOTP and ISCSI)

13. Select option 1, BOOTP, as shown in Figure 9-57.
14. Select the network adapter and the normal mode boot. The installation starts loading the yaboot IBM boot loader through the network, as shown in Figure 9-58 on page 485.
283. ect the PCI to PCl bridge device under the Physical I O Adapters option as shown in Figure 9 26 Physical I O Adapters af Name yf Memory Select one or more physical adapters from the list of available physical adapters Note Virtual servers tha are assigned physical adapters cannot be relocated wf Processor wf Ethernet Display only adapters that are currently available f Storage selection Bus A TE Select Location Code E Description 8 Assigned 5 id irtua Storage F U78AE 001 WZ500E4 EN4054 4 port 10Gb Ethernet 7895 22X 512 peewee Pi Cig Li Adapter SN10F528AVIOS1 S UF8AE O0L W2Z500E4 FCS172 2 port 8Gb Fibre 7895 22x 314 c Physical Pi C19 Li Channel Adapter SN1OFS2SAVIOS1 i L 0 U78AE 001 WZS5S00E4 P1 T2 PCI E SAS Controller 7895 22x 315 SNLOFS28AVIOS1 Fi U7B8BAE 001 WZS5S00E4 P1 T1 PCI to PCI bridge 7895 22X 317 SNLIOFS2Z8AVIOS1 UF 8AE O0L W2Z500E4 EN4054 4 port 10Gb Ethernet 7895 22x 526 Pi C18 L2 Adapter SNLOFS28AVIOSL Figure 9 26 Using the FSM virtual server wizard to add the USB port Chapter 9 Operating system installation methods 463 464 When you are using the FSM to modify a virtual server right click the virtual server name then click System Configuration Manage Profiles profile name I O as shown in Figure 9 27 to assign the PCI to PCI bridge to the wanted virtual server Typically this device is added as Desi
284. ecurity policy Another example is the scan control QSCANFSCTL system value If you did not do so already consider specifying NOPOSTRST for the system value to minimize future scanning of some objects that are restored during the installation of licensed programs in the following steps 17 Enter Y for the Define or change the system at IPL prompt and the Start system to restricted state prompt 18 Set the System time zone as appropriate To see a list of possible time zones press F4 at the time zone prompt Chapter 11 Installing IBMi 525 The Set Major Systems Options menu opens as shown in Figure 11 27 Set Major System Options Type choices press Enter Enable automatic configuration Y Yes N No Device configuration naming NORMAL S36 DEVADR Default special environment NONE S36 Figure 11 27 Set Major System Options menu The following values are set Enable automatic configuration The value Y Yes automatically configures local devices N No indicates no automatic configuration Device configuration naming Specify NORMAL to use a naming convention that is unique to the IBM i operating system The value S36 uses a naming convention that is similar to System 36 For information about device configuration naming and DEVADR see Local Device Configuration SC41 5121 00 Default special environment The default value NONE indicates no special environment S36 sets up the System 36 environ
285. ed correctly 1 Go to the LICPGM menu by running the GO LICPGM command 2 Select Option 50 Display log for messages 3 Enter the start date and start time on the Display Install History display and press Enter The messages about fix installation are shown 4 Optional Verify that requisite PTFs for licensed programs are installed For example enter the following command CHKPRDOPT PRDID OPSYS RLS OPSYS OPTION BASE CHKSIG NONE DETAIL FULL Note Checking several licensed programs or options might cause this command to run for several minutes If the fixes were installed successfully you see messages as shown in the following example PTF installation process started Loading of PTFs completed successfully Marking of PIFs for delayed application started Marking of PTFs for delayed application completed successfully Apply PTF started Applying of PTFs for product 5770xxx completed successfully Applying of PTFs for product 5770xxx completed successfully Applying of PTFs for product 5770xx completed successfully Applying of PTFs completed 544 IBM Flex System p270 Compute Node Planning and Implementation Guide If the PTFs were installed successfully but require a server IPL to activate the changes you see messages as shown in the following example PTF installation process started PTFs installed successfully but actions pending Server IPL required 11 8 Installing software license keys After sys
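The status of an individual PTF can also be checked with the DSPPTF command. The product ID in this sketch is the IBM i base operating system, and the PTF ID is the one that is referenced earlier in this chapter.

DSPPTF LICPGM(5770SS1) SELECT(SI42445)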
286. ed system must not be managed by an HMC or FSM. The VIOS installation process effectively deactivates IVM if another platform manager is detected.
- The designated system to be managed by IVM must not be partitioned.
- The first operating system to be installed must be the VIOS.

7.6  Comparing FSM, HMC, and IVM management

The three management console or device options are FSM, HMC, and IVM. All of these devices work with the CMM. Only one of the management device types can be attached to a Power based compute node at any time.

Changing to a different management console: For more information about the one-way conversion from IVM to HMC, see this website:

http pic dhe ibm com infocenter powersys v3rlm5 topic p7hchl iphch addhmc htm

FSM to HMC conversions require the FSM to unmanage the chassis that contains the Power nodes before the nodes are added as servers to the HMC. HMC to FSM conversions are not supported.

IBM Systems Director and IBM Systems Director Management Console (SDMC) introduced common terminology that can be applied to both Power and Intel based compute nodes. This new terminology is often used interchangeably with HMC and IVM terms. Table 7-2 shows a comparison of these terms.

Table 7-2   Terminology comparison
- HMC: Managed System; IVM: Managed System; FSM: Server; CMM: Compute Node
- HMC: LPAR (logical partition); IVM: LPAR (logical partition); FSM: Virtual Server; CMM: None
- HMC: Partition
287. een this logical partition and the HMC If the channel does not work the SFP application generates a serviceable event in the SFP log Chapter 8 Virtualization 9393 This step ensures that the communications channel can carry service requests from the logical partition to the HMC when needed If this option is not selected the SFP application still collects service request information when there are issues on the managed system This option controls only whether the SFP application automatically tests the connection and generates a serviceable event if the channel does not work Clear this option if you do not want the SFP application to monitor the communications channel between the HMC and the logical partition that is associated with this partition profile gt Start the partition with the managed system automatically This option shows whether this partition profile sets the managed system to activate the logical partition that is associated with this partition profile automatically when you power on the managed system When you power on a managed system the managed system is set to activate certain logical partitions automatically After these logical partitions are activated you must activate any remaining logical partitions manually When you activate this partition profile the partition profile overwrites the current setting for this logical partition with this setting If this option is selected the partition profile set
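On an HMC-managed server, the automatic start setting can also be changed from the CLI by editing the partition profile. The following command is a sketch only; the profile, partition, and managed system names are assumptions for this example.

chsyscfg -r prof -m Server-7954-24X-SN1077E3B -i "name=DefaultProfile,lpar_name=VIOS1,auto_start=1"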
288. elect Operations Power On as shown in Figure 7 106 Systems Management gt Servers View Table se ee ee eR ee Filter Tasks Views Available F Select Name al Erue a Processing Available a Reference Memory SE Code Units E Server 7954 24 SM1 0778282 properties 21 6 22 625 Max Page Siz Operations 200 Configuration Power Management Connections LED Status Hardware Information Schedule Operations Updates Launch Advanced System Management ASM Serviceability Utilization Data Rebuild Change Password lasks Server 7954 24 SN1077026 B Properties Connections Updates El Operations Hardware Information Serviceability Power On Power Management LED Status Schedule Operations Launch Advanced System Management 4 Litilization Data Rebuild Change Password Configuration Figure 7 106 HMC managed server Power On 284 IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 7 107 shows the Power On server window that opens and is used to select the Power On method option Normal or Hardware Discovery The Normal method brings the server to a standby mode if no partitions are set to auto start The Hardware Discovery method temporarily creates and activates an all systems resources partition that is used to collected information such as network MAC addressees and Fibre Channel WWPNs After the detailed hardware information is collected the tem
289. election 802 1Qaz ETS gt Congestion Notification 802 1Qau CN gt Data Center Bridging Capabilities Exchange 802 1Qaz y Several terms are used to describe these DCB standards but the term Converged Enhanced Ethernet CEE is now widely accepted by IBM and several other vendors The official term is Data Center Bridging Figure 6 2 on page 167 shows a perspective on FCoE layering that is compared to other storage networking technologies The FC and FCoE layers are shown with the other storage networking protocols and iSCSI 166 IBM Flex System p270 Compute Node Planning and Implementation Guide deme hie ARI i gnome y Fibre Channel iSCSI FCoE Operating System Application layer Figure 6 2 Storage Network Protocol layering In general an FCoE network contains servers DCB capable switches Fibre Channel Forwarders FCFs that provide FC fabric services and storage devices An existing FC SAN might not be present For example for compute node connectivity to an IBM Flex System V7000 Storage Node the connection link is by I O module lossless Ethernet FCF switches a connected FC SAN does not have to be present Chapter 6 Converged networking 167 Figure 6 3 shows an example of FCoE connectivity of a compute node via the CN4093 10Gb Converged Scalable Switch to LAN SAN and the IBM Flex System V7000 Storage Node The CN4093 10Gb Converged Scalable Switch is providing FCF and DCB functi
290. em on the primary node of the PureFlex Express configuration The primary OS can be one of the following options gt AIX v6 1 gt AIX v7 1 gt IBM i v7 1 RHEL and SUSE Linux on Power VIOS is preinstalled on each Linux on Power compute node for the virtualization layer Client operating systems such as RHEL and SLES can be ordered with the PureFlex Express configuration but they are not preinstalled The following Linux on Power versions are available gt RHEL v5U9 POWER7 gt RHEL v6U4 POWER7 or POWER7 gt SLES v11SP2 2 5 10 Available software for x86 based compute nodes x86 based compute nodes can be ordered with VMware ESXi 5 1 hypervisor preinstalled to an internal USB key Operating systems that are ordered with x86 based nodes are not preinstalled The following operating systems are available for x86 based nodes Microsoft Windows Server 2008 Release 2 Microsoft Windows Server Standard 2012 Microsoft Windows Server Datacenter 2012 Microsoft Windows Server Storage 2012 RHEL SLES YYYY Y Y 46 IBM Flex System p270 Compute Node Planning and Implementation Guide 2 6 Services for IBM PureFlex System Express and Enterprise Services are recommended but can be decoupled from a PureFlex configuration The following offerings are available and can be added to either PureFlex offering gt PureFlex Introduction This three day offering provides IBM FSM and storage functions but does not include external i
291. emovable disks that can be attached to the new server You can use the ALT_DISK_INSTALL method to create a full copy of your system rootvg You can then remove that disk from the server and assign it to another server When you start your system your system is cloned For more information see the following Information Center resources gt Cloning the rootvg to an alternative disk with NIM http pic dhe ibm com infocenter aix v 7rl topic com ibm aix instal doc insgdrf basic_install_altdisk_clone htm gt Installing a partition by using alternative disk installation http pic dhe ibm com infocenter aix v rl topic com ibm aix instal doc insgdrf scenario altdisk_install htm gt Running alternative disk installation by using SMIT http pic dhe ibm com infocenter aix v 7rl topic com ibm aix instal doc insgdrf alt_disk_ install using smit htm For more information about the alt_disk_copy alt_disk_mksysb and alt_rootvg op commands see the AIX Information Center at this website http publib16 boulder ibm com pseries Chapter 9 Operating system installation methods 487 488 IBM Flex System p270 Compute Node Planning and Implementation Guide 10 Installing VIOS and AIX In this chapter we describe how to install VIOS and AIX on the IBM Flex System p270 Compute Node This chapter includes the following topics gt 10 1 Installing VIOS on page 490 gt 10 2 Installing AIX on page 491 Copyright IBM
292. ems The p270 supports SUSE Linux Enterprise 11 SP2 or later and Red Hat Enterprise Server Linux 6 4 or later Installation profile Select between Minimal Minimal with X default and full Each profile selects a different set of the distribution packages to have a minimal or a more complete Linux system e Minimal Includes the smallest set of packages that allows the system to boot and to perform basic tasks The disk usage is minimal You can install other packages in the future by using the standard method that is provided by each Linux distribution e Minimal with X Includes all the packages that are included in Minimal It also includes the X Window System a graphical environment that runs on Linux This option is for servers that include a graphics card but still have storage space restrictions Note Power Systems compute nodes do not have a video controller To use the X graphical environment you must use a graphical emulator such as VNC e Default Includes the default package selection for the distribution and provides a balance between disk usage and functionality e Full Includes all the package sets that are provided by the distribution Requires the most disk space Disk partitioning Select to install Linux on automatically partitioned disks or to use manual partitioning N_Port ID Virtualization NPIV is not supported Chapter 12 Installing Linux 563 564 For automatic partitioning choose
293. en press Enter 2 At the Host name prompt specify the name that you defined for your local host name 3 At the Domain name prompt specify the names that you defined for your local domain name Chapter 11 Installing IBMi 549 4 5 At the Host name search priority prompt set the value in one of the following ways Set the value to REMOTE This determines that the system automatically searches the host names in a DNS server first The system queries each DNS server until it receives an answer Set the value to LOCAL This determines that the system searches the host names in a host table first Note If you have a host table entry that is defined for your system set the host name search priority to LOCAL At the Domain name server prompt specify the IP address that represents your DNS server and then press Enter After the TCP IP domain information is defined you can use the character based interface or System i Navigator to change the configurations 11 9 6 Defining a host table You might want to use a host table other than a DNS server to resolve your IP addresses You can ignore this step if you use only a DNS server Like a DNS server a host table is used to associate IP addresses with host names so that you can use easily remember names for your system The host table supports IPv4 and IPv6 addresses To define a host table by using the character based interface complete the following steps
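A host table entry can also be added directly with the ADDTCPHTE CL command. The address and names in this sketch are placeholder values.

ADDTCPHTE INTNETADR('192.0.2.10') HOSTNAME(('flexnode1.example.com' 'flexnode1'))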
294. enablement for AIX 5L V5 3 Technology Level 12 SP4 Version WPARs Support for trusted kernel extension loading and configuration from WPARs Enables exporting a list of kernel extensions that can then be loaded inside a WPAR while maintaining isolation 8 3 POWER Hypervisor The IBM POWER Hypervisor is the foundation of IBM PowerVM By using the POWER Hypervisor you can divide physical system resources into isolated logical partitions Each logical partition operates like an independent system that is running its own operating environment AIX IBM i Linux or the Virtual I O Server The Hypervisor can assign dedicated processors I O and memory which you can dynamically reconfigure as needed to each logical partition The Hypervisor can also assign shared processors to each logical partition by using its micro partitioning feature Unknown to the logical partitions the Hypervisor creates a Shared Processor Pool from which it allocates virtual processors to the logical partitions as needed This means that the Hypervisor creates virtual processors so that logical partitions can share the physical processors while running independent operating environments Combined with features that are designed into the IBM POWER processors the POWER Hypervisor delivers functions that enable capabilities including dedicated processor partitions micro partitioning virtual processors IEEE VLAN compatible virtual switch virtual Ethernet adapters v
295. energy efficiency Intelligent energy optimization capabilities enable the POWER7 processor to operate at a higher clock frequency for increased performance and performance per watt or reduce frequency to save energy This feature is called Turbo Mode and is a no charge capability of the IBM Flex System p270 Compute Node 4 11 1 IBM EnergyScale technology This section describes the design features and the hardware and software requirements of IBM EnergyScale IBM EnergyScale consists of the following elements gt A built in EnergyScale device which is known as the Thermal Power Management Device TPMD This micro controller runs real time firmware whose sole purpose is to manage system energy The TPMD monitors the processor modules memory environmental temperature and fan speed This information is passed back to the CMM to react to environmental conditions gt Power executive software on the IBM Flex System CMM IBM EnergyScale functions include the following elements gt Energy trending EnergyScale provides the continuous collection of real time server energy consumption data This function enables administrators to predict power consumption across their infrastructure and to react to business and processing needs For example administrators might use such information to predict data center energy consumption at various times of the day week or month gt Thermal reporting The CMM displays measured ambient te
296. ensed enabled packaged product license term feature and system The repository can contain license keys for any system and the product does not need to be installed Chapter 11 Installing IBMi 545 If the product is installed on the system when you add license key information to the repository and the license is for this system the ADDLICKEY command also installs the license key When you install the license key the product s current usage limit is changed to the usage limit that is specified by the license key The expiration date is also set If the license key information exists in the license key repository for a product that is installed the license key information is installed as part of the product installation process 11 8 2 Setting usage limit of license managed programs After you complete the installation process and before you make the system available to all users set the usage limit for the software license managed products These products are listed on the Proof of Entitlement POE invoice or other documents that you received with your software order For products that have a usage limit you set the usage limit by using the WRKLICINF command To set your usage limit complete the following steps 1 Go to the Work with License Information display by entering WRKLICINF and pressing Enter 2 On the Work with License Information display press F11 Display Usage Information The usage limit number on each product
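The usage limit can also be set from a command line instead of through the Work with License Information display. The following sketch assumes a hypothetical usage-limited product; the product ID, license term, feature, and limit are placeholders that you replace with the values from your Proof of Entitlement.

WRKLICINF
CHGLICINF PRDID(5770ABC) LICTRM(V7R1) FEATURE(5050) USGLMT(25)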
297. represented as shown in the following example: U78AE.001.ssssss-P1-C18. An FC3172 2-port 8Gb FC Adapter is represented as shown in the following example: U78AE.001.ssssss-P1-C19.

Ports: The ports on the 4-port and 8-port adapters are evenly split across two ASICs on the following adapters:
- EN4054 4-port 10Gb Ethernet Adapter
- EN2024 4-port 1Gb Ethernet Adapter
- CN4058 8-port 10Gb Converged Adapter
- FC5054 4-port 16Gb FC Adapter

Each ASIC and its ports can be assigned independently to different virtual servers. The location code has a suffix of L1 or L2 to distinguish between the two ASICs and their sets of ports. Figure 8-3 shows the expansion card location codes for the p270 (Figure 8-3: p270 adapter location codes). The integrated SAS storage controller has a location code of P1-R1. The USB controller has a location code of P1-T1. On the p270, a second SAS controller option, the IBM Flex System Dual VIOS Adapter, has a location code of P1-C20; this location is physically under P1-C19.

Virtual adapters: Assigning and configuring virtual adapters requires more planning and design. For virtual Ethernet adapters, the VLANs that the virtual servers require access to must be considered. The VIOS bridges the virtual Ethernet adapter to the physical adapter. Therefore, the virtual Ethernet adapter in the VIOS must be configured with all of the VLANs that are required for the virtual servers in
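After the required VLAN IDs are identified, the bridge between the virtual and physical networks is normally implemented in the VIOS as a Shared Ethernet Adapter. The following lines are a sketch that assumes ent0 is the physical (or aggregated) adapter and ent4 is the trunk virtual Ethernet adapter; the device names and the default PVID are placeholders.

mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
lsmap -net -all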
298. eps to review the history log 1 Go to the LICPGM menu by running the GO LICPGM command 2 Select Option 50 Display log for messages 3 Look for any messages that indicate any PTF activity during the previous IPL Normal PTF processing occurs only during an unattended IPL that immediately follows a normal system end If you did not specify Y for Perform Automatic IPL on the Install Options for PTFs display verify that the Power Down System PWRDWNSYS command was run with RESTART YES and that the IPL mode set to normal lf an abnormal IPL occurs some LIC fixes might be installed but no other operating system or licensed program PTFs are applied You can look at the previous end of system status system value QABNORMSW to view whether the previous end of system was normal or abnormal Chapter 11 Installing IBMi 543 4 Look for any messages that indicate that there was a failure during the IPL or that indicate that a server IPL is required If you find any failure messages complete the following steps a Go to the start control program function SCPF job log by using the WRKJOB SCPF command b If you performed an IPL choose the first job that is inactive and review the spooled file for that job c Find the error messages and determine what caused the error d Fix the errors and reinitially load the system to apply the rest of the PTFs You also can perform the following steps to verify that your fixes were install
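For a quick command-line check of the fix status, the following CL commands can also be used; the licensed program ID shown is the IBM i base operating system and is given only as an example.

DSPPTF LICPGM(5770SS1)
WRKPTFGRP
DSPSYSVAL SYSVAL(QABNORMSW)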
299. (Figure 8-42: HMC Create Virtual SCSI Adapter window) The wizard returns to the Virtual Adapters window, which shows an updated table of all created virtual adapters, Ethernet and SCSI, as shown in Figure 8-43 (Figure 8-43: Review virtual adapters; the table lists the created Ethernet, Server SCSI, and Server Serial adapters with their adapter IDs and connecting partitions). Review the table for accuracy. Edits can be made by clicking the wanted adapter number in the Adapter ID column, or by selecting the wanted adapter, using the Actions drop-down menu, and clicking Edit.

10. When the review is complete, click Next.

Optional Settings window: In the Optional Settings window that is shown in Figure 8-44 on page 395, you can perform the following functions:
- Enable connection monitoring: Select this option to enable connection monitoring between the HMC and the logical partition that is associated with this partition profile. When connection monitoring is enabled, the Service Focal Point (SFP) application periodically tests the communications channel betw
300. er firmware allows for the enabling of redundant error path reporting the Redundant Error Path Reporting Capable option on the Capabilities tab in Managed System Properties is True Create Lpar Wizard Server 7954 24X 5N107732B Optional Settings Create Partition Partition Profile wf Processors g P Select optional settings for this partition profile using the fields below E Enable connection monitoring E Automatically start with managed system Profile Summary E Enable redundant error path reporting Boot modes Normal System Management Services SMS Diagnostic with default boot list DIAG_DEFAULT Diagnostic with stored boot list DIAG_STORED Open Firmware OK prompt OPEN_FIRMWARE Finish Figure 8 44 Defined virtual Ethernet adapter properties You can also specify one of the following available boot modes gt Boot modes Select the default boot mode that is associated with this partition profile When you activate this partition profile the system uses this boot mode to start the operating system on the logical partition unless you specify otherwise when you are activating the partition profile The boot mode applies only to AIX Linux and Virtual I O Server logical partitions This area is unavailable for IBM i logical partitions The following valid boot modes are available Normal The logical partition starts as normal This is the mode that you use to complete
301. number of objects that are selected is shown. The example in Figure 7-50 shows a virtual server on the same physical server that was used previously. This virtual server does not have an Operations option from a right-click operation because the physical server is powered off. Also, the State is Not Available (Figure 7-50: Context sensitive menu). When the physical server is powered up, the state for the virtual server changes to a value other than Not Available, typically Stopped or Running. With these values, a right-click of the virtual server now shows an Operations option.

7.8.4 Managing Power compute node basics

Basic compute node management consists primarily of the following tasks:
- Requesting access to the Flexible Service Processor on page 238
- Inventory collection on page 240
- Opening a virtual terminal console with the FSM GUI on page 243
- Updating system firmware on page 247

These tasks are described in the following sections.

Requesting access to t
302. er user So one user with the role UserManagement can manage the users on the system but does not have any further access With RBAC the VIOS can split management functions that can be done only by the padmin user which provides better security by giving only the necessary access to users It also provides easy management and auditing of system functions Suspend Resume By using Suspend Resume you can provide long term suspension greater than 5 10 seconds of partitions saving partition state memory NVRAM and VSP state on persistent storage This action makes server resources available that were in use by that partition which restores the partition state to server resources and resumes operation of that partition and its applications on the same server or on another server The requirements for Suspend Resume dictate that all resources must be virtualized before suspending a partition If the partition is resumed on another server the shared external I O which is the disk and local area network LAN must remain identical Suspend Resume works with AIX and Linux workloads when managed by the Hardware Management Console HMC Shared storage pools You can use VIOS 2 2 to create storage pools that can be accessed by VIOS partitions that are deployed across multiple Power Systems servers Therefore an assigned allocation of storage capacity can be efficiently managed and shared IBM Flex System p270 Compute Node Planning an
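Returning to the RBAC capability, the VIOS provides role commands for listing, assigning, and switching roles. The following lines are only a sketch: the user name is a placeholder, and the exact role names, attributes, and command availability depend on the VIOS level.

lsrole ALL
chuser -attr roles=UserManagement opsadmin
swrole UserManagement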
303. erform a normal mode IPL to the B IPL source b When the Sign On display is shown continue with Verifying fix installation on page 543 2 If the Confirm IPL for Technology Refresh PTFs display is shown complete the following steps to perform the PTF installation process a Press F10 to end all jobs on the system and IPL the system b When the Sign On display is shown enter GO PTF again with the same parameters c If you are installing from a tape or optical device mount the first volume in the PTF volume set After the IPL is complete the subsequent PTF installation process loads the remaining PTFs from the installation device and sets the IPL action to apply the PTFs on the next IPL 3 If the escape messages CPF3615 PTF install processing failed and CPF36BF IPL required for a technology refresh PTF are displayed complete the following steps to complete the PTF installation process a End all jobs on the system and perform a normal mode IPL to the B IPL source b When the Sign On display is shown enter GO PTF again with the same parameters c If you are installing from a tape or optical device mount the first volume in the PTF volume set After the IPL is complete the PTF installation process loads the remaining PTFs from the installation device and sets the IPL action to apply the PTFs on the next IPL 542 IBM Flex System p270 Compute Node Planning and Implementation Guide If you entered Y Yes
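The normal-mode restart that this PTF processing depends on is typically issued with the Power Down System command; the options shown are a minimal example.

PWRDWNSYS OPTION(*IMMED) RESTART(*YES) IPLSRC(B)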
304. erminal The partition ID can be obtained from the ID column in the work area when the View Modify Partitions option was selected from the navigation area Figure 7 152 shows how to open the virtual terminal of a client LPAR through a VIOS telnet session telnet itsoVIOS6A IBM Virtual I 0 Server login padmin padmin s Password Last login Tue Jun 25 14 18 41 CDT 2013 on dev pts 2 from 9 42 170 129 mkvt id 2 AIX Version 7 Copyright IBM Corporation 1982 2011 Console login Figure 7 152 Console window through VIOS CLI To close the virtual terminal from the client LPAR press tilde then a period This key sequence cannot be used at the operating system login of the client LPAR To force a close of the client LPAR console login to the VIOS by using the padmin ID and run the rmvt id lt partition ID gt command Chapter 7 Power node management 315 Updating the system firmware Updating system firmware on an IVM managed compute node is a two step process in which the update is acquired and then applied or installed The following example described the use of the manual download from IBM Fix Central for updating the Licensed Internal Code which is more commonly known as system firmware on a Power compute node Terms The terms system firmware platform firmware Licensed Internal Code LIC and Machine Code are used interchangeably in this section Acquiring system firmware update The system firmware u
305. es identifier integrated drive electronics Institute of Electrical and Electronics Engineers instant messaging integrated management module I O operations per second Internet Protocol IPL ISA ISCLI ISCSI ISL ISO ITSO IVM KB KVM LAG LAN LDAP LED LICPGM LLA LOM LP LPAR LPM LR LSO LUN LVM MAC MB MM MSI MTU NFS initial program load industry standard architecture industry standard command line interface Internet small computer system interface Inter Switch Link International Organization for Standards information technology International Technical Support Organization Integrated Virtualization Manager kilobyte keyboard video mouse link aggregate group local area network Lightweight Directory Access Protocol light emitting diode licensed program Link local address LAN on motherboard low profile logical partitions Live Partition Mobility long range Large Send Offload logical unit number Logical Volume Manager media access control megabyte Management Module Message Signaled Interrupt maximum transmission unit network file system IBM Flex System p270 Compute Node Planning and Implementation Guide NIC NIM NMI NPIV NPV NVRAM OS OSPF PC PCI PCI E PCOMM PDU PF PFC PID POE PSP PSU PTF PVID PXE QDR RAID RAM RAS RBAC RDIMM RDMA RHEL RHN network interface card Network Installation Manager non maskable i
306. (Figure 8-48: VIOS profile, changing processor settings from the FSM. The profile dialog shows the processing mode (Dedicated or Shared), the total host processors, the minimum, desired, and maximum processors, the processor sharing options, and the processor compatibility mode.)

4. Similar observations and modifications can be made regarding the memory settings by clicking the Memory tab in the profile window. The default minimum memory is 256 MB; increase this memory for an AIX virtual server.

5. When all changes are complete, click OK. A change that is made to a profile requires that the virtual server is stopped and reactivated.

Using the HMC GUI: Similar to the FSM, the HMC creates a profile for an LPAR. The HMC create partition wizard is more granular and also allows the selection of minimum and maximum values for CPU and memory allocations. This process can be used as the procedure to modify any profile values as needed. To change a VIOS profile by using the HMC user interface, complete the following steps:

1. Select the newly created VIOS and click Configuration → Manage Profiles, or
307. es which enables connections within or external to the IBM Flex System Enterprise Chassis. The firmware for this four-port adapter is provided by Emulex, while the AIX driver and AIX tool support are provided by IBM. Table 4-13 lists the ordering part number and feature code (Table 4-13: Ordering part number and feature code; feature code 1762, EN4054 4-port 10Gb Ethernet Adapter).

The IBM Flex System EN4054 4-port 10Gb Ethernet Adapter has the following features and specifications:
- Dual ASIC Emulex BladeEngine 3 (BE3) controller, which allows logical partitioning
- On-board flash memory (16 MB) for FC controller program storage
- Uses standard Emulex SLI drivers
- Interoperates with existing FC SAN infrastructures, such as switches, arrays, and SRM tools, including Emulex utilities and SAN practices
- Provides 10 Gb MAC features, such as MSI-X support, jumbo frames (8 KB) support, VLAN tagging (802.1Q), per-priority pause or priority flow control, and advanced packet filtering
- No host operating system changes are required; NIC and HBA functionality, including device management and utilities, are not apparent to the host operating system

Figure 4-22 shows the IBM Flex System EN4054 4-port 10Gb Ethernet Adapter (Figure 4-22: The EN4054 4-p
308. esagent.pLinux: For running Electronic Service Agent (ESA) inside the Linux LPAR instead of using the ESA of the management appliance, which is the recommended method
- IBM Java packages
- nmon: Linux version of the nmon AIX monitoring tool
- Large Page Analysis (lpa)

(Figure 12-19: IBM packages to be installed. The IBM Installation Toolkit for PowerLinux panel lists the selectable IBM packages with their category, an Optional flag, and a See details link for each.)

Use Filter by package to view packages in each category. Select a category and click Apply. Check the box next to the package name to select a package for installation. For more information about each package, click See details. When you have finished selecting packages, click Next. Note: Packages that are unavailable for selection are installed automatically.

24. When prompted, accept the license agreements and click Next.
309. ess Select Option 2 to install the operating system as shown in Figure 11 19 IPL or Install the System System E1277E3B Select one of the following Perform an IPL Install the operating system Use Dedicated Service Tools DST Perform automatic installation of the operating system Save Licensed Internal Code Selection 2 Licensed Internal Code Property of IBM 5770 999 Licensed Internal Code c Copyright IBM Corp 1980 2010 All rights reserved US Government Users Restricted Rights Use duplication or disclosure restricted by GSA ADP schedule Contract with IBM Corp Figure 11 19 IPL or Install Operating System menu Chapter 11 Installing IBMi 519 10 Select your source of operating system media In our example we are using virtual optical from the VIOS This is considered an Optical and not a virtual device as shown in Figure 11 20 Confirm the operating system by pressing F12 Install Device Type Selection System E1277E3B Select the installation device type Tape Optical Virtual device preselected image catalog Current alternate selected device Network device Selection 2 F3 Exit F12 Cancel Figure 11 20 Installation Device Type selection menu 520 IBM Flex System p270 Compute Node Planning and Implementation Guide 11 The Select a Language Group display which shows the primary language currently on the system opens This value should match the language feature nu
310. (End of a switch feature comparison table covering Converged Enhanced Ethernet (CEE), Unified Fabric Port (UFP) support, and IBM VMready; footnote a: Planned support in a later release.)

5.2.2 Virtual LANs

Virtual LANs (VLANs) are commonly used in the Layer 2 network to split up groups of network users into manageable broadcast domains, create a logical segmentation of workgroups, and enforce security policies among logical segments. VLAN considerations include the number and types of supported VLANs, supported VLAN tagging protocols, and specific VLAN configuration protocols that are implemented. All IBM Flex System switch modules support the 802.1Q protocol for VLAN tagging.

Another usage of 802.1Q VLAN tagging is to divide one physical Ethernet interface into several logical interfaces that belong to more than one VLAN. A compute node can send and receive tagged traffic from several VLANs on the same physical interface. This task can be done with network adapter management software (the same that is used for NIC teaming). Each logical interface appears as a separate network adapter in the operating system, with its own set of characteristics, such as IP addresses, protocols, and services. Having several logical interfaces can be useful in cases when an application requires more than two separate interfaces and you do not want to dedicate a whole physical interface to it (for example, not enough interfaces or low traffic). It m
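On a Linux compute node, such tagged logical interfaces can be created with the iproute2 tools, as in the following sketch; the parent interface name, VLAN ID, and address are placeholders, and older RHEL 6 systems may instead use vconfig or ifcfg-eth0.20 style configuration files.

ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 192.0.2.11/24 dev eth0.20
ip link set dev eth0.20 up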
311. (Figure 12-22: Linux installation in progress. The installer log shows the manage_parts.py script deleting and creating partitions on /dev/sda, formatting the Toolkit partition, activating existing LVM volume groups and RAID arrays, creating RAID arrays, and waiting for the arrays to become clean.)

29. Monitor the installation at the console, as shown in Figure 12-23 (Figure 12-23: First reboot. The console shows the yaboot boot loader starting from the virtual SCSI disk and waiting at the boot: prompt).

30. After the reboot, press Enter when prompted.

31. The packages are installed and progress is displayed in the panel, as shown in Figure 12-24. Welcome to Red Hat E
312. etwork E Use this adapter for Ethernet bridging Figure 8 40 Virtual Ethernet values 7 Click OK when the values are specified IBM Flex System p270 Compute Node Planning and Implementation Guide The wizard returns to the Virtual Adapters window that shows an updated table as shown in Figure 8 41 with three virtual Ethernet adapters now defined Create Lpar Wizard Server 7954 23 4X 5N107732B Virtual Adapters Create Partition wf Processors Cte ei a Ethernet Adapter Edit Fibre Channel Adapter tween logical partitions The current Properties SCSI Adapter Virtual Adapters Delete Serial Adapter Optional Settings Profile Summary N A N A N A N A N A N A Server Serial Any Partition Any Partition Slot Server Serial Any Partition Any Partition Slot Total 5 Filtered 5 Selected 0 Ethernet Ethernet Z 3 4 o 1 F a e i a Figure 8 41 HMC Virtual Adapters window updated showing third virtual Ethernet adapter The HMC Virtual Adapters window is also used to create the virtual SCSI adapters Virtual SCSI attachment of disk storage to a client LPAR requires a pair of adapters one on the VIOS or server side the other on the AIX or client LPAR side The VIOS or server side virtual SCSI adapter is created in the next steps 8 Select Actions gt Create SCSI to open the Create Virtual SCSI Adapter window as shown in Figure 8 42 on page 392 Use the following settings Accept the def
313. ew options IBM Flex System p270 Compute Node Planning and Implementation Guide Compute Node management Clicking Compute Nodes gt Compute Nodes as shown in Figure 7 21 displays a list of all compute nodes that are installed in the chassis The Device Name column contains active links the remaining columns are information only IBM Chassis Management Module USERID Settings Chassis Management Mot Module Management Search Chassis Properties and settings for the overall chassis System Status Multi Chassis Monitor Events Service and Support Compute Nodes Properties and settings for compute nodes in a Compute Nodes Storage Nodes Properties and settings for storage nodes in t 10 Modules Properties and settings for IO Modules in the If specifying a power action for multiple nodes please be aware that in case failed executing the action Successful nodes are ignored Different node types may take different amounts of time to complete the power ac immediately reflected on the page In this case the user may have to perform a refre reflected on the page Fans and Cooling Cooling devices installed in your system Power Modules and Management Power devices consumption and allocation Component IP Configuration Single location for you to view and configure the s Se rere eee See eee Chassis Internal Network Provides internal connectivity between compute no Power and Restart Actions Global Set
314. external 10 Gb ports 6x Omni Ports Upgrade 1 Adds 14x internal ports 2x 40 GbE QSFP Upgrade 2 Adds 14x internal ports and 6x Omni Ports gt S14093 System Interconnect Module 42x internal ports 14x 10 Gb and 2x 40 Gb convertible to 8x 10 Gb uplinks Base switch 10x external 10 Gb uplinks 14x internal 10 Gb ports Upgrade 1 Adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports Upgrade 2 Adds 4x external 10 Gb uplinks and14x internal 10 Gb ports The following Fibre Channel modules were announced at the time of this writing gt IBM Flex System FC3171 8Gb SAN Pass thru 14 internal and six external ports 2 Gb 4 Gb and 8 Gb capable gt IBM Flex System FC3171 8Gb SAN Switch 14 internal and six external ports 2 Gb 4 Gb and 8 Gb capable gt IBM Flex System FC5022 16Gb SAN Scalable Switch and IBM Flex System FC5022 24 port 16Gb ESB SAN Scalable Switch 28 internal and 20 external ports 4 Gb 8 Gb and 16 Gb capable FC5022 16 Gb SAN Scalable Switch Any 12 ports FC5022 16 Gb ESB Switch Any 24 ports gt IBM Flex System IB6131 InfiniBand Switch InfiniBand module 14 internal QDR ports up to 40 Gbps 18 external QDR ports Upgradeable to FDR speeds 56 Gbps For more information about available switches see BM PureFlex System and IBM Flex System Products and Technology SG24 7984 which is available at this website http www redbooks ibm com abstracts sg247984 h
315. f overview of the IBM FSM, as shown in Figure 7-4 (Figure 7-4: IBM Flex System Manager). For more information about the FSM when it is used to manage a Power based compute node, see 7.8, Management by using FSM on page 224. Detailed FSM setup and overall usage information is not covered in this document, but is available in Implementing Systems Management of IBM PureFlex System, SG24-8060, which is available at this website: http://www.redbooks.ibm.com/abstracts/sg248060.html

7.3.1 FSM overview

The FSM is a high-performance, scalable system management appliance that is based on the IBM Flex System x240 Compute Node. FSM hardware has Systems Management software preinstalled, and you can configure, monitor, and manage FSM resources in up to four chassis. The FSM looks similar to the x240 Compute Node. However, there are differences that make these two hardware nodes not interchangeable. From a hardware point of view, the FSM is a locked-down compute node with a specific hardware configuration that is designed for optimal performance of the preinstalled software stack. This hardware configuration currently includes an eight-core 2.0 GHz processor, 32 GB of RAM, two 200 GB solid-state drives (SSDs) in a RAID 1 configuration, and one 1 TB hard disk drive (HDD). A management network adapter is a standard feature of the FSM and provides a physical connection into the private management network of the chassis.
316. f the expansion cards connect directly to the midplane such as the CFFh adapters and others do not such as the CIOv and CFFv adapters 4 9 2 PCI hubs 104 The I O is controlled by two P7 IOC I O controller hub chips This configuration provides additional flexibility when resources are assigned within Virtual I O Server VIOS to specific Virtual Machine LPARs IBM Flex System p270 Compute Node Planning and Implementation Guide 4 9 3 Available adapters Table 4 11 shows the available I O adapter cards for Power Systems compute nodes Table 4 11 Supported I O adapter for Power Systems compute nodes Converged Ethernet Adapter neern InfiniBand I O Adapters 1761 IBM Flex System IB6132 2 port QDR InfiniBand Adapter Chapter 4 Product information and technology 105 4 9 4 Adapter naming convention Figure 4 20 shows the naming structure for the I O adapters IBM Flex System EN2024 4 port 1 Gb Ethernet Adapter EN2024D Figure 4 20 Naming structure for the I O expansion cards 4 9 5 IBM Flex System EN2024 4 port 1Gb Ethernet Adapter The IBM Flex System EN2024 4 port 1Gb Ethernet Adapter is a quad port network adapter from Broadcom that provides 1 Gbps full duplex Ethernet links between a compute node and Ethernet switch modules that are installed in the chassis The adapter interfaces to the compute node by using the PCle bus Table 4 12 lists the ordering part number and feature code Table 4
317. f these uplinks is disabled and the other carries traffic from all VLANs However if two STP instances are running one link is disabled for one set of VLANs while carrying traffic from another set of VLANs and vice versa Both links are active thus enabling more efficient use of available bandwidth Layer 2 failover Depending on the configuration each compute node can have one IP address per each Ethernet port or it can have one virtual NIC consisting of two or more physical interfaces with one IP address This configuration is known as NIC teaming technology From an IBM Flex System perspective NIC teaming is useful when you plan to implement high availability configurations with automatic failover if there are internal or external uplink failures 144 IBM Flex System p270 Compute Node Planning and Implementation Guide We can use only two ports on a compute node per virtual NIC for high availability configurations One port is active and the other is standby One port for example the active port is connected to the switch in I O bay 1 and the other port for example the standby port is to be connected to the switch in I O bay 2 If you plan to use an Ethernet expansion card for high availability configurations the same rules apply Active and standby ports need to be connected to a switch in separate bays If there is an internal port or link failure of the active NIC the teaming driver switches the port roles The standby port
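On Linux, the NIC teaming that is described here is usually implemented with the bonding driver in active-backup mode. The following RHEL 6 style configuration files are a sketch; the device names, IP address, and bonding options are placeholders.

/etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.0.2.21
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=active-backup miimon=100"

/etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none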
318. file The virtual server s profile to install against Full path to first VIOS ISO image The location of optical media or virtual ISO file The example that is shown in Figure 9 2 on page 442 uses a virtual ISO file that is in the FSM user ID s home directory New VIOS IP The main interface for the VIOS partition from which it is administered New VIOS network mask The network mask value for the main VIOS IP address Default gateway for new VIOS The gateway address to be assigned to the primary VIOS IP Adapter speed Auto is the only valid value for Power compute nodes Adapter duplex mode Auto is the only valid value for Power compute nodes VLAN tag priority QoS value Setting the VLAN Tag priority for QoS generally the default is accepted VLAN number for VIOS if required This option creates a VLAN device during the installation process Post installation network configuration Determines whether the interface that is specified in the command is configured with the network settings after the installation is complete Chapter 9 Operating system installation methods 441 USERID itsoFSM1 gt installios The following objects of type managed system were found Please select one 1 Server 7895 22X SN10F528A 2 Server 7895 42X SN10078DB 3 Server 7954 24X SNF28D005 Enter a number 1 3 3 The following objects of type virtual I O server partition were found Please select one 1 itsoVIOS6A 2 its
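The same installation can be driven non-interactively by supplying the values as flags to installios. The following single command is a sketch; the managed system, partition, profile, image path, addresses, and MAC address are placeholders, and the exact flag set can differ between FSM and HMC releases.

installios -s Server-7954-24X-SNF28D005 -p itsoVIOS6B -r default_profile -d /home/USERID/vios2.2.iso -i 192.0.2.50 -S 255.255.255.0 -g 192.0.2.1 -m 00:09:6b:aa:bb:cc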
319. formation about these tasks, see the Red Hat Enterprise Linux 6 Installation Guide and the DM Multipath guide, which are available at this website: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/

For more information about VNC, see this website: http://www.realvnc.com

Figure 12-38 shows the network TCP/IP configuration that is required to use VNC (Figure 12-38: Manual TCP/IP configuration for VNC installation. The text-mode installer prompts for the IPv4 or IPv6 address and prefix, the gateway, and the name server; for IPv4, a dotted-quad netmask or a CIDR-style prefix is acceptable.)

Figure 12-39 shows the VNC graphical console start (Figure 12-39: VNC server running. The anaconda installer reports that the VNC server is running with a password and asks you to manually connect a VNC client to the listed address to begin the graphical installation.)

7. Connect to the IP address that is listed in Figure 12-39 with a VNC clie
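The VNC installation mode is requested with anaconda boot options at the yaboot prompt before the installer starts. The following line is a sketch; the boot label, password, and network values are placeholders.

boot: linux vnc vncpassword=itsovnc1 ip=192.0.2.30 netmask=255.255.255.0 gateway=192.0.2.1 dns=192.0.2.53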
320. (Figure 8-80: Create virtual SCSI adapter. The dialog shows the Connecting virtual server and Connecting adapter ID fields.) Click OK.

8. The main Virtual Storage Adapters window opens, as shown in Figure 8-81. We create only one virtual SCSI adapter, so click Next (Figure 8-81: IBM i virtual server settings for virtual SCSI adapter. The panel lists the defined adapter, ID 13, type SCSI Client, connecting to the VIOS virtual server, and notes that storage adapter configuration can be handled automatically if VIOS servers with an active RMC connection are available.)

Important: Do not forget to configure the virtual SCSI server adapter on the VIOS to which this virtual SCSI client adapter refers. In addition, disks must be provisioned to the virtual SCSI server adapter in the VIOS to be used by the IBM i virtual server operating system and data. To use a virtual optical drive from the VIOS for the IBM i operating system installation, the installation media ISO files must be copied to the VIOS and the virtual optical devices must be created.

9. In the physical adapter settings window, do not select physical adapters for IBM i virtual servers, as shown
321. from the Tasks menu click Manage Profiles under Configuration as shown in Figure 8 49 Systems Management gt 4 Properties Ch Default Profil bee r ange Default Profile amen ae Operations l Environment Configuration l Select gt Name A Active Profile l Hardware Information a Bl tsoviosea 2 Dynamic partitioning Console Window au T T a _b uy T T o T a co Serviceability asks itsoVIOSGA l8 Properties E Configuration Console Window Change Default Profile g Manage Profiles Serviceability E Operations Manage Custom Groups Activate Save Current Configuration Deactivate Attention LED Hardware Information Schedule Operations Dynamic partitioning Delete Figure 8 49 Manage VIOS profiles to change settings from HMC A window opens and shows all of the profiles that are available for the selected LPAR Select the profile to edit and click Actions Edit or click the profile name In this example click the Processors tab to access the processor settings that were made by the Create Partition wizard The window that is shown in Figure 8 50 on page 402 opens Values can be changed in this window to match the current requirements for the VIOS virtual server Change the minimum desired and maximum values as needed Chapter 8 Virtualization 401 402 Logical Partition Profile Properties
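The same profile attributes can be changed from the HMC command line with chsyscfg and verified with lssyscfg. This is a sketch that assumes a dedicated-processor profile; the managed system, partition, profile name, and values are placeholders, and shared-processor profiles use attributes such as desired_proc_units instead.

chsyscfg -r prof -m Server-7954-24X-SN107732B -i "name=default_profile,lpar_name=itsoVIOS6A,min_procs=1,desired_procs=2,max_procs=4,min_mem=1024,desired_mem=4096,max_mem=8192"
lssyscfg -r prof -m Server-7954-24X-SN107732B --filter lpar_names=itsoVIOS6A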
322. g a Power compute node to an HMC . . . 269
7.9.3 Power compute node management basics . . . 283
7.10 Management by using IVM . . . 299
7.10.1 Installing IVM . . . 299
7.10.2 Accessing IVM . . . 299
7.10.3 Power compute node basic management . . . 300
7.10.4 Service and Support . . . 326
Chapter 8. Virtualization . . . 333
8.1 Introduction . . . 334
8.2 PowerVM . . . 334
8.2.1 PowerVM editions . . . 336
8.2.2 PowerVM features . . . 338
8.3 POWER Hypervisor . . . 340
8.3.1 Logical partitioning technologies . . . 342
8.3.2 Virtual I/O adapters . . . 343
8.4 Planning for a virtual server environment . . . 346
8.5 Creating a VIOS virtual server . . . 349
8.5.1 Using the FSM . . . 349
8.5.2 Using the HMC . . . 354
8.5.3 Modifying the VIOS profile . . . 399
8.6 Creating an AIX or Linux virtual server . . . 413
8.6.1 Using the IVM GUI . . . 413
8.7 Creating an IBM i virtual server . . . 422
8.8 Creating a full system partition . . .
323. g steps to verify completion of the INZSYS process following the first system IPL not in restricted state 1 Go to the LICPGM menu by using the GO LICPGM command 2 Select Option 50 Display log for messages and look for the following messages Initialize System INZSYS started Initialize System INZSYS processing completed successfully CPC37A9 If you do not see the completed message or if the message Initialize System INZSYS failed appears review the job log to determine the problem Use the information in the job log to correct the problem then restart the conversion process 536 IBM Flex System p270 Compute Node Planning and Implementation Guide 11 7 Installing Program Temporary Fix packages It is strongly advised that after a new operating system is installed you install the most current cumulative Program Temporary Fix PTF package and any applicable PTF groups for your installed software PTF packages can be ordered via IBM Fix Central IBM ID required which is available at this website http ibm com support fixcentral 11 7 1 Reviewing fix cover letters before installation Determine whether there are any special instructions that you should be aware of before you install your fixes You should always review your cover letters to determine whether there are any special instructions If you are installing a cumulative PTF package you should read the instructions that are included with that package If
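Cumulative packages and PTF groups can also be ordered electronically from the system itself, and the installed group levels reviewed afterward. The group identifier below is the cumulative package group that is generally used for IBM i 7.1 and is shown only as an example; confirm the correct identifier for your release on Fix Central.

SNDPTFORD PTFID((SF99710))
WRKPTFGRP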
324. gain to verify the change.

(Figure 9-41: Using the unloadopt command to unload an ISO image file from a virtual target device. The lsrep output shows the AIX7TLISP1 image loaded in vtopt0 with read-only access; after unloadopt -vtd vtopt0 is run, lsrep shows that the image is no longer loaded in an optical device.)

9.5.4 Using the optical device as an installation source

When a physical or virtual optical device is ready to install a virtual server or partition, complete the following steps to perform an optical media installation:
1. If a physical device is used, ensure that the external USB optical drive is attached to the USB port of the Power Systems compute node and powered on, or create the appropriate virtual optical device.
2. Insert the installation media into the optical drive, or associate a media repository image with a file-backed virtual target device.
3. Reboot or power on the server, virtual server, or partition, and press 1 when prompted to access SMS mode, as shown in Figure 9-42 (Figure 9-42: the SMS boot splash screen, which displays rows of the IBM logo while the firmware initializes).
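The virtual optical device that is referred to in step 2 is backed by the VIOS media repository. A minimal sequence to create the repository, import an ISO image, create a file-backed optical device, and load the image is shown below; the storage pool, image name, file path, and vhost adapter are placeholders.

mkrep -sp rootvg -size 10G
mkvopt -name AIX7TL1SP1 -file /home/padmin/aix7_tl1_sp1.iso -ro
mkvdev -fbo -vadapter vhost0
loadopt -vtd vtopt0 -disk AIX7TL1SP1
lsrep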
325. ge System Flash again The Update and Manage Flash window is refreshed with the committed firmware levels as shown in Figure 7 165 UPDATE AND MANAGE FLASH 802810 The current permanent system firmware image is FW773 00 AF773 019 The current temporary system firmware image is FW773 00 AF773 019 The system is currently booted from the temporary firmware image Move cursor to selection then press Enter Validate and Update System Firmware Validate System Firmware Commit the Temporary Image F1 Help FIO Exit F3 Previous Menu Figure 7 165 Validate and Update System Firmware option Chapter 7 Power node management 323 7 Select Validate and Update System Firmware to start the update process As shown in Figure 7 166 the full path to the firmware update image file is requested in the next window In this example the following path is used tmp fwupdate 01AF773_ 021 021 img Press F7 to confirm the entry UPDATE AND MANAGE FLASH 802812 Enter the fully qualified path name of the file with the flash update image The file will be copied to var update flash image When finished use Commit to continue flash update image file lt l_021 img F1 Help F2 Refresh F3 Cancel F4 List F5 Reset F7 Commit F10 Exit Figure 7 166 Entering the full path to the update image file 324 IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 7 167 shows the levels of the new temporary image and the curre
326. (Figure 8-5: IBM Flex System Manager home window. The home window describes deploying compute node images directly from the Flex System Manager to System x compute nodes; to deploy other operating systems or to deploy to System p compute nodes, see the Learn more about deploying operating systems link. It also offers Update Chassis Components, for updating chassis components including compute nodes, storage nodes, and I/O modules, and Launch IBM FSM Explorer, an easy way to find and browse resources, monitor status and events, and launch management tasks.)

4. Click the Plug-ins tab to display the list of installed plug-ins. The list of installed plug-ins opens, as shown in Figure 8-6 (Figure 8-6: the Plug-ins tab, which states that, depending on its readiness, a plug-in might be ready to use or might require additional setup and configuration, and lists each installed plug-in, such as IBM Flex System Manager, IBM Flex System Manager Server, Discovery Manager, and Status Manager, with its version, readiness state, and related tasks).
327. gu View as HTML administrators and system operators using the HMC Managing the HMC vi guide Provides an online version of Managing the HMC v7 guide for system 4 View as HTML system operators using the HMC Servicing the HMC wr guide Provides an online version of Servicing the HMC vY guide for system a View as HTML system operators using the HMC amp f HMC Readme Provides hints and errata information about the HMC Online Information Additional related online information Figure 8 25 HMC Welcome window 374 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 From the left side navigation area expand the Servers options and click the wanted server or managed system The Server page opens in the work pane area Figure 8 26 shows the list of LPARs that are defined for the managed system In this example no LPARs exist and the VIOS LPAR is the first to be created on the selected managed system Hardware Management Console a A systems Management gt Servers gt Server 7954 24X SN107732B Welcome ee P lel Filter Tasks Views E i Systems Management Select A Name A o A Status A aR A Memory GB is Sealy a Ear ee a f E Servers i z z j i E Serer T9 24X SNM107 r228 H pr er Max Page Size ecules n Total O Fitered 0 Selected 0 System Plans M HMC Management ai Service Management eam Updates asks Server 7954 24 SN107732B a Properties Connections Ser
328. gure 7-34 shows the FSM Chassis Manager graphical view of a managed chassis with p270 Compute Nodes in bays 6, 7, and 8 (Figure 7-34: FSM discovered chassis graphical view. The Chassis Manager page shows the Enterprise Chassis map with a prompt to select one element from the map to view details.)

FSM compute node access: Figure 7-35 on page 229 shows the same chassis in a table view. The table view has a column that is labeled Access. The wanted status is OK for compute and storage nodes and I/O modules. With this status, the FSM can communicate directly with the FSP in a Power compute node. If the Access level is No Access, see Requesting access to the Flexible Service Processor on page 238. (Figure 7-35: the chassis table view, which lists each component, such as the chassis power supplies, with Name, Type, Access, Hardware, and Compliance columns.)
329. h a padmin line command or assisted through the diagnostic function Both methods are shown for this example From the padmin user ID or protected shell of the VIOS the 1dfware command can be used to manage and install the system firmware as shown in Figure 7 155 Idfware Option flag is not valid Usage ldfware dev Device file filename ldfware commit ldfware reject Figure 7 155 Idfware command usage options Although typically not required committing the current temporary firmware image to the permanent location should be considered as a general firmware maintenance task Figure 7 156 shows the commit option of the 1dfware command The commit process takes several minutes to complete Idfware commit The commit operation is in progress Please stand by The commit operation was successful Figure 7 156 Idfware commit option Chapter 7 Powernode management 317 5 Figure 7 157 shows the 1dfware command is used to update the system firmware Provide the full path name to the image file with the file attribute Idfware file tmp fwupdate 01AF773_ 021 021 img The image is valid and would update the temporary image to FW 73 00 AF773 021 The new firmware level for the permanent image would be FW773 00 AF773 019 The current permanent system firmware image is FW773 00 AF773 019 The current temporary system firmware image is FW773 00 AF773 019 WARNING Continuing will rebo
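Before and after running ldfware, the active firmware level can be confirmed from the VIOS restricted shell, or from an AIX partition, with the following commands; the update file path matches the example that is used in this section.

lsfware
lsmcode -c          (from an AIX partition)
ldfware -file /tmp/fwupdate/01AF773_021_021.img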
330. hannel and SAN connectivity gt Hard disk drives HDDs and solid state drives SSDs If you choose to use your Power Systems compute node with internal disks your memory choices can be affected SAS and SATA HDD options are available and SSDs Very Low Profile VLP memory DIMMs are required if HDDs are chosen as described in 4 8 Storage on page 98 If Low Profile LP memory options are chosen only SSDs can be used for internal storage Choosing the disk type that best suits your needs involves evaluating the size speed and price of the options 130 IBM Flex System p270 Compute Node Planning and Implementation Guide gt Memory Your Power Systems compute node supports various memory configurations The memory configuration can be dependent on certain configurations of internal disks that are installed as described Hard disk drives HDDs and solid state drives SSDs on page 130 Mixing both types of memory is not recommended Active memory expansion AME is available on POWER7 as is Active Memory Sharing AMS when PowerVM Enterprise Edition is used For more information about AMS see IBM PowerVM Virtualization Introduction and Configuration SG24 7940 and IBM PowerVM Virtualization Managing and Monitoring SG24 7590 gt Processor Several processor options are available for the IBM Flex System p270 Compute Node as described in 4 5 1 Processor options on page 82 Evaluate the processor quantity and
331. hat counted with mechanical switches or vacuum tubes to the first programmable computers IBM was a part of this growth while always helping customers solve problems Information technology IT is a constant part of business and of our lives IBM expertise in delivering IT solutions helped the planet become smarter As organizational leaders seek to extract more real value from their data business processes and other key investments IT is moving to the strategic center of business To meet those business demands IBM introduces a new category of systems that combine the flexibility of general purpose systems the elasticity of cloud computing and the simplicity of an appliance that is tuned to the workload Expert integrated systems are the building blocks of this capability This new category of systems represents the collective knowledge of thousands of deployments established preferred practices innovative thinking IT leadership and distilled expertise The new IBM Flex System p270 Compute Node is part of this new Expert Integrated category of systems Copyright IBM Corp 2013 All rights reserved 1 This chapter includes the following topics 1 1 IBM PureFlex System 1 2 Choosing an IBM PureFlex System or IBM Flex System on page 4 1 3 IBM Flex System p270 Compute Node on page 5 1 4 Flex System components on page 5 1 5 This book on page 13 M YY vV Yy 2 IBM Flex System p270 Compute
332. he Flexible Service Processor Typically a Power compute node is automatically discovered and accessed unlocked through the CMM discovery process and FSM chassis management IBM Flex System p270 Compute Node Planning and Implementation Guide The access must be shown as OK before most operations can be performed This access allows the FSM to talk to the Power compute node s Flexible Service Processor FSP The following example shows a discovered node in a No Access condition and how to correct the issue Figure 7 51 shows one of the two available Power compute nodes or servers to be in a No Access condition Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources O t Hosts H server 7954 24x sNioz7a2 L efermance Summary Search the table Se re Virtual Servers La Operating Systems jn Power Units Select Mame Access 2 State A C Server 7954 24x SNi0778 M no access Not Available Figure 7 51 Power compute node in No access state To request access complete the following steps 1 Click No Access in the Access column 2 Inthe Request Access window that opens as shown in Figure 7 52 provide an FSM administrator UserlD centrally managed systems or CMM supervisor UserID non centrally managed systems and password then click Request Access In the Access column the No Access status should change to OK Request Access Specify the user ID and
333. he chassis When the fan module is being replaced the 80 mm fan modules cool the I O modules and the Chassis Management Module Figure 3 9 shows cooling zones 3 and 4 that service the I O modules Cooling zone 4 Cooling zone 3 Figure 3 9 Cooling zones 3 and 4 3 6 3 Power supply cooling The power supply modules have two integrated 40 mm fans Installation or replacement of a power supply and fans is done as a single unit The integral power supply fans are not dependent upon the power supply being functional Rather they are powered independently from the midplane 72 IBM Flex System p270 Compute Node Planning and Implementation Guide Product information and technology The IBM Flex System p270 Compute Node is based on IBM POWER7 architecture and provides a high density high performance environment for AIX Linux and IBM i workloads This chapter includes the following topics 4 1 Overview on page 74 4 2 Front panel on page 76 4 3 Chassis support on page 80 4 4 System architecture on page 81 4 5 IBM POWER7 processor on page 82 4 6 Memory subsystem on page 93 4 7 Active Memory Expansion on page 96 4 8 Storage on page 98 4 9 I O adapters on page 102 4 10 System management on page 118 4 11 IBM EnergyScale on page 119 4 12 Anchor card on page 124 4 13 External USB device support on page 125 4 14 Operating s
334. he degree of expansion varies based on how compressible the memory content is and having adequate spare processor Capacity available for the compression and decompression Tests in IBM laboratories that use sample workloads showed excellent results for many workloads in terms of memory expansion per additional processor used Other test workloads had more modest results Clients have a great deal of control over Active Memory Expansion usage Each AIX partition can turn on or off Active Memory Expansion Control parameters set the amount of expansion that is wanted in each partition to help control the amount of processor that is used by the Active Memory Expansion function An IBM Public License IPL is required for the specific partition that is turning on or off memory expansion After it is turned on monitoring capabilities in standard AIX performance tools are available such as Iparstat vmstat topas and svmon Figure 4 14 on page 97 represents the percentage of processor that is used to compress memory for two partitions with various profiles The green curve corresponds to a partition that has spare processing power capacity The blue curve corresponds to a partition that is constrained in processing power IBM Flex System p270 Compute Node Planning and Implementation Guide CPU 1 utilization 1 Plenty of spare for CPU resource available expansion 2 Constrained CPU resource already running at significant ut
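After Active Memory Expansion is enabled for a partition, the processor cost of compression can be observed with the standard AIX tools that are named above, and the amepat planning tool can model different expansion factors before enabling the feature. The sampling intervals and monitoring duration below are placeholders.

lparstat -c 5 3
vmstat -c 5 3
amepat 10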
335. he large number of manufacturers of these devices, not every device can be guaranteed support.

External power: Non-IBM USB DVD-RAM, tape, and RDX drives must use an external power supply.

(Table 4-19: Non-IBM USB devices that can attach to the Power Systems compute nodes; columns indicate support under VIOS, AIX and Linux, and IBM i. The table notes read as follows.)
a. The AIX operating system supports the mksysb system backup and restore operations by using any of the USB removable media types. The AIX operating system does not support the use of a USB device as a target for an AIX operating system installation. The AIX operating system and VIOS only support writing to DVD-RAM media, but can read all optical media formats through the read interface of the device driver.
b. Only USB tape drives and USB DVD-RAM drives can be virtual devices in a client partition. For all other USB devices, the USB controller must be assigned to a partition for the partition to have access to the USB device.
c. Boot from a USB flash drive can only be used for AIX stand-alone diagnostics or mksysb system restore. Booting or installing AIX based media from a USB flash drive is not supported.

4.14 Operating system support

The p270 is designed to run AIX, VIOS, IBM i, and Linux. For more information about the supported operating systems, see 5.1.2, Software planning on page 132.
336. he other two ports from the second ASIC are unused dotted blue lines The implication is if each ASIC is assigned to a different VIOS and both upgrades are installed the first VIOS has four active ports and the second VIOS has two active ports Chapter 6 Converged networking 165 For more information about Fibre Channel over Ethernet FCoE that uses high speed Ethernet networks and recommendations see Storage and Network Convergence Using FCoE and iSCSI SG24 7986 which is available at this website http www redbooks ibm com abstracts sg247986 html 6 1 1 Fibre Channel over Ethernet FCoE is a method of sending FC protocol traffic directly over an Ethernet network It relies on a new Ethernet transport with extensions that provide the lossless transmission that the Fibre Channel Backbone 5 FC BB 5 standard specifies for operation This means that an Ethernet network cannot discard frames in the presence of congestion Such an Ethernet network is called a lossless Ethernet in this standard The standard also states that devices must ensure in order delivery of FCoE frames within the Lossless Ethernet network The set of extensions that are fundamental to FCoE fall under the DCB standard The enhancements provide a converged network that allows multiple applications to run over a single physical infrastructure The following DCB standards are included Priority based Flow Control 802 1Qbb PFC gt Enhanced Transmission S
337. ...the technology of the POWER7+ and POWER7 processors. Characteristic (POWER7+ / POWER7): maximum cores, 8 / 8; maximum SMT threads per core, 4 / 4; maximum frequency, 4.3 GHz / 4.25 GHz; L2 cache, 256 KB per core / 256 KB per core; L3 cache, 10 MB of FLR-L3 cache per core with each core having access to the full 80 MB of L3 cache (on-chip eDRAM) / 4 MB or 8 MB of FLR-L3 cache per core with each core having access to the full 32 MB of L3 cache (on-chip eDRAM); memory support, DDR3 / DDR3. 4.6 Memory subsystem: Each POWER7+ processor that is used in the compute nodes has an integrated memory controller. Industry-standard DDR3 Registered DIMM (RDIMM) technology is used to increase the reliability, speed, and density of the memory subsystem. 4.6.1 Memory placement rules: The minimum and maximum configurable memory for the p270 is listed in Table 4-6 (Configurable memory limits): for all p270 models, the maximum memory is 512 GB (16x 32 GB DIMMs). While the functional minimum memory is shown in Table 4-6, it is recommended to use a minimum of 2 GB of memory per core in the p270 (48 GB), which provides sufficient memory for reasonable production usage of the machine. Low Profile and Very Low Profile form factors: One benefit of deploying IBM Flex System systems is the ability to use Low Profile (LP) memory DIMMs. This de...
338. The various SMT modes that are offered by the POWER7 processor provide flexibility, enabling the selection of the threading technology that meets a combination of objectives, such as performance, throughput, energy use, and workload enablement. Intelligent threads: The POWER7 processor features intelligent threads, which can vary based on the workload demand. The system automatically selects (or the system administrator can manually select) whether a workload benefits from dedicating as much capability as possible to a single thread of work, or whether the workload benefits more from spreading this capability across two or four threads of work. With more threads, the POWER7 processor delivers more total capacity to accomplish more tasks in parallel; with fewer threads, workloads that require fast individual tasks get the performance that they need for maximum benefit. Memory access: Each POWER7 processor chip in the compute node has two DDR3 memory controllers with two memory channels. Each channel operates at 6.4 Gbps and can address up to 64 GB of memory. Thus, each POWER7 DCM that is used in these compute nodes can address up to 256 GB of memory. Figure 4-11 gives a simple overview of the p270 Compute Node memory access structure, showing the two processor DCMs, CP0 and CP1. (Figure 4-11 Overview of POWER7 memory access structure)
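On AIX, the SMT mode that a workload runs under can be queried and changed with the smtctl command; a brief sketch follows (the thread counts shown are examples only, not a recommendation):

   smtctl                  # display the current SMT capability and mode of each processor
   smtctl -t 2 -w now      # switch to two threads per core immediately (use -w boot to persist across reboots)
   smtctl -m off -w now    # disable SMT (single-threaded mode)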
339. hen you are planning the power consumption for your Power Systems compute node you must consider the server estimated power consumption highs and lows that are based on the power supply features that are installed in the chassis and tools such as the IBM Power Configurator You can use these features to manage measure and monitor your energy consumption 5 7 1 Power supply features The peak power consumption is 626 W for the IBM Flex System p270 Compute Node with power provided by the chassis power supplies The maximum measured value is the worst case power consumption that is expected from a fully populated server under an intensive workload It also takes into account component tolerance and non ideal operating conditions Power consumption and heat load vary greatly by server configuration and use Use the IBM Systems Energy Estimator to obtain a heat output estimate that is based on a specific configuration The Estimator is available at this website http www 912 ibm com see EnergyEstimator 5 7 2 PDU and UPS planning 152 Planning considerations for your IBM Flex System configuration depend on your geographical location Your need for power distribution units PDUs and uninterruptible power supply UPS units varies based on the electrical power that feeds your data center AC or DC 220 V or 110 V and so on These specifications define the PDUs UPS units cables and support you need For more information about planning
340. ... 76
4.2.1 Light path diagnostic LED panel ... 77
4.2.2 ... 79
4.3 Chassis support ... 80
4.4 System architecture ... 81
4.5 IBM POWER7 processor ... 82
4.5.1 Processor options ... 82
4.5.2 ... 83
4.5.3 Architecture ... 84
4.6 Memory subsystem ... 93
4.6.1 Memory placement rules ... 93
4.7 Active Memory Expansion ... 96
4.8 Storage ... 98
4.8.1 Storage configuration impact to memory configuration ... 99
4.8.2 Local storage and cover options ... 100
4.8.3 Local drive connection ... 101
4.8.4 RAID capabilities ... 102
4.9 I/O adapters ... 102
4.9.1 I/O adapter slots ... 103
4.9.2 PCIe hubs ... 104
4.9.3 Available adapters ... 105
4.9.4 Adapter naming convention ... 106
4.9.5 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
341. 6. Select the device type, in this case option 3 (CD/DVD). The window that is shown in Figure 9-46 opens (SMS banner: Version AF773_033, SMS 1.7, Copyright IBM Corp. 2000, 2008, all rights reserved), with the Select Media Type menu: 1. SCSI; 2. SSA; 3. SAN; 4. SAS; 5. SATA; 6. USB; 7. IDE; 8. ISA; 9. List All Devices. Navigation keys: M returns to the Main Menu, ESC returns to the previous screen, and X exits System Management Services; type the menu item number and press Enter, or select a navigation key. (Figure 9-46 Device type selection) 7. Select option 6 (USB) as the media type. The window that is shown in Figure 9-47 opens and shows the list of available USB optical drives. In our example, a virtual optical drive is shown as item 1: U7954.24X.1077E3B-V6-C2-T1 /vdevice/v-scsi@30000002 (item 2 is List all devices). What you see depends on the drive that you connected. (Figure 9-47 Select media adapter) 8. Select your optical drive. The window that is shown in Figure 9-48 on page 478 opens.
342. hose limitations 5 9 1 Virtual servers without VIOS Partitions on a Power Systems compute node without VIOS might be available on certain configurations as described in the following configuration examples You can use the IBM Flex System Manager or HMC management to configure them gt Sample Configuration One p270 Compute Node with one EN2024 4 port 1Gb Ethernet Adapter 48 GB of memory internal disks and an FC3172 2 port 8Gb FC Adapter In this sample you can create the following partitions Partition 1 consists of the following components One processor 24 GB of memory Internal disks One port on the EN2024 4 port 1Gb Ethernet Adapter AIX operating system Partition 2 consists of the following components e One processor e 24 GB of memory e SAN attached disks through the FC3172 2 port 8Gb FC Adapter e One port on the EN2024 4 port 1Gb Ethernet Adapter e Linux operating system gt Sample Configuration 2 One p270 Compute Node with two CN4058 8 port 10Gb Converged Adapters and 96 GB of memory In this sample you can create the following partitions Partition 1 consists of the following components e One processor e 40 GB of memory 160 IBM Flex System p270 Compute Node Planning and Implementation Guide e SAN attached disks through the CN4058 8 port 10Gb Converged Adapter e One CN4058 8 port 10Gb Converged Adapter ASIC for networking e AIX operating system Partition 2 consists of th
343. ...all rights reserved.) Main Menu: 1. Select Language; 2. Setup Remote IPL (Initial Program Load); 3. Change SCSI Settings; 4. Select Console; 5. Select Boot Options. Type the menu item number and press Enter, or select a navigation key. (Figure 9-1 SMS Main Menu) 9.3 Installios installation of the VIOS: installios can be used only for installing the VIOS. The installios procedure for installing the VIOS can be run from the FSM or an HMC; installios is not an option if you are preparing a Power compute node for management by IVM. In this section, we describe the installation methodology via the FSM. The following steps are used to run installios: 1. Ensure that the Power compute node is in an OK state from the FSM. 2. Create a virtual server on the node for a VIOS environment. 3. Copy the VIOS ISO images to the FSM. 4. Run the installios command, interactively or as a single command. 9.3.1 Interactive installation: Complete the following steps to use the interactive method. 1. Start the interactive installation process by entering the installios command, as shown in Figure 9-2 on page 442. Enter the following information: Desired server, the physical server that is targeted for VIOS installation; Desired virtual server, the server partition on which to install VIOS, which should include the hardware that you want to use for virtualization to client partitions; Desired pro...
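For reference, installios can also be run non-interactively by supplying the same information as command-line flags. The following single-command form is a sketch only; the managed system, partition, profile, network values, MAC address, and image path are placeholders that must match your environment, and the exact flags should be verified on your FSM or HMC level:

   installios -s Server-7954-24X-SN107782B -p itsoVIOS6A -r OriginalProfile \
     -i 10.1.9.91 -S 255.255.252.0 -g 10.1.9.1 \
     -m 42:db:fe:36:16:04 -d /home/USERID/vios_1of2.iso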
344. 11.5 Installing Licensed Programs: After IBM i is installed, as described in 11.4, "Installing the IBM i operating system" on page 513, the installation of Licensed Programs can be performed. Note: Ensure that you are logged on to the operating system with a user profile that has Security Officer authority, such as QSECOFR. 1. Enter the following commands to ensure that the system is in a restricted state and that you can monitor pertinent messages as they appear: CHGMSGQ QSYSOPR *BREAK SEV(60), which puts the system operator message queue into break mode for your session to alert you to any messages of severity 60 or higher; ENDSBS *ALL *IMMED, which ends all active subsystems and brings the system to an effective restricted state (a break message might appear that states "System ended to restricted condition"); CHGMSGQ QSYSOPR SEV(95), which changes the system message queue to break into the session only for messages of severity 95 or higher. 2. Enter GO LICPGM to go to the Work with Licensed Programs menu, as shown in Figure 11-29 (LICPGM, Work with Licensed Programs, System E1277E3B). Select one of the following: Manual Install, 1. Install all; Preparation, 5. Prepare for install; Licensed Programs, 10. Display installed licensed programs, 11. Install licensed programs, 12. Delete licensed programs, 13. Save licensed programs (More...). Selection or
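As a consolidated reference, the sequence that is described above can be entered from an IBM i command line in keyword form as follows (a sketch; the inline comments are for readability only):

   CHGMSGQ MSGQ(QSYSOPR) DLVRY(*BREAK) SEV(60)   /* Break on messages of severity 60 or higher  */
   ENDSBS SBS(*ALL) OPTION(*IMMED)               /* End all subsystems: restricted state        */
   CHGMSGQ MSGQ(QSYSOPR) SEV(95)                 /* Break only for severity 95 or higher        */
   GO LICPGM                                     /* Open the Work with Licensed Programs menu   */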
345. ...Media Speed (ethernet): Autodetection; Partition Communication: Enabled; DHCP Server: Enable DHCP server, Address Range; IPv4 Address: No IPv4 address, Obtain an IP address automatically (DHCP), or Specify an IP address (TCP/IP interface address 0.0.0.0, TCP/IP interface network mask 255.255.255.0). (Figure 7-93 LAN Adapter Details, Basic Settings tab) The following options are available: Local Area Network Information. The LAN interface address shows the Media Access Control (MAC) address of the card and the adapter name; these values uniquely identify the LAN adapter and cannot be changed. Private: A private network is used by the HMC to communicate with its managed systems. The term private refers to the HMC service network; the only elements on the physical network are the HMC and the service processors of the managed systems. Open: The term open refers to any general, public network that contains elements other than HMCs and service processors and that is not isolated behind an HMC. The other network connections on the HMC are considered open, which means that they are configured in the way that you expect when any standard network device is attached to an open network. An open network connects the HMC outside the managed system. Media speed: Specifies the speed and duplex mode of an Ethernet adapter. The options are Autodetection, 10 Mbps Half Duplex, 10 Mbps F...
346. ic Service Agent application automatically monitors and collects hardware problem information and sends this information to IBM e View Modify System Properties support It also can collect hardware software system configuration and performance management information which may help IBM support e View Modify Shared Memory Pool assist in diagnosing problems I O Adapter Management e View Modify Virtual Ethernet Launch the Electronic Service Agent interface e View Modify Physical Adapters View Virtual Fibre Channel Virtual Storage Management e View Modify Virtual Storage IVM Management e View Modify User Accounts e View Modify TCP IP Settings e Guided Setup Enter PowerVM Edition Key Service Management e Electronic Service Agent Service Focal Point e Manage Serviceable Events Service Utilities e Create Serviceable Event e Manage Dumps e Collect VPD Information Updates Backup Restore Application Logs Monitor Tasks Hardware Inventory Figure 7 174 Starting ESA from the IVM user interface 5 Click Launch the Electronic Service Area interface to open the ESA window as shown in Figure 7 175 The ESA uses the padmin User ID and password Enter this information and click OK Electronic Service Agent Welcome enter your information Electronic Service Agent requires you to log in using a valid username and password from the System root or Administrator group of the local operating system User ID
347. ice 1 Figure 10 2 Installation options 492 IBM Flex System p270 Compute Node Planning and Implementation Guide You can install the operating system by using one of the following options Option 1 Start Install Now with Default Settings begins the installation by using the default options Option 2 Change Show Installation Settings and Install displays several options as shown in Figure 10 3 Installation and Settings Either type 0 and press Enter to install with current settings or type the number of the setting you want to change and press Enter 1 System Settings Method of Installation New and Complete Overwrite Disk Where You Want to Install hdisk0O Primary Language Environment Settings AFTER Install Cultural Convention English United States Language English United States Keyboard English United States Keyboard Type Default Security Model Default More Options Software install options Select Edition express Install with the settings listed above 88 Help WARNING Base Operating System Installation will 99 Previous Menu destroy or impair recovery of ALL data on the destination disk hdiskO gt gt gt Choice 0 Figure 10 3 Installation settings In this window the following settings are available After you change and confirm your selections enter 0 and press Enter to begin the installation Option 1 Systems Settings refers to the installation method and destinatio
348. icensed Internal Code LIC Concurrency Server 954 24 SN1077827B Click a table row and click View Information to see the activated and retrievable LIC levels for that target Current LIC repository location FTF site Concurrency Status Server 7954 24y SN107792B6 All must be disruptively activated Select the type of installation to perform O Concurrent install and activate O Concurrent install only with deferred disruptive activate Disruptive install and activate Power off automatically if necessary Shortest overall update time Disruptive install and activate Delay power off with confirmation Shortest system down time Figure 7 126 Update installation concurrency options 11 The license agreement for the update must be accepted to continue Click Agree to continue IBM Flex System p270 Compute Node Planning and Implementation Guide 12 Figure 7 127 shows the update wizard that is continuing with a request to confirm the update action Click Finish to proceed with the update Change Licensed Internal Code Wizard Confirm the Action Server 7954 24X 8N1077827B Attention You are about to start disruptive install and activate MOTICE During activation of the new firmware level all YTERM windows will be closed The following LIC types will be Updated on each target Click a table row and click View Levels to see the levels that will be active for that target after the operation complete
349. ...services, so it can connect directly to storage devices or to other SAN switches where physical connectivity and interoperability permit. The EN4093R 10Gb Scalable Switch does not run FCF services, so it requires connectivity to an upstream switch before it connects to SAN switches or FC equipment. For more information about FCF, see 6.1.4, "Fibre Channel Forwarders" on page 170. The converged adapter that is supported by the IBM Flex System p270 Compute Node is the IBM Flex System CN4058 8-port 10Gb Converged Adapter. Table 6-2 lists the supported IBM Flex System switch modules that provide connectivity for the CN4058 8-port 10Gb Converged Adapter: feature code 3593, IBM Flex System Fabric EN4093R 10Gb Scalable Switch; and the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch. 6.1.4 Fibre Channel Forwarders: The CN4093 10Gb Converged Scalable Switch can act as an optional Fibre Channel Forwarder (FCF). The FCF function is the FC switching element in an FCoE fabric. It provides functions that are analogous to the functions that are provided by an FC switch in a traditional FC fabric; the most basic function is the forwarding of FCoE frames that are received on one port to another port, based on the destination address in the encapsulated FC frame. The FCF also handles Fabric Login (FLOGI), Fabric Provided MAC Address (FPMA)
350. ies and guidelines of Power compute nodes see Chapter 7 Power node management on page 183 8 IBM Flex System p270 Compute Node Planning and Implementation Guide For more information about IVM see ntegrated Virtualization Manager for IBM Power Systems Servers REDP 4061 which is available at this website http www redbooks ibm com abstracts redp4061 html For more information about HMC see IBM Power Systems HMC Implementation and Usage Guide SG24 7491 which is available at this website http www redbooks ibm com abstracts sg247491 html 1 4 4 Chassis I O modules Data center networking is undergoing a transition from a discrete traditional model to a more flexible optimized model The network architecture in IBM Flex System is designed to address the key challenges customers are facing today in their data centers The key focus areas of the network architecture on this platform are unified network management optimized and automated network virtualization and a simplified network infrastructure Providing innovation leadership and choice in the I O module portfolio uniquely positions IBM Flex System to provide meaningful solutions to address customer needs The following I O technologies are available for Flex System 40 Gb Ethernet switches 10 Gb Ethernet switches and pass thru modules 10 Gb Converged networking switches 1 Gb Ethernet switches 16 Gb Fibre Channel switches 8 Gb Fibre Channel switches and p
351. ...capabilities of a lossless 10 Gbps Ethernet now offer a realistic environment for a converged network. This section describes how the IBM Flex System p270 Compute Node can use the IBM Flex System CN4058 8-port 10Gb Converged Adapter with the EN4093R 10Gb Scalable Switch or the CN4093 10Gb Converged Scalable Switch to run converged network traffic over a single adapter type. Figure 6-1 shows the internal layout of the CN4058 for consideration when ports are assigned for use on VIOS for TCP and FCP traffic. Red lines indicate connections from ASIC 1 on the CN4058 adapter, and blue lines are the connections from ASIC 2; the dotted blue lines are reserved for future use, when switches are offered that support all eight ports of the adapter. (Figure 6-1 Internal layout of the CN4058 adapter connected to a CN4093, EN4093R, or SI4093 switch: ASIC 1 and ASIC 2 each provide four ports, wired to I/O Module 1 and I/O Module 2.) Note: Port position INTDx is reserved for future use. Dual VIOS note: Enabling both upgrade licenses enables all 42 internal switch ports (the A, B, and C sets). The first ASIC connects to one A, one B, and two C ports (the red lines). The second ASIC connects to one A and one B port (the solid blue lines).
352. (Figure 7-146 ASMI real-time progress messages: the display cycles through boot progress codes, such as C14720FF, and "Starting kernel" messages before reaching the Running state.) Opening a SOL terminal for the VIOS LPAR: A virtual terminal session for the first LPAR, or VIOS LPAR, of an IVM-managed system requires the use of SOL. This virtual terminal session can be used for the VIOS installation process and for general access before and after an IP address is configured for the VIOS. Flex System and SOL: When a Power Systems compute node is managed by IVM, SOL must be enabled for the node, and globally for the entire chassis by the CMM, to allow access to the first partition, or VIOS (by definition, VIOS must be in the first LPAR on IVM-managed systems). By default, SOL is enabled on Flex System or BTO systems. SOL to a server partition is started after establishing a Secure Shell (SSH) session to the CMM. After an SSH login to the CMM is complete, use one of the following commands to open the terminal session: Method 1: console -T blade[x]; Method 2: env -T blade[x], then console. The first method directs the console command to the specified blade slot number. The second method sets
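For example, an SOL session to the VIOS on the compute node in bay 6 might be opened as follows; the CMM address, user ID, and bay number are placeholders for this sketch:

   ssh USERID@9.42.171.5        (SSH to the CMM)
   console -T blade[6]          (Method 1: open the SOL console to the node in bay 6)
   env -T blade[6]              (Method 2: set the command target to bay 6 ...)
   console                      (... then open the console)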
353. ...might also help to implement strict security policies for separating network traffic by using VLANs, while having access to server resources from other VLANs without needing to implement Layer 3 routing in the network. To be sure that the deployed application supports logical interfaces, check the application documentation for possible restrictions that apply to NIC teaming configurations, especially in the case of clustering solution implementations. For more information about Ethernet switch modules, see IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984, which is available at this website: http://www.redbooks.ibm.com/abstracts/sg247984.html 5.3 SAN connectivity: SAN connectivity in the Power Systems compute nodes is provided by the expansion cards. The SAN Fibre Channel (FC) adapters that are currently supported by the Power Systems compute nodes are listed in Table 5-4 on page 140. For more information about the supported expansion cards, see 4.9, "I/O adapters" on page 102. For information about Fibre Channel over Ethernet (FCoE) converged networking, see 5.4, "Converged networking" on page 141. Table 5-4 Supported FC adapters: 1764, IBM Flex System FC3172 2-port 8Gb FC Adapter; EC23, IBM Flex System FC5052 2-port 16Gb FC Adapter; EC24, IBM Flex System FC5054 4-port 16Gb FC Adapter. Fibre Channel I/O modules are installed in the IBM Flex System chassis
354. IBM Flex System p270 Compute Node Planning and Implementation Guide. Describes the new POWER7+ compute node for IBM Flex System; provides detailed product and planning information; explains setting up converged networking, partitioning, and OS installation. David Watts, Kerry Anders, Simon Casey, Fabien Willmann. ibm.com/redbooks. International Technical Support Organization: IBM Flex System p270 Compute Node Planning and Implementation Guide, December 2013, SG24-8166-00. Note: Before using this information and the product it supports, read the information in "Notices" on page xi. First Edition (December 2013). This edition applies to the IBM Flex System p270 Compute Node, 7954-24X. Copyright International Business Machines Corporation 2013. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Contents: Notices ... xi; Trademarks ... xii; Preface ... xiii; Authors ... xiv; Now you can become a published author, too ... xvi; Comments welcome ... xvi; Stay connected to IBM Redbooks ... xvii; Chapter 1. Introduction
355. (Figure 4-14 Processor usage versus memory expansion effectiveness: the y-axis is the CPU utilization that is used for expansion and the x-axis is the amount of memory expansion; the annotations mark a partition that is already running at significant utilization and a region that is very cost effective.) Both cases show the following "knee of the curve" relationship for the processor resources that are required for memory expansion: busy processor cores do not have resources to spare for expansion, and the more memory expansion that is done, the more processor resources are required. The knee varies depending on how compressible the memory contents are. This situation demonstrates the need for a case-by-case study to determine whether memory expansion can provide a positive return on investment (ROI). To help you perform this study, a planning tool is included with AIX V6.1 TL4 SP2 or later. You can use this planning tool to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power compute node model can run the planning tool. Figure 4-15 on page 98 shows an example of the output that is returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the wanted effective memory, and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core to benefit from 45% extra memory capacity. (Figure 4-15 excerpt: Active Memory Expansion Modeled Statistics; Modeled Expanded Memory Size: 8.00 GB; Expansion ...)
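The planning tool that is referred to here is the amepat command. A minimal sketch of invoking it on a running AIX partition follows (the monitoring duration is an arbitrary example):

   amepat 10      # monitor the current workload for 10 minutes, then report modeled expansion
                  # factors together with the estimated processor cost of each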
356. ...in Figure 9-32. The column that is labeled Client Partition ID displays the virtual server or partition number. $ lsmap -all shows SVSA vhost0, Physloc U7954.24X.06D996A-V1-C11, Client Partition ID 0x00000002, and VTD: NO VIRTUAL TARGET DEVICE FOUND. (Figure 9-32 Determining the vhost adapter for the client virtual server or partition) The DVD drive device name can be determined by using the lsdev | grep cd command, as shown in Figure 9-33. The device must be in an Available state to be used; in this example, the device name is cd0 and it is in an Available state ($ lsdev | grep cd returns: cd0 Available). (Figure 9-33 Using the lsdev command to determine the optical device name and state) The optical device is virtualized to the client virtual server or partition by using the mkvdev command, as shown in Figure 9-34 on page 468. The virtualized device is cd0 and the vadapter is vhost0, which is associated with the desired client virtual server or partition. $ mkvdev -vdev cd0 -vadapter vhost0 returns: vtopt0 Available. (Figure 9-34 Using the VIOS command line to virtualize the optical device) After the mkvdev command completes, it can be verified by using the lsmap -all command, as shown in Figure 9-35: SVSA vhost0, Physloc U7895.42X.1047BEB-V1-C5, Client Partition ID 0x00000002; VTD vtopt0, Status Available, LUN 0x8100000000000000, Backing device cd0, Physloc U78A5.001.WIHB1D3-P1-T1-L1-L2-L3, Mirrored N
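Taken together, the VIOS command sequence for virtualizing the physical DVD drive to client partition 2 looks like the following sketch (the device and adapter names are the ones used in this example and differ on other systems):

   $ lsmap -all                           # identify the vhost adapter mapped to client partition ID 0x00000002
   $ lsdev | grep cd                      # confirm that the optical device (cd0) is Available
   $ mkvdev -vdev cd0 -vadapter vhost0    # create the virtual target device (vtopt0)
   $ lsmap -vadapter vhost0               # verify the new vtopt0 mapping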
357. ing system running applications and virtual server definition Import virtual appliance packages that exist in the Open Virtualization Format OVF from the Internet or other external sources Deploy virtual appliances quickly to create virtual servers that meet the demands of your ever changing business needs Create capture and manage workloads Create server system pools where you can consolidate your resources and workloads into distinct and manageable groups Deploy virtual appliances into server system pools Manage server system pools including adding hosts or more storage space and monitoring the health of the resources and the status of the workloads in them Group storage systems by using storage system pools to increase resource usage and automation Manage storage system pools by adding storage editing the storage system pool policy and monitoring the health of the storage resources 194 IBM Flex System p270 Compute Node Planning and Implementation Guide gt Additional features A resource oriented chassis map provides an instant graphical view of chassis resources including nodes and I O modules e A fly over provides an instant view of individual server s node status and inventory e A chassis map provides an inventory view of chassis components a view of active statuses that require administrative attention and a compliance view of server node firmware e Actions can be taken on nodes su
358. ...install a suitable 5250 emulator, then configure the console by using one of the following methods. If you are using the System i Access emulator, follow the first two steps that are described in the document that is found at this website: http www ibm com support docview wss uid nas137396cfd6 d5ef 5886256 01000bda50 If you are using IBM Personal Communications, complete the following steps: a. Click Start or Configure Sessions. b. Click New Session. c. Select iSeries as the type of host, then click Link Parameters. d. For the Primary host name or IP address, enter the same IP address as defined for entry to the FSM GUI. e. Change the Port field to 2300. f. Click OK twice. g. Configure the properties for the session with a user ID sign-on information value of Use HMC 5250 console settings, and enter Not Secured for the Security value. A 5250 emulation console window appears, and the console is configured.
359. ion The desired I O resources can be dynamically DLPAR removed from the partition Typically physical I O adapters that are assigned for the VIOS LPAR are added as required 2 In this example click Add as required Figure 8 32 on page 382 Create Lpar Wizard Server 7954 24X SN107782B I O Create Partition Partition Profile Physical 1 0 Processors Detailed below are the physical I O resources for the managed system Select which adapters from the list you would like included in the profile and then add the adapters to the profile as Desired or Required Click on an adapter to view more detailed adapter information Aree EE Add as desired Virtual Adapters Optional Settings ff BD Wf lel select Action Profile Summary E pee ee ie en ee U SAE 001 W45R02E P1 R1 PCI E SAS Controller U SAE 001 W45SRO02E P1 T1 PCI to PCI bridge U7SAE 001 W2ZS5RO02E P1 C18 L1 EN4054 4 port 10Gb Ethernet Adapter U7SAE 001 W2ZS5RO2E P1 C18 L2 EN4054 4 port 10Gb Ethernet Adapter Back next gt Figure 8 32 HMC I O assignment window 3 The I O window is refreshed as shown in Figure 8 33 on page 383 with the Added column in the table updated to reflect the Required or Desired state Click Next to continue to the Virtual Adapters window 382 IBM Flex System p270 Compute Node Planning and Implementation Guide Create Lpar Wizard Server 7954 24X SN107782B f Create Partition w Partition Profile w Proce
360. ion displays any currently defined network gateways for the HMC Entries in the table can be selected and changed or deleted by clicking Change or Delete New entries can be made by clicking New Default gateway information Typically as a minimum a default gateway must be configured for the HMC The gateway information shown if any is locked and cannot be changed or edited from this window The default gateway information provides the following components gt Gateway address The default gateway is the route to all networks The gateway informs each personal computer or other network device where to send data if the target station is not on the same subnet as the source gt Gateway device Network interface that is used as a gateway device 278 IBM Flex System p270 Compute Node Planning and Implementation Guide To add a new gateway click New The Route Entry window opens as shown in Figure 7 98 Route Entry Position After currently selected entry Before currently selected entry Route Type ONet Host Default Destination Subnet mask Adapter Figure 7 98 Route Entry window Select the Default route type and provide the IP address of the gateway and then click OK The routing information table is updated with the default gateway information Enable routed option You use the Enable routed option to enable or disable the network routing daemon which is routed If disabled this optio
361. ...region of L3 cache (FLR-L3), but also has access to other L3 cache regions as shared L3 cache. Additionally, each core can negotiate to use the FLR-L3 cache that is associated with another core, depending on reference patterns. Data can also be cloned to be stored in more than one core's FLR-L3 cache, again depending on reference patterns. This intelligent cache management enables the POWER7 processor to optimize the access to L3 cache lines and minimize overall cache latencies. Figure 4-12 shows the FLR-L3 cache regions for the cores on the POWER7+ processor chip design. This is the same overall design as the POWER7 processor; the POWER7+ implements this design in a smaller die and packages two chips per processor package. (Figure 4-12 FLR-L3 cache regions on the POWER7+ processor) The innovation of the use of eDRAM on the POWER7 processor die is significant for the following reasons: Latency improvement, a six-to-one latency improvement occurs by moving the L3 cache on chip, compared to L3 accesses on an external (on-ceramic) application-specific integrated circuit (ASIC); Bandwidth improvement, a 2x bandwidth improvement occurs with the on-chip interconnect, and frequency and bus sizes are increased to and from each core.
362. irtual SCSI adapters virtual Fibre Channel adapters and virtual consoles The POWER Hypervisor is a firmware layer that sits between the hosted operating systems and the server hardware as shown in Figure 8 1 on page 341 340 IBM Flex System p270 Compute Node Planning and Implementation Guide Virtual and physical resources Partition Partition Partition E O l 2 O l O l S i O O l l N o o Server hardware resources Figure 8 1 POWER Hypervisor abstracts physical server hardware The POWER Hypervisor is always installed and activated regardless of system configuration The POWER Hypervisor has no specific or dedicated processor resources that are assigned to it Memory is required to support the resource assignment to the logical partitions on the server The amount of memory that is required by the POWER Hypervisor firmware varies according to the following factors gt Number of logical partitions gt Number of physical and virtual I O devices that are used by the logical partitions gt Maximum memory values that are specified in the logical partition profiles The POWER Hypervisor performs the following tasks gt Enforces partition integrity by providing a security layer between logical partitions gt Provides an abstraction layer between the physical hardware resources and the logical partitions that are using them It controls the dispatch of virtua
363. ...This web server is hosting the Hardware Management Console application. Click the link below to begin: "Log on and launch the Hardware Management Console web application". You can also view the online help for the Hardware Management Console. (The Welcome window also shows status indicators: system status, "Status is good"; Attention LEDs, "Status is good"; Serviceable Events, "One or more Serviceable Events".) (Figure 7-85 HMC Welcome window) To log on to the HMC, click Log on and launch the Hardware Management Console web application from the Welcome window. The Logon window opens, as shown in Figure 7-86 (Hardware Management Console Logon: enter a userid and password and click Logon). The HMC is supplied with a predefined user ID, hscroot, with the default password abc123. When you update your password, you can no longer keep it at six characters; the minimum length for a password is now seven characters. The user ID and password are case-sensitive and must be entered exactly. Session preservation: With HMC Version 7, you can remain in the graphical user interface (GUI) session across logins, as shown in Figure 7-87. If you want to preserve your session, choose Disconnect and then click OK. (Choose to Logoff or Disconnect: Would you like to log off the console or disco...)
364. ise Linux 6 2 6 3 Red Hat Enterprise Linux 6 2 6 3 i t with Kernel based Virtual Machine KWM and VMware vSphere 5 1 with IBM Customization you can deploy the image directly fram the Flex System Manager ta System compute nodes To deploy other operating systems or to deploy ta System p compute nodes see the link below for more information Learn more about deploying operating systems Update Chassis Components Update chassis components including compute nodes storage nodes and IO modules O meee auch IBM FSM Explorer 5 IBM FSM Explorer is an easy way to find and browse resources monitor _ status and events and launch management tasks Figure 7 33 FSM home tab 7 8 2 Connecting a Power compute node to the FSM The following dependencies are available for managing a Power based compute node from the FSM gt The CMM must successfully complete the discovery process of the node gt The compute node s IP address is within the same subnet as the CMM gt The FSM successfully managed the chassis containing the node gt The FSM unlocked or successfully accessed the node s FSP The complete process for these dependencies is not described in this document but they are summarized next 226 IBM Flex System p270 Compute Node Planning and Implementation Guide CMM discovery When the chassis is powered up the CMM restarted or a compute node is inserted a discovery process autom
365. ...it is not a cumulative package, you should display and print your fix cover letters because they can contain special instructions. If you read your cover letters, you can avoid problems that can result in time-consuming recovery. If there are any pre-installation special instructions in any of the cover letters, follow those instructions first. 11.7.2 Preparing the system for installation of PTFs: To ensure a successful installation of PTFs, for immediate apply or during an IPL, the settings in Table 11-1 are recommended for those system values that affect PTF processing. Table 11-1 Recommended settings that affect PTF processing: system value QALWOBJRST, recommended setting *ALL or *ALWPTF. 11.7.3 Installing a cumulative PTF package: You must order and install the current cumulative PTF package for a new installation of the operating system. Also, perform this on a periodic basis, according to your fix maintenance strategy, or when you install a new release of a licensed program, to keep your system at the most current fix level. Note: The cumulative PTF package automatically includes the most recent Database PTF group and HIPER PTF group. To simplify the process of installing a cumulative PTF package from media, some special instructions might be automated during installation when possible. It is important that you thoroughly read the installation instructions that are included with your package. Th
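For example, the QALWOBJRST system value from Table 11-1 can be checked and, if necessary, changed from an IBM i command line as follows (a sketch; choose the value that matches your security policy):

   DSPSYSVAL SYSVAL(QALWOBJRST)                /* Display the current setting                      */
   CHGSYSVAL SYSVAL(QALWOBJRST) VALUE(*ALL)    /* Allow all objects to be restored (*ALWPTF is the narrower alternative) */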
366. ition or partitions and then select the task System Overview Total system memory 16 GB Total processing units 16 Memory available 13 12 GB Processing units available 14 4 Reserved firmware memory 896 MB Processor pool utilization 0 08 0 5 System attention LED Inactive Partition Details A Create Partition Shutdown More Tasks kd Select ID Name State Uptime Memory Processors Entitled Processing Utilized Processing Reference Units Units Code El 1 10 17B4B Running iad 2 GB 16 1 6 0 08 r Days i Figure 8 56 IVM View Modify Partitions view 406 IBM Flex System p270 Compute Node Planning and Implementation Guide 8 Click the default name to open the partition properties window as shown in Figure 8 57 This window includes selectable tabs that are used to modify the management or VIOS partition properties From the General tab the Partition name is altered in the example and all other values on this tab are not changed Partition Properties 10 17B46B 1 Ethernet Memory Processing General Partition name itsoVIOS6A Partition ID 1 Environment Virtual I O Server State Running Attention LED Inactive Settings Boot mode Normal Keylock position Normal Partition workload group participant E Automatically start when system starts Dynamic Logical Partitioning DLPAR Partition hostname or IP address 10 1 9 91 Partition communication state Active Memory
367. (Figure 8-50 Changing processor settings from the HMC: the profile itsoVIOS6A_new for partition itsoVIOS6A on server 7954-24X SN107782B. The Processors tab of the profile shows the processing mode (Dedicated or Shared), the total managed system processors (24.00), the minimum, desired, and maximum processors, the Processor Sharing options (Allow when partition is inactive, Allow when partition is active), and the Processor compatibility mode.) 4. Similar observations and modifications can be made regarding the memory settings by using the Memory tab in the profile window. I/O assignments, virtual adapters, and so on, can also be modified. 5. When all changes are complete, click OK. A change that is made to a profile requires that the virtual server is stopped and reactivated. Using the IVM GUI: IVM-managed LPARs do not have profiles; they use configurations instead, and only one configuration per LPAR is allowed. (The FSM and HMC can create multiple profiles for each virtual server or LPAR.) To change the VIOS configuration by using the IVM user interface, complete the following steps: 1. The IP address of the VIOS must be set before the IVM GUI interface can be accessed. By using an SOL session, log in to the VIOS padmin ID and acknowledge the license prompt by entering "a", then pressing Enter.
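If the VIOS IP address has not yet been configured, it can be set over the SOL session from the padmin command line before the IVM GUI is used. The following is a sketch with placeholder host name, addresses, and interface name:

   $ license -accept
   $ mktcpip -hostname itsoVIOS6A -inetaddr 10.1.9.91 -interface en0 \
       -netmask 255.255.255.0 -gateway 10.1.9.1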
368. k Next e Virtual Server Server 7895 424 85N1058008 548 4 SASS Load source and console Name Memory Select the resources for the load source and console adapters of the IBMi virtual server Processor Load source Ethernet SCSI 13 Ea Storage selection Virtual Storage Alternate restart Adapters eel La Physical IO Load Console source consale ee eee jaa Summary Figure 8 83 IBM i virtual server load source and console settings 11 The Summary window opens Review the information and click Finish to complete the definition The IBM i virtual server is now ready to be activated for load Chapter 8 Virtualization 429 8 8 Creating a full system partition If you need the entire capacity of the Power Systems compute node an operating system can be installed natively on the node The configuration know as a full system partition is similar to the setup for a VIOS virtual server or LPAR All resources of the compute node are assigned to a single partition and virtual adapters cannot be used Full system partitions can be configured and managed by the FSM or HMC I VM managed systems always require VIOS to be installed and do not meet the requirements of a full partition system It is possible to use the Chassis Management Module CMM to allow the installation and perform limited management of a full system partition p270 compute node The operating system is installed to this single virtual server by using the
369. l processors to physical processors and saves and restores all processor state information during virtual processor context switch Chapter 8 Virtualization 341 gt Controls hardware I O interrupts and management facilities for partitions The POWER Hypervisor firmware and the hosted operating systems communicate with each other through POWER Hypervisor calls hcal1s 8 3 1 Logical partitioning technologies 342 Logical partitions LPARs which are also known as virtual servers in Flex System and PureFlex System and virtualization increase usage of system resources and add a new level of configuration possibilities This section provides an overview of these technologies Dedicated LPAR Logical partitioning is available on all POWER5 POWER6 and POWER7 Systems or later This technology offers the ability to make a server run as though it were two or more independent servers When a physical system is logically partitioned the resources on the server are divided into subsets that are called LPARs Processors memory and I O devices can be individually assigned to logical partitions The LPARs hold these resources for exclusive use You can separately install and operate each dedicated LPAR because LPARs run as independent logical servers with the resources allocated to them Because the resources are dedicated to use by the partition it is called Dedicated LPAR Dynamic LPAR By using dynamic logical partitioning DLPAR y
370. l Console 44 IBM Flex System p270 Compute Node Planning and Implementation Guide The console is a 19 inch rack mounted 1 U unit that includes a language specific IBM Travel Keyboard The console kit is used with the Console Breakout cable that is shown in Figure 2 10 This cable provides serial and video connections and two USB ports The Console Breakout cable can be attached to the KVM connector on the front panel of x86 based compute nodes including the FSM Figure 2 10 Console Breakout cable The CMM in the chassis also allows direct connection to nodes via the internal chassis management network that communicates to the FSP or iMM2 on the node which allows remote out of band management 2 5 8 Rack cabinet The Enterprise configuration includes an IBM PureFlex System 42 U Rack Table 2 17 lists the major components of the rack and options Table 2 17 Components of the rack AAS feature XCC feature Description code code 7953 94X 93634AX IBM 42 U 1100 mm Enterprise V2 Dynamic Rack EU21 PureFlex Door ECO1 Gray Door selectable in place of EU21 FC03 Side Cover Kit Black EC02 Rear Door Black flat Chapter 2 IBM PureFlex System 45 2 5 9 Available software for Power Systems compute node In this section we describe the software that is available for the Power Systems compute node Virtual I O Server AIX and IBM i VIOS is preinstalled on each Power Systems compute node with a primary operating syst
371. l for a VIOS client LPAR by using one of the following methods gt IVM user interface gt VIOS command line Opening a virtual terminal with the IVM user interface Open the virtual terminal for the VIOS the only way to access the console remotely for the VIOS managed by IVM and the VIOS clients by using this method Java required Opening the virtual terminal of a partition requires a supported Java enabled browser Complete the following steps to open the virtual terminal of a partition 1 Select the partition for which you want to open a terminal Chapter 7 Power node management 313 2 Click More Tasks Open terminal window as shown in Figure 7 149 View Modify Partitions To perform an action on a partition first select the partition or partitions and then select the task System Overview Total system memory 32 GB Total processing units 24 Memory available 276 62 GB Processing units available 21 6 Reserved firmware memory 1 38 GB Processor pool utilization 1 23 5 1 System attention LED Inactive Partition Details ai Create Partition Shutdown More Tasks More Tasks Open terminal window Delete eag 1 Its0VIOS6A Create based on Operator panel service functions Reference Codes Mobility Figure 7 149 IVM option to open terminal window to an LPAR 3 The virtual terminal window opens and prompts for the VIOS IVM padmin password
372. l processing unit Commercial Processing Workload Cascading Style Sheets configure to order domain controller Data Center Bridging dual chip module device description Dynamic Host Configuration Protocol dual inline memory module dynamic logical partition Domain Name System Dynamic Path Selection Dynamic Reconfiguration Connector drive Digital Signature Algorithm Digital Video Disc error checking and correcting electromagnetic compatibility Electronic Service Agent Enterprise Switch Bundle everything to everything Enhanced Technical Support Fibre Channel Fibre Channel Arbitrated Loop Fibre Channel Forwarder Fibre Channel identifier Fibre Channel over Ethernet Fibre Channel Protocol fourteen data rate 601 FDX FIP FLOGI FPMA FSM FSP FTP GA Gb GB GIF GPU GSA GUI HA HAL HBA HDD HEA HH HMC HTML HTTP I O IBM IDE IEEE IMM IOPS 602 full duplex FCoE Initialization Protocol Fabric Login Fabric Provided MAC Address Flex System Manager Flexible Service Processor File Transfer Protocol general availability gigabit gigabyte graphic interchange format Graphics Processing Unit General Service Agents graphical user interface high availability hardware abstraction layer host bus adapter hard disk drive Host Ethernet Adapter half high Hardware Management Console Hypertext Markup Language Hypertext Transfer Protocol input output International Business Machin
373. latform management IBM SmartCloud Entry is the first tier in a three tier family of cloud offerings that is based on the Common Cloud Stack CCS foundation The following offerings form the CCS gt SmartCloud Entry gt SmartCloud Provisioning gt SmartCloud Orchestrator 50 IBM Flex System p270 Compute Node Planning and Implementation Guide IBM SmartCloud Entry is an ideal choice to get started with a private cloud solution that can scale and expand the number of cloud users and workloads More importantly SmartCloud Entry delivers a single consistent cloud experience that spans multiple hardware platforms and virtualization technologies which makes it a unique solution for enterprises with heterogeneous IT infrastructure and a diverse range of applications SmartCloud Entry provides clients with comprehensive laaS capabilities For enterprise clients who are seeking advanced cloud benefits such as deployment of multi workload patterns and Platform as a Service PaaS capabilities IBM offers various advanced cloud solutions Because IBM s cloud portfolio is built on a common foundation clients can purchase SmartCloud Entry initially and migrate to an advanced cloud solution in the future This standardized architecture facilitates client migrations to the advanced SmartCloud portfolio solutions SmartCloud Entry offers simplified cloud administration with an intuitive interface that lowers administrative overhead and improve
374. le at this website http www redbooks ibm com abstracts sg247590 html The wizard continues with Storage as shown in Figure 11 7 on page 507 For ease of storage management the console can automatically manage the virtual storage adapters that are required for the virtual server You also can individually customize the virtual storage adapters In this instance we are allowing automatic management of virtual adapters Chapter 11 Installing IBMi 505 The following options are now available to provide storage as shown in Figure 11 7 on page 507 Virtual Disks LUNs are created out of a shared storage pool that is addressable by the VIOS which should provide paths to storage for this client partition It is recommended that fully provisioned volumes are provided if virtual disks are used Physical Volumes A hdisk or disks are allocated from available volumes to the VIOS VIOS is queried to see which disks are available and the list is presented to you Fibre Channel Disks are addressed via Virtual Fibre Channel devices rather than virtual SCSI adapters Disks must be presented to the host VIOS physical storage adapter and NPIV addresses that are in place Support For more information about currently supported storage systems and to use NPIV adapters or Fibre Channel disks for IBM i check the System Storage Interoperability Center SSIC which is available at this website http ibm com systems support storage
375. ...double height, double wide (four bays). Intermix of node types is supported. Chassis per 42 U rack: 4. Management: One or two CMMs for basic chassis management; two CMMs form a redundant pair (one CMM is standard in 8721-A1x). The CMM interfaces with the integrated management module (IMM) or flexible service processor (FSP) that is integrated in each compute node in the chassis. There is an optional IBM Flex System Manager appliance for comprehensive management, including virtualization, networking, and storage management. I/O architecture: Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps bandwidth; up to 16 lanes of I/O to a standard node with two adapters. There is a wide variety of networking solutions, including Ethernet, Fibre Channel, FCoE, RoCE, and InfiniBand. Power supplies (model 8721-A1x, x-config, or 7893-92X, e-config): 2500 W or 2100 W power modules, two minimum, six maximum. Up to six power modules provide N+N or N+1 redundant power. Power supplies are 80 PLUS Platinum certified, which provides 95% efficiency at 50% load and 92% efficiency at 100% load. Power capacity of 2500 W or 2100 W output, rated at 200 VAC. Each power supply contains two independently powered 40 mm cooling fan modules. For more information, see 3.5, "Power supplies" on page 63. Fan modules: 10 fan modules (eight 80 mm fan modules and two 40 mm
376. ...release 7.1 TR6. With IBM i 7.1 TR6, the limit is 128; 64 client partitions can share a single NPIV port. Because you have only an 8 Gb or 16 Gb physical port for the NPIV adapter, performance problems occur if too many clients attempt to use the NPIV adapter at the same time (with SVC/V7000, this might include multiple paths to the same LUN). For that reason, we say that you can have 128 unique LUN paths under a single client adapter. This same limit is applied to tape devices that are configured via NPIV: every control path tape drive has two LUNs, and every non-control path tape drive has one LUN that applies to this calculation. This LUN limit applies only to IBM i clients because the limitation is enforced by the IBM i Licensed Internal Code. The limitation of 64 partitions that share a single FC port is enforced by the HMC and VIOS, so it applies to any type of client partition. 11.2 Creating an IBM i client virtual server: To create the IBM i client virtual server, complete the following steps: 1. Create the IBM i client virtual server definition. On the FSM GUI, under the Manage Power System Resources tab, right-click the host server on which you want to create the client and select System Configuration, then Create Virtual Server, as shown in Figure 11-2. The Create Virtual Server wizard starts.
377. ...Select Device: Device Number 1, Current Position 1, Device Name SCSI CD-ROM (location U7954.24X.1077E3B-V2-C2-T1/L8200000000000000). Navigation keys: M returns to the Main Menu, ESC returns to the previous screen; type the menu item number and press Enter, or select a navigation key. (Figure 12-36 SCSI CD-ROM in position one) 6. Select the drive from which you want to boot. As shown in Figure 12-36, there is only one drive to select, which is the virtual optical media that is linked to the Red Hat Enterprise Linux DVD ISO image. The system now boots from the ISO image. Figure 12-37 shows the boot of the virtual media and the VNC parameters: "Welcome to the 64-bit Red Hat Enterprise Linux 6.1 installer. Hit <TAB> for boot options." followed by "Welcome to yaboot version 1.3.14 (Red Hat 1.3.14-35.el6_0.1). Enter "help" to get some basic usage information." At the boot: prompt, linux vnc vncpassword=mypassword is entered. It is possible to stop the boot process by pressing the Tab key; you can then enter the following optional parameters on the command line. To use VNC and perform an installation in a graphical environment, run the linux vnc vncpassword=yourpwd command; the password must be at least six characters long. To install Red Hat Enterprise Linux 6.1 on a multipath external disk, run the linux mpath command. For more in
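As a combined reference, the following entries could be typed at the yaboot boot: prompt; the password and the particular combination of options are illustrative only:

   boot: linux vnc vncpassword=mypassword         (graphical installation over VNC)
   boot: linux mpath                              (installation onto a multipath external disk)
   boot: linux vnc vncpassword=mypassword mpath   (both options together)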
378. lect Task screen for the Interpartition Logical LAN device (location code U7954.24X.1077E3B-V5-C4-T1) offers three options: 1. Information, 2. Normal Mode Boot, and 3. Service Mode Boot.

Figure 9-24 Select boot mode

28. Click X to exit SMS.
29. Respond to the prompt to confirm the exit. In the next window, select Yes. Your installation displays a window similar to the one that is shown in Figure 9-25, which reports the chosen network settings (type ethernet, auto, none, auto), the server IP (9.42.241.191), the client IP (9.27.20.216), the gateway IP (9.27.20.1), the device and its MAC address and location code, the BOOTP request retry attempt, and the TFTP parameters (subnet mask 255.255.252.0, boot file tftpboot/vios2.7954.stglabs.ibm.com, TFTP retries 5, block size 512).

Figure 9-25 Machine booting from NIM

30. Proceed with the operating system installation as normal.

9.5 Optical media installation

Optical media (physical or virtual) is another method for installing system
379. lect the IEEE 802.1Q capable adapter option to allow future dynamic adds of VLANs. Select the Use this adapter for Ethernet bridging option and set the Priority value. In a dual VIOS environment that intends to use one of the high availability modes, the corresponding adapters on each VIOS with the same Port Virtual Ethernet value must have a unique priority. Figure 8-36 ("Virtual Ethernet values when used for a SEA") shows the Create Virtual Ethernet Adapter dialog on Server 7954-24X: VSwitch ETHERNET0 (Default), Port Virtual Ethernet (VLAN ID) 4091, the "This adapter is required for virtual server activation" option selected, the IEEE 802.1Q compatible adapter option selected with a maximum of 20 VLANs, and the Use this adapter for Ethernet bridging option selected with a Priority value set.
3. Click OK when the values are specified.
The wizard returns to the Virtual Adapters window, which shows an updated table that reflects the previous steps, as shown in Figure 8-37.
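After the bridging (trunk) virtual Ethernet adapter exists and the VIOS is running, the Shared Ethernet Adapter itself is typically created from the VIOS command line. The following is a hedged sketch only; the ent device numbers are hypothetical and must be matched to your own physical and trunk adapters:

$ lsdev -type adapter                 # identify the physical adapter (here ent0) and the trunk virtual adapter (here ent4)
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 4091
ent5 Available                        # ent5 is the new Shared Ethernet Adapter

The -defaultid value normally matches the Port Virtual Ethernet (PVID) that was assigned to the bridging adapter, 4091 in this example.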
380. les Properties and settings for I O Modules in the chassis Fans and Cooling Cooling devices installed in your system Power Modules and Management Power devices consumption and allocation Component IP Configuration Single location for you to view and configure the various IP address setting of chad Chassis Internal Network Provides internal connectivity between compute node ports and the internal CMM m Hardware Topology Hierarchical view of components in your chassis Generate Reports of hardware information Figure 7 140 Starting the Component IP Configuration page from the CMM 2 From the menu line click Chassis Management gt Component IP Configuration Chapter 7 Power node management 305 3 Figure 7 141 shows the Component IP Configuration page From the table click View of the wanted node The IP information for the service processor FSP in this example is shown IBM Chassis Management Module USERID Se A System Status Multi Cnassis Monitor Events Service and Support Chassis Management Mgt Module Management v Component IP Configuration I O Modules Bay Device Name IPv4 Enabled IP Address 1 IO Module 1 Component IP configuration Node 06 node06 p270 2 IO Module 2 3 IO Module 3 IPv4 Addresses 9 42 171 37 IPv6 Addresses Compute Nodes fd8c 215d 178e cOde 3640 b5ff fea7 24F Bay Device Name fe80 3640 b5ff fea7 24e 1 Node 01 node01 x240 a 2 Node 02 node02 x240 3 Node 03 node03 x240
381. lex System p270 Compute Node Planning and Implementation Guide 3 3 I O modules The I O modules provide external connectivity and internal connectivity to the nodes in the chassis These modules are scalable in terms of the number of internal and external ports that can be enabled how these ports can be used to aggregate bandwidth and create virtual switches within a physical switch The number of internal and external physical ports that are available exceeds previous generations of products These additional ports can be scaled or enabled as requirements grow and more capability can be introduced The Enterprise Chassis can accommodate a total of four I O modules which are installed in a vertical orientation into the rear of the chassis as shown in Figure 3 3 I O module I O module I O module I O module bay 1 bay 2 bay 4 i e JN J N o T TT os E 4 fr J vo amp
382. liance banner). A Terminal Console message states that a terminal console session to virtual server ITSO-VIOS has been started in a separate window; accept any Java security warnings.

Figure 7-59 Validating with the FSM

If SOL is not disabled, you receive the error that is shown in Figure 7-60 when you are trying to open a virtual terminal console to the first virtual server on a Power compute node. For more information about disabling SOL, see "Serial Over LAN" on page 217. The terminal console in Figure 7-60 shows that authentication with the management console (9.27.20.199) and the connection succeed, but the open fails: "The session may already be open on another management console. The server may not be ready to accept connections. Attempts to open the session failed. Please close the terminal and retry the open at a later time. If the problem persists, please contact IBM support."

Figure 7-60 Console open failure on virtual server ID 1 with SOL enabled

Opening a virtual terminal console session with the FSM CLI
The FSM CLI alternative to open a virtual terminal session is the vtmenu command.
Note: The FSM vtmenu can be used only for VIOS, AIX
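A hedged sketch of a CLI terminal session follows. The vtmenu command is interactive, as described above; mkvt and rmvt are the HMC equivalents for opening and force-closing a console for a specific virtual server ID, and similar commands are generally available from the FSM CLI. The managed system name and IDs are examples only:

$ vtmenu                                    # interactive: choose the managed system, then the partition
$ mkvt -id 2 -m Server-7954-24X-SN107782B   # open a console directly to virtual server ID 2
$ rmvt -id 2 -m Server-7954-24X-SN107782B   # close a console session that is stuck open elsewhere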
383. links between end devices and FC switches, the end device logs in to the fabric (FLOGI). The device exchanges information with the switch by using well-known addresses over its direct link to the switch. In an FCoE network, with potentially intermediate Ethernet links and possibly switches, these login functions become more complicated. They are handled by the FIP.
FIP allows end devices (for example, a p260 host with a CN4058 8-port 10Gb Converged Adapter) to discover FCFs and the VLANs with which to connect to them. Then, FIP allows the device to establish those connections, which are the VN_Port to VF_Port virtual links. FIP includes the following high-level steps:
1. The end device (or compute node) broadcasts a FIP VLAN request to the CN4093 and any other FCF in the Ethernet network.
2. FCFs that have VF_Ports reply with a VLAN notification frame that lists VLANs that the end device (or compute node) can use.
3. The compute node discovers the FCFs that it can log in to by broadcasting a Discovery Solicitation frame in the discovered VLAN.
4. FCFs respond with Discover Advertisement frames. These frames contain such information as an FCF priority and the identifier of the fabric to which the FCF connects.
5. The end device determines which FCF it wants to connect to for fabric login and sends a FIP Fabric Login (FLOGI) request to the FCF to log in to the fabric.
6. Th
384. lized 3 days 5 days Function delivered One node FSM Configuration Discovery Inventory Review Internal Storage configuration Basic Network Integration using pre configured switches factory default No external SAN integration No FCoE changes No Virtualization No Cloud Skills Transfer Basic virtualization VMware KVM and VMControl No external SAN Integration No Cloud Up to four nodes Not Included Included Included included Advanced virtualization Not Not Included Included Server pools or VMware included included cluster configured VMware or VMControl No external SAN integration No FCoE Config Changes No Cloud Configure SmartCloud Not Not Not Included included included Entry included Basic External network 48 IBM Flex System p270 Compute Node Planning and Implementation Guide integration No FCoE Config changes No external SAN integration First chassis is configured with 13 nodes Included Included Included Included No add on Configure up to 14 nodes within one chassis Up to two virtualization engines ESXi KVM or PowerVM Configure up to 14 nodes within one chassis Up to two virtualization engines ESXi KVM or PowerVM Configure up to 14 nodes within one chassis Up to two virtualization engines ESXi KVM or PowerVM In addition to the offerings that are listed in Table 2 18 on page 48 two other services offerings are now available for PureFlex
385. lled system Network card configuration 1 choose whether the network MAC address F D7 40 09 04 02 interface associated to this Configuration type network card will be disabled IP 934217169 configured automatically GHCP or manually 2 if manually set Netmask 255 255 254 0 the IP address netmask and the Gateway Optional 9 42 170 1 gateway IP for it Figure 12 16 Network settings Save the configuration Chapter 12 Installing Linux 569 21 In the General settings page see Figure 12 17 configure the keyboard mouse localization time zone and root password and then click Next IBM Installation Toolkit for PowerLinux General settings for the installed system Input peripherals Keyboard English JS we English US O Y US Localization a n US System security Confirm root password jeeesssss eeue _ _ Red Hat specific RHN activation key Optional O O uit Prev Met Specify the settings for the instaled system Keyboard Select the language setting for the keyboard Language Select the language to be used on the installed system Timezone Select the timezone to be Used for the time and date settings on the installed system Check Use UTC if you want to use Universal Time Coordinated UTC which is the international time standard Root password Enter the root password for the installed system The password may be amv egg tend contain anv
386. locations Cooling zone 2 Cooling zone 1 Figure 3 8 Enterprise Chassis node cooling zones and fan module locations When a node is not inserted in a bay an airflow damper closes in the midplane to prevent air from being drawn through the unpopulated bay By inserting a node into a bay the damper is opened thus allowing cooling of the node in that bay Table 3 6 shows the relationship between the number of fan modules and the number of nodes supported Table 3 6 Fan module options and numbers of supported nodes Fan module option Total number of fan Total number of nodes modules supported Second option Chassis area The chassis area for the node is effectively one large chamber The nodes can be placed in any slot however preferred practices indicate that the nodes must be placed as close together as possible to be inline with the fan modules Chapter 3 Introduction to IBM Flex System 71 3 6 2 Switch and Chassis Management Module cooling There are two other cooling zones for the I O switch bays These zones zones 3 and 4 are on the right and left side of the bays as viewed from the rear of the chassis Cooling zones 3 and 4 are serviced by 40 mm fan modules that are included in the base configuration and cool the four available I O switch bays Upon hot swap removal of a 40 mm fan module a back flow damper in the fan bay closes The backflow damper prevents hot air from entering the system from the rear of t
387. luded gt Supports the networking infrastructure that you have today including Ethernet Fibre Channel and InfiniBand gt Offers industry leading performance with 1 Gb 10 Gb and 40 Gb Ethernet 8 Gb and 16 Gb Fibre Channel FCoE RoCE and QDR FDR InfiniBand gt Provides pay as you grow scalability so you can add ports and bandwidth when needed 1 4 8 Infrastructure The IBM Flex System Enterprise Chassis is the foundation of the offering which supports intelligent workload deployment and management for maximum business agility The 14 node 10 U chassis delivers high performance connectivity for your integrated compute storage networking and management resources The chassis is designed to support multiple generations of technology and offers independently scalable resource pools for higher usage and lower cost per workload The following features are available gt Achassis map that provides multiple view overlays to track health firmware inventory and environmental metrics gt Configuration management for a repeatable setup of compute network and storage devices gt Remote presence applications for remote access to compute nodes with single sign on gt Quick search that provides results as you type Beyond the physical world of inventory configuration and monitoring IBM Flex System Manager enables the following virtualization and workload optimization for a new class of computing gt Resource usage
388. luded factory integration and lab services optimization Revised in the fourth quarter of 2013 IBM PureFlex System now consolidates the three previous offerings Express Standard and Enterprise into two simplified pre integrated offerings Express and Enterprise that support the latest compute storage and networking requirements Clients can select from either of these offerings that help simplify ordering and configuration As a result PureFlex System helps cut the cost time and complexity of system deployments which reduces the time to gain real value Enhancements include support for the latest compute nodes I O modules and I O adapters with the latest release of software such as IBM SmartCloud Entry with the latest Flex System Manager release PureFlex 4Q 2013 includes the following enhancements New PureFlex Express New PureFlex Enterprise New Rack offerings for Express 25U 42U or none New compute nodes x222 p270 p460 New networking support 10 GbE Converged New SmartCloud Entry v3 2 offering YYYY YV Y 16 IBM Flex System p270 Compute Node Planning and Implementation Guide The IBM PureFlex System includes the following offerings gt Express An infrastructure system for small and mid size businesses This is the most cost effective entry point with choice and flexibility to upgrade to higher function For more information see 2 4 IBM PureFlex System Express on page 22 Enterprise An infra
389. mation window by pressing F10 to continue Install Licensed Internal Code LIC Disk selected to write the Licensed Internal Code to Serial Number Type Model I O Bus Controller Device YGEYXXFKUJWE 6B22 050 0 1 0 Select one of the following Restore Licensed Internal Code Install Licensed Internal Code and Initialize system Install Licensed Internal Code and Recover Configuration Install Licensed Internal Code and Restore Disk Unit Data Install Licensed Internal Code and Upgrade Load Source Om A UW Ne Selection 2 F3 Exit F12 Cancel Figure 11 17 Installing LIC and Initialize system menu Chapter 11 Installing IBMi 517 518 8 The Initialize Disk status window opens that shows elapsed time After the initialization is complete an LIC installation status window opens as shown in Figure 11 18 Install Licensed Internal Code Status Install of the Licensed Internal Code in progress Percent 25 complete lanl elastase lalla alll altel attetatattalatatetalale Elapsed time in minutes 0 5 Please wait Wait for next display or press F16 for DST main menu Figure 11 18 License Internal Code installation status IBM Flex System p270 Compute Node Planning and Implementation Guide 9 The IPL or Install the System window opens as shown in Figure 11 19 You must mount the next Optical image on the virtual Optical device You are not prompted for the next device until later in the installation proc
390. mber that is on the installation media Confirm your applicable language feature as shown in Figure 11 21 Press Enter to continue installing the operating system Select a Language Group System E1277E3B Note The language feature shown is the language feature installed on the system Type choice press Enter Language feature 2 8s lt 4 8 i ws 2924 F3 Exit F12 Cancel Figure 11 21 Language feature selection Chapter 11 Installing IBMi 521 12 The system performs an LIC initial program load before the operating system installation as shown in Figure 11 22 This process takes approximately 5 minutes to complete Status displays appear on the console You do not need to respond to any of these displays Licensed Internal Code IPL in Progress 07 02 13 09 14 33 IPL Type es see sss Attended Start date and time 07 02 13 09 14 33 Previous systemend Normal Current step total 1 16 Reference code detail C6004050 IPL step Time Elapsed Time Remaining gt Storage Management Recovery 00 00 00 Start LIC Log Main Storage Dump Recovery Trace Table Initialization Context Rebuild Item Current Total Sub Item Identifier Current Total Figure 11 22 License Internal Code IPL The following initial program load IPL steps are shown in the IPL Step in Progress display Authority Recovery Journal Recovery Database Recovery
391. ment For more information about working in the System 36 environment V4R5 or earlier see System 36 Environment Programming SC41 4730 Press Enter 19 The message Your password has expired might appear Press Enter The Change Password window opens Change the password from QSECOFR to your own choice First enter the old password QSECOFR Then enter the new password of your choice Enter the new password again as verification 526 IBM Flex System p270 Compute Node Planning and Implementation Guide 20 The Work with Software Agreements window opens as shown in Figure 11 28 Select to display the software agreements for MCHCOD which includes LIC and the IBM i operating system 5770SS1 Read and accept these agreements If the software agreements are declined you are given the choice to power down the system or return and accept the agreements Press Enter Work with Software Agreements System E1277E3B Currently selected language English Type options press Enter 5 Display Licensed Product Product Accept Opt Program Option Release Status MCHCOD No 5770SS1 BASE VZR1MO No Bottom F3 Exit Fl1l Display description F12 Cancel F13 Select language F19 Display trademarks F22 Restore software agreements Figure 11 28 Work with Software Agreements menu Installation of the base operating system is now complete and installation of Licensed Programs LICPGMs can now be started Chapter 11 Installing IBM
392. mentation Guide 2 4 7 Rack cabinet The Express configuration includes the options of being shipped with or without a rack Rack options include 25 U or 42 U size Table 2 9 lists the major components of the rack and options Table 2 9 Components of the rack AAS feature XCC feature Description code code 42U 7953 94X IBM 42 U 1100 mm Enterprise V2 Dynamic Rack EU21 PureFlex door ECO1 Gray Door FC03 Side Cover Kit Black EC02 Rear Door Black flat 25U 7014 25 IBM S2 25U Standard Rack ERGA PureFlex door Gray Door No Rack 4650 No Rack specify 2 4 8 Available software for Power Systems compute nodes In this section we describe the software that is available for Power Systems compute nodes VIOS AIX and IBM i VIOS are preinstalled on each Power Systems compute node with a primary operating system on the primary node of the PureFlex Express configuration The primary OS can be one of the following options gt AIX v6 1 gt AIX v7 1 gt IBM i v7 1 Chapter 2 IBM PureFlex System 33 RHEL and SUSE Linux on Power VIOS is preinstalled on each Linux on Power selected compute node for the virtualization layer Client operating systems such as Red Hat Enterprise Linux RHEL and SUSE Linux Enterprise Server SLES can be ordered with the PureFlex Express configuration but they are not preinstalled The following Linux on Power versions are available gt RHEL v5U9 POWER7 gt RHEL v6U4 POWER7 or PO
393. most everyday tasks System Management Services The logical partition boots to the System Management Services SMS menu Chapter 8 Virtualization 9395 396 Diagnostic with default boot list DIAG_DEFAULT The logical partition boots that uses the default boot list that is stored in the system firmware This mode is normally used to boot client diagnostics from the CD ROM drive Use this boot mode to run stand alone diagnostic tests Diagnostic with stored boot list DIAG_STORED The logical partition performs a service mode boot that uses the service mode boot list that is saved in NVRAM Use this boot mode to run online diagnostic tests Open Firmware OK prompt OPEN_FIRMWARE The logical partition boots to the open firmware prompt This option is used by service personnel to obtain more debug information After you make your selections in this window click Next to continue Profile Summary window The Profile Summary is that last window of the wizard as shown in Figure 8 45 on page 397 Review the partition profile selections and if changes are needed click Back to move to the appropriate window to make changes If no changes are needed select Finish to create the VIOS partition IBM Flex System p270 Compute Node Planning and Implementation Guide Profile Summary Create Partition w Partition Profile This is a summary of the partition and profile Click w Processors Finish to create the parti
394. mperature and calculated exhaust heat index temperature This information helps identify data center hot spots that require attention gt Soft power capping Soft power capping extends the allowed energy capping range further beyond a region that can be guaranteed in all configurations and conditions 120 IBM Flex System p270 Compute Node Planning and Implementation Guide When an energy management goal is to meet a particular consumption limit soft power capping is the mechanism to use Processor core nap The IBM POWER7 processor uses a low power mode called nap that stops processor execution when there is no work to be done by that processor core The latency of exiting nap falls within a partition dispatch context switch such that the IBM POWER Hypervisor uses it as a general purpose idle state When the operating system detects that a processor thread is idle it yields control of a hardware thread to the POWER Hypervisor The POWER Hypervisor immediately puts the thread into nap mode Nap mode allows the hardware to clock off most of the circuits inside the processor core Reducing active energy consumption by turning off the clocks allows the temperature to fall which further reduces leakage static power of the circuits that causes a cumulative effect Unlicensed cores are kept in core nap mode until they are licensed and they return to core nap mode when unlicensed again Processor core sleep mode To save even more e
395. multiple manufacturers gt Install only supported DIMMs as described at this IBM ServerProven website http www ibm com servers eserver serverproven compat us For the p270 Table 4 8 shows the required placement of memory DIMMs depending on the number of DIMMs that are installed Processor 1 Table 4 8 DIMM placement p270 a E B A E G Using mixed DIMM sizes All installed memory DIMMs do not have to be the same size but it is a preferred practice that the following groups of DIMMs be kept the same size gt Slots 1 4 gt Slots5 8 gt Slots 9 12 gt Slots 13 16 Chapter 4 Product information and technology 95 4 7 Active Memory Expansion 96 The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory Applicable to AIX V6 1 Technology Level 4 TL4 or later this innovative compression and decompression of memory content that uses processor cycles allows memory expansion of up to 100 By using this configuration an AIX V6 1 TL4 or later partition can do more work with the same physical amount of memory A server also can run more partitions and do more work with the same physical amount of memory Active Memory Expansion uses processor resources to compress and extract memory contents The trade off of memory capacity for processor cycles can be an excellent choice but t
396. n. The fans are populated depending on the nodes that are installed. To support the base configuration and up to four standard-width nodes (or two double-wide nodes), a chassis ships with four 80 mm fans and two 40 mm fans installed. The minimum configuration of 80 mm fans is four, which provides cooling for up to four standard-width nodes, as shown in Figure 5-9. This configuration is the base configuration.

Figure 5-9 Four 80 mm fan modules support a maximum of four standard-width nodes

Six installed 80 mm fans typically support four more standard-width nodes within the chassis, to a maximum of eight, as shown in Figure 5-10.

Figure 5-10 Six 80 mm fan modules support a maximum of eight standard-width nodes

To cool more than eight standard-width or more than four double-wide nodes, all fan positions must be populated, as shown in Figure 5-11 (node bays and cooling zones, front and rear views).
397. n between VN_Port 0e:fc:00:01:0d:00 and FCF 74:99:75:70:41:c4 has been established. The FCF component is complete.
To verify that our configuration is correct, we can examine the FCoE database, which shows the Port Worldwide Names (PWWNs) that are to be used for zoning. Example 6-4 shows the output of the show fcoe database ISCLI command, where connections are established between the V7000 Storage Node on ports INTA13 and INTA14 and Compute Node 8 in bay 8, where FCoE also is configured and a connection is established from port INTA8.

Example 6-4 Displaying the FCoE database entries
Router(config-vlan)# show fcoe database
1002  010c01  10:00:5c:78:24:52:44:43  0e:fc:00:01:0c:01  INTA8
1002  010d00  50:05:07:68:05:08:03:71  0e:fc:00:01:0d:00  INTA14
1002  010c00  50:05:07:68:05:08:03:70  0e:fc:00:01:0c:00  INTA13
Total number of entries: 3

We can also confirm connectivity from the V7000 Storage Node by reviewing the System Details option from the V7000 GUI, or by running lsportfc from the CLI. Figure 6-6 shows Canister 1 of the V7000, where the 10 Gb Ethernet port is active, and details the PWWN (WWPN).
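The same check can be made from the V7000 CLI over SSH by using lsportfc, as the text notes. The following sketch is illustrative only; the column set is abbreviated and the IDs shown are examples:

IBM_Flex_System_V7000:superuser> lsportfc
id  fc_io_port_id  port_id  type      port_speed  node_name  WWPN              status
0   1              1        fc        8Gb         node1      5005076805080370  active
2   3              3        ethernet  10Gb        node1      5005076805080371  active

An active ethernet-type entry confirms that the FCoE port has logged in, and its WWPN is the value to use when you are zoning.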
398. n disk The following supported methods for AIX installation are available e New and Complete Overwrite Use this method when you are installing a new system or reinstalling one that must be erased e Migration installation Use this method when you are upgrading an older version of AIX AIX 5L V5 3 or AIX V6 1 to a newer version such as AIX V7 1 This option retains all of your configuration settings The tmp directory is erased during installation Chapter 10 Installing VIOS and AIX 493 e Preservation installation This method is similar to the New and Complete Overwrite option except that it retains only the home directory and other user files This option overwrites the file systems Option 2 Primary Language Environment Settings AFTER Install After you select the correct type of installation choose the language for the installation a keyboard and cultural convention Option 3 Security model You can use this option to enable the trusted computer database and other security options as shown in Figure 10 4 Security Models Type the number of your choice and press Enter 1 Trusted AIX 2 Other Security Options Trusted AIX and Standard Security options vary based on choices LSPP SbD CAP CCEVAL TCB gt gt gt 0 Continue to more software options 88 Help 99 Previous Menu gt gt gt Choice 0 Figure 10 4 Security options selection Option 4 More Options Software Install
399. n each compute node It provides system monitoring event recording and alerts and manages the chassis its devices and the compute nodes The chassis supports up to two CMMs If one CMM fails the second CMM if present can detect its inactivity self activate and take control of the system without any disruption The CMM is central to the management of the chassis The CMMs are inserted in the back of the chassis and are vertically oriented When you are looking at the back of the chassis the CMM bays are on the far right side as shown in Figure 7 3 CMM bay 1 is the lower position and CMM 2 is the upper position Figure 7 3 Chassis Management Module bays IBM Flex System p270 Compute Node Planning and Implementation Guide Through an embedded firmware stack the CMM implements functions to monitor control and provide external user interfaces to manage all chassis resources You can use the CMM to perform the following functions gt Define login IDs and passwords gt Configure security settings such as data encryption and user account security gt Select recipients for alert notification of specific events gt Monitor the status of the compute nodes and other components gt Find chassis component information gt Discover other chassis in the network and enable access to them gt Control the chassis compute nodes and other components gt Access the I O modules t
400. n resources of an lpp_source or mksysb and a corresponding SPOT are also required. The NIM Base Operating System (BOS) installation options are configured for the AIX machine resource by using the proper AIX lpp_source or mksysb and SPOT resources. The virtual server (or LPAR) is started and the SMS is accessed to configure the TCP/IP parameters for the AIX and NIM server. The installation boot order is set for the network device that was defined in step 3. After you exit to normal boot, a window opens that shows the network parameters for BOOTP, as shown in Figure 9-25 on page 462.
6. A window opens that shows the AIX kernel loading. You are prompted to select the installation language (English by default), as shown in Figure 10-1:

>>> 1 Type 1 and press Enter to have English during install.
    88 Help ?
>>> Choice [1]:

Figure 10-1 Installation language selection

8. After the language is selected, the installation options are displayed, as shown in Figure 10-2:

Welcome to Base Operating System Installation and Maintenance
Type the number of your choice and press Enter. Choice is indicated by >>>.
>>> 1 Start Install Now with Default Settings
    2 Change/Show Installation Settings and Install
    3 Start Maintenance Mode for System Recovery
    4 Configure Network Disks (iSCSI)
    5 Select Storage Adapters
    88 Help ?
    99 Previous Menu
>>> Choice [1]:
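On the NIM master, the machine is usually prepared for this network boot from the command line before the client is started into SMS. The following is a hedged sketch; the resource and machine names are hypothetical and must match your own NIM environment:

# on the NIM master (names are examples only)
nim -o bos_inst -a source=rte -a lpp_source=lpp_aix71 -a spot=spot_aix71 \
    -a accept_licenses=yes -a boot_client=no p270_aix_lpar
lsnim -l p270_aix_lpar      # Cstate should show that a BOS installation has been enabled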
401. n stops the daemon from running and prevents any routing information from being exported from this HMC.
Systems Management displays tasks to manage servers, logical partitions, and frames. Use these tasks to set up, configure, view status, troubleshoot, and apply solutions for servers. This section describes the tasks to manage a server.
Servers
The Servers node represents the servers that are managed by this HMC. To add servers, complete the following steps.
Before you begin: The Power compute node must be discovered by the CMM, and the IP address for the FSP must be on the same subnet as the CMM. These steps are described in 7.7.2, "Connecting a Power compute node to the CMM" on page 208, and "Component IP configuration" on page 211.
1. Select Systems Management -> Servers in the navigation pane.
2. Click Connections -> Add Managed Systems in the work pane, as shown in Figure 7-99 (the HMC Systems Management > Servers view, which initially lists no servers: Total 0, Filtered 0, Selected 0).
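The same connection can also be added from the HMC command line. This is a hedged sketch only; the exact flags can vary by HMC level (check the mksysconn documentation on your HMC), and the FSP address is a placeholder:

hscroot@itsoHMC1:~> mksysconn -o add --ip <FSP IP address>
hscroot@itsoHMC1:~> lssysconn -r all     # verify the connection state of the new managed system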
402. n the right side IBM Flex System Manager Welcome USERID Problems o off Compliance o o Eo TT Manage Powe W _ Configure A Select Ac Chassis blan Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources common EL Hosts eae ea eee eaa Search the table Search Virtual Servers Select Name Access State Reference Code 3 Proble ee HW server 7954 24x 8110778 Mok Started OK Power Units Figure 7 36 Manage Power Systems Resources view SDMC similarities Readers who are familiar with the Systems Director Management Console SDMC recognize this part of the FSM GUI because the layout and usage is similar The Manage Power Systems Resources view can automatically be opened and added to the main row of tabs for a User ID each time you log in as shown in Figure 7 37 Open the drop down menu in the upper right corner of the FSM browser sections and select Add to My Startup Pages and follow the prompts IBM Flex System Manager Welcome USERID Problems i if Compliance i p Help Home Chassis hlan Manage Powe x Select Action Close Page Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources Common Tasks Figure 7 37 Adding view to start up pages 230 IBM Flex System p270 Compute Node Planning and Implementation Guide As shown in Figure 7 3
403. nable Serial Over LAN and Enable Local Power Control options, with Apply and Cancel buttons.

Figure 7-25 Clearing the Serial Over LAN option for a compute node

4. Click Apply.
The change takes effect immediately.

7.7.4 Service and Support option

The Service and Support option is used for reviewing detected problems, troubleshooting, opening a service request, and updating chassis settings. The Service and Support menu in the CMM (Figure 7-26, "Service and Support tab") sits alongside the Chassis Management and Mgt Module Management menus and has four menu items:
- Problems: Shows a grid of detected problems that are addressed by IBM Support if you have enabled service and support to report problems. You can open a service request directly to IBM.
- Settings: Use this menu item to configure your system to monitor and report service events: enter contact information, country, proxy access, and so on.
- Advanced Status: This menu item provides advanced service information and more service tasks (such as BIST, connectivity status, service data, and service reset). You might be directed by IBM Support staff to review or perform tasks in this section.
- Download Service Data: By using this menu item, you can obtain a compressed file of relevant service data, send management module data to an email recipient (SMTP must be set up first), and download
404. nable businesses to rapidly deploy IT services ata reduced cost Moreover they are built on decades of expertise enabling deep integration and central management of a comprehensive open choice infrastructure system and dramatically cutting down on the skills and training that is required for management and deployment IBM PureFlex Systems combine advanced IBM hardware and software with patterns of expertise and integrates them into optimized configurations that are simple to acquire and deploy which helps you to get faster time to value for your solution Chapter 1 Introduction 3 1 2 Choosing an IBM PureFlex System or IBM Flex System If you are looking to build your own system or upgrade an existing blade server installation you can make use of an IBM Flex System which is a build to order BTO solution that is designed to help you go beyond blade servers These offerings include the following features gt IBM PureFlex System The IBM PureFlex System is a pre configured and pre integrated IT infrastructure solution that is available in three configurations with x86 or POWER processor based compute nodes More configuration options are available to meet your precise IT infrastructure needs If you want a pre configured pre integrated infrastructure with integrated management and cloud capabilities that is factory tuned from IBM IBM PureFlex System is the answer IBM Flex System Custom build infrastructure to your spe
405. nally each switch has a controller In each case the management controller provides an access point for the next level of system managers and a direct user interface 3 4 3 Chassis Management Module The Chassis Management Module CMM is a hot swap module that is central to the management of the chassis and is required in each chassis The CMM automatically detects any installed modules in the chassis and stores vital product data VPD from the modules Chapter 3 Introduction to IBM Flex System 61 The CMM also acts as an aggregation point for the chassis nodes and switches including enabling all of the management communications by Ethernet connection EnergyScale functions of the POWER7 and POWER7 processor chips are managed by the CMM The CMM is also the key component that enables the internal management network The CMM has a multiport L2 1 Gb Ethernet switch with dedicated links to all 14 node bays the four switch bays and the optional second CMM The second optional CMM provides redundancy in an active standby mode by using the same internal connections as the primary CMM and is aware of all activity of the primary CMM through the trunk link between the two CMMs This configuration ensures that the backup CMM is ready to take over in a failover situation 3 4 4 IBM Flex System Manager The next tier in the management stack is the IBM Flex System Manager FSM management appliance The FSM a dedicated special purpose
406. nchronization update Chapter 8 Virtualization 411 14 Click details to display the Resource Synchronization Details window as shown in Figure 8 63 This example indicates that all of the changes that were made were synchronized with exception of processor modifications Those changes are pending and require a restart to update Resource Synchronization Details itsoVIOS6A 1 Changing resource allocations while a partition is active may result in pending and current resource values not being synchronized The most common reason for this is that certain resource changes may take some time to synchronize particularly memory changes Synchronizing these values requires that the partition communication state be active The resource types are listed below along with their current state If the resource is not synchronized then details about the latest synchronization commands run will be displayed Memory Resource synchronized Yes Memory Weight Resource synchronized Yes Memory Entitlement Resource synchronized Yes Processing Units Resource synchronized Resource will not synchronize because the pending and current minimum or maximum values are ace not synchronized Restart your partition in order to complete the synchronization Latest commands run on partition Time Return Code Command Output Synchronization successful code 0 10 23 15 6 02 48 PM O Processors Resource synchronized jit Resource
407. nd dynamic VM placement that is based on usage energy hardware predictive failure alerts or host failures Chapter 1 Introduction 7 For more information about the FSM see the following resources gt The IBM Flex System Manager Product Guide http www redbooks ibm com abstracts tips0862 html gt The IBM Flex System topic on the Flex amp PureFlex Information Center http publib boulder ibm com infocenter flexsys information topic c om ibm acc 8731 doc product_page html Figure 1 3 shows the IBM Flex System Manager Figure 1 3 The IBM Flex System Manager 1 4 3 Power Systems virtualization management FSM HMC and IVM The IBM Flex System Manager is the preferred appliance for managing an IBM Flex System environment with its high end management virtualization and cloud capabilities However if a Hardware Management Console HMC or Integrated Virtualization Manager IVM is more convenient for the user to manage Power Systems virtualization these management interfaces are supported for Power Systems compute nodes IVM must be activated in VIOS on each compute node to use virtualization capabilities After you configure an IP address on VIOS you open a browser window to that IP address and the IVM user interface loads If advanced capabilities are required such as Advanced Memory Expansion AME or Multiple Shared Processor Pools an FSM or HMC is required For more information about management capabilit
408. ne core must be enabled in the compute node For example with the EPRE two socket four chip 24 core Compute Node you can unconfigure a maximum of 23 cores leaving one core configured The field core override option specifies the number of functional cores that are active in the compute node By using the field core override option you can increase or decrease the number of active processor cores in the compute node The compute node firmware sets the number of active processor cores to the entered value The value takes effect when the compute node is rebooted The field core override value can be changed only when the compute node is powered off The advanced system management interface ASMI is used to change the number of functional override cores in the compute node For more information see this website http publib boulder ibm com infocenter flexsys information topic com ibm acc psm hosts doc dpsm managing hosts_launch_asm html For more information about the field core override feature see this website http publib boulder ibm com infocenter powersys v3r1m5 topic p hby fi eldcore htm For more information see this website http publib boulder ibm com infocenter powersys v3r1m5 topic p 7hby vi ewprocconfig htm Chapter 4 Product information and technology 83 System maintenance The configuration information about this feature is stored in the anchor card see 4 12 Anchor card on page 124 and the system board
409. nergy the POWER7 processor has a lower power mode that is referred to as sleep Before a core and its associated private L2 cache enter sleep mode the cache is flushed transition look aside buffers TLB are invalidated and the hardware clock is turned off in the core and the cache Voltage is reduced to minimize leakage current Processor cores that are inactive in the system such as license deactivated cores are kept in sleep mode Sleep mode saves about 80 power consumption in the processor core and its associated private L2 cache Processor chip winkle mode The most amount of energy can be saved when a whole POWER7 chipset enters the winkle mode In this mode the entire chiplet is turned off including the L3 cache This can save more than 95 power consumption Processor folding Processor folding is a consolidation technique that dynamically adjusts over the short term the number of processors available for dispatch to match the number of processors demanded by the workload As the workload increases the number of processors made available increases As the workload decreases the number of processors made available decreases This dynamic reallocation of processor cores to task execution optimizes energy efficiency of the entire system as unused processors remain in low power idle states longer Chapter 4 Product information and technology 121 4 11 2 Power Capping and Power Saving options and capabilities The IBM Flex S
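On AIX, processor folding behavior is visible and tunable through the schedo command. The following is a hedged sketch; confirm the tunable names and current values with schedo -L on your own AIX level before changing anything:

# schedo -L vpm_xvcpus           # display the folding tunable: extra virtual CPUs kept unfolded
# schedo -o vpm_xvcpus=2         # keep two additional virtual processors unfolded beyond the computed need
# schedo -o vpm_fold_policy=1    # enable folding for shared-processor partitions

In most environments the defaults are appropriate; these tunables are normally changed only when performance analysis shows that folding is interacting badly with a specific workload.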
410. net. A total of three are defined: two for use in SEA adapters and the third as a control channel for a future dual VIOS environment. A virtual SCSI (vSCSI) adapter is also defined to support a client LPAR. Complete the following steps to create the virtual Ethernet and virtual SCSI adapters:
1. From the Virtual Adapters window, select Actions -> Create Ethernet Adapter to create the first virtual Ethernet adapter, as shown in Figure 8-35 on page 385.
411. network for storage traffic FC offers relatively high speed low latency and more importantly built in back pressure mechanisms to provide lossless behavior which is critical for storage subsystems so that data packets are not dropped during periods of network congestion Until recently transmission speeds from FC equipment were faster than that of Ethernet where FC used speeds of 2 Gbps 4 Gbps 8 Gbps and 16 Gbps Ethernet offered 100 Mbps or 1 Gbps However with improved and faster Ethernet equipment 10 Gbps is becoming more widely available and used for host server connections Higher speeds of 40 Gbps Ethernet are now available and a 100 Gbps standard was ratified and equipment will become common soon With an enhancement to Ethernet known as Data Center Bridging DCB this can now perform lossless transmission on Ethernet based networks which means that FCP can now use this physical layer and meet or exceed the speeds that are available on traditional FC SANs With these advancements momentum is growing in converged networking of FC and traditional Ethernet data traffic With it comes the benefits of a reduction in complexity of managing two disparate types of networks improved usage hardware consolidation and lower cost of ownership By using a single infrastructure for both networks the costs of procuring installing managing and operating the data center infrastructure can be lowered The improved speeds and capabilit
412. nfigure TCP IP menu to select configuration tasks Before you start to configure your system complete the following steps to review the menu 1 On the command line enter G0 TCPADM and press Enter to access the TCP IP Administration menu 2 Specify Option 1 Configure TCP IP and press Enter to access the Configure TCP IP menu CFGTCP Note Ensure that the user profile you are performing this task under has TOSYSCFG special authority 11 9 1 Configuring a line description You must create an Ethernet line description as the communication object for TCP IP To configure a line description for an Ethernet line complete the following steps 1 On the command line enter the Create Line Description command CRTLINETH and press F4 Prompt to access the Create Line Desc Ethernet menu 2 At the Line description prompt specify a line name use any name 3 At the Resource name prompt specify the resource name 4 Press Enter to see a list of more parameters Specify values for any other parameters that you want to change then press Enter to submit Chapter 11 Installing IBMi 547 11 9 2 Turning on IP datagram forwarding lf you want the IP packets to be forwarded among different subnets you must turn on IP datagram forwarding To turn on IP datagram forwarding complete the following steps 1 From the command line enter the Configure TCP IP command CFGTCP and press Enter to access the Configure TCP IP menu 2
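Each of these prompted menu steps has a direct CL equivalent that can be entered on the command line instead. The following is a hedged sketch; the line name and resource name are examples only, and the resource name must match the communications resource reported on your partition (WRKHDWRSC TYPE(*CMN)):

CRTLINETH LIND(ETHLINE) RSRCNAME(CMN03) LINESPEED(*AUTO) DUPLEX(*AUTO)
VRYCFG CFGOBJ(ETHLINE) CFGTYPE(*LIN) STATUS(*ON)   /* vary the new line description on */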
413. nfigured to the same mode for example EXT11 EXT12 can be configured to FC while EXT13 EXT14 can be configured to Ethernet mode Table 6 4 lists the supported transceivers for each mode Note The Omni ports in the CN4093 require different transceivers for Ethernet mode to FC mode and operating at different speeds Table 6 4 Omni port mode specific transceivers Feature code Supported Omni Description port mode EB28 10 Gb Ethernet IBM SFP SR Transceiver ECB9 10 Gb Ethernet IBM SFP LR Transceiver 3382 10 Gb Ethernet 10 Gbase SR SFP MM Fiber Transceiver 8 4 Gb FC IBM 8 Gb SFP Software Optical Transceiver 6 2 1 FCoE VLANs Ports that are used to connect by using FCoE must be isolated into a separate VLAN on the CN4093 10Gb Converged Scalable Switch When defined the VLAN must have a VLAN number and the following components gt Port Membership Named ports as described in Table 6 3 on page 173 The VLAN must include at least one FC defined port paired FC Omni ports can be in a separate FC VLAN gt Switch Role Full switch fabric or NPV mode gt Default VLAN number for FCoE 1002 The switch mode for the FCoE VLAN determines whether it has the switching element thus FCF capability or must pass all data to an external SAN switch for FCF services thus NPV capability For a compute node to connect to internal storage devices such as the V7000 Storage Node the VLAN must have FCF enabled Because all storage tr
414. ng is used verify whether it is supported by the operating system and applications Important To avoid possible issues when you replace a failed switch module do not use automatic failoback for NIC teaming A newly installed switch module has no configuration data and it can cause service disruption Virtual Link Aggregation Groups In many data center environments downstream switches connect to upstream devices which consolidate traffic as shown in Figure 5 2 on page 146 Chapter 5 Planning 145 ae XK XK X STP blocks x implicit loops f f fed a ISL ae a m eee Layer Peers Links remain VLAGs LOA Layer Servers Figure 5 2 Typical switching layers with STP versus VLAG A switch in the access layer might be connected to more than one switch in the aggregation layer to provide network redundancy Typically the Spanning Tree Protocol is used to prevent broadcast loops which block redundant uplink paths This setup has the unwanted consequence of reducing the available bandwidth between the layers by as much as 50 In addition STP might be slow to resolve topology changes that occur during a link failure which can result in considerable MAC address flooding By using Virtual Link Aggregation Groups VLAGs the redundant uplinks remain active and use all the available bandwidth By using the VLAG feature the paired VLAG peers appear to the downstream device as a single virtual e
415. ng a command line interface CLI over a Telnet or Secure Shell SSH connection SOL is required to manage Power Systems compute nodes that do not have KVM support or that are managed by IVM SOL provides console redirection for both System Management Services SMS and the server operating system The SOL feature redirects server serial connection data over a LAN without requiring special cabling by routing the data by using the CMM network interface The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the CMM SOL offers the following advantages gt Remote administration without KVM headless servers gt Reduced cabling and no requirement for a serial concentrator gt Standard Telnet SSH interface which eliminates the requirement for special client software The CMM CLI provides access to the text console command prompt on each server through a SOL connection which enables the Power Systems compute nodes to be managed from a remote location 4 11 IBM EnergyScale IBM EnergyScale technology provides functions that help you to understand and dynamically optimize the processor performance versus processor power and system workload and to control IBM Power Systems power and cooling usage Chapter 4 Product information and technology 119 The IBM Flex System CMM uses EnergyScale technology which enables advanced energy management features to conserve power and improve
416. ng information for the Dual VIOS Adapter is shown in Table 4 9 Table 4 9 Dual VIOS Adapter ordering information EC2F IBM Flex System Dual VIOS Adapter For more information about dual VIOS and partitioning see Chapter 8 Virtualization on page 333 Both 2 5 inch HDDs and 1 8 inch SSDs are supported however the use of 2 5 inch drives imposes restrictions on DIMMs that are used as described in the next section The drives attach to the cover of the server as shown in Figure 4 16 The IBM Flex System Dual VIOS Adapter sits below the I O adapter that is installed in I O connector 2 Dual VIOS Adapter installs under I O adapter 2 Drives mounted on the underside of the cover Figure 4 16 The p270 showing the HDD locations on the top cover 4 8 1 Storage configuration impact to memory configuration The type of local drives 2 5 inch HDDs or 1 8 inch SSDs that are used has the following effects on the form factor of your memory DIMMs gt If 2 5 inch HDDs are chosen only Very Low Profile VLP DIMMs can be used because of internal space requirements currently 4 GB and 8 GB sizes Chapter 4 Product information and technology 99 There is not enough room for the 2 5 inch drives to be used with Low Profile LP DIMMs Verify your memory requirements to make sure that it is compatible with the local storage configuration gt The use of 1 8 inch SSDs provides more clearance for the DIMMs and
417. ngs shows Autoconfigured Addresses (IP Address, Prefix Length) and Static IP Addresses (Select, IP Address, Prefix Length), as shown in Figure 7-94, "LAN Adapter Details: IPv6 Settings". The following options are available:
- Autoconfigure options:
  - Autoconfigure IPv6 addresses: If this option is selected, the autoconfiguration process includes creating a link-local address and verifying its uniqueness on a link, and determining what information should be autoconfigured (addresses, other information, or both). In the case of addresses, it is whether they should be obtained through the stateless mechanism, the stateful mechanism, or both.
  - Use DHCPv6 to configure IP settings: This option enables stateful autoconfiguration of IPv6 addresses by using the DHCPv6 protocol.
- Static IP Addresses: As shown in Figure 7-94, clicking Add opens an IPv6 Settings window in which you can specify an IPv6 address and prefix.
Flex System configurations: Although not required, consider assigning an IPv6 address to the HMC adapter. Chassis components at a minimum use a link-local address (LLA) for internal communications. Often, a Flex System configuration is configured similar to a PureFlex IPv6 environment, with an IBM IPv6 prefix of fd8c:215d:178e:c0de and a prefix value of 64. The last half of the address is the last 64 bits of the LLA address.
Firewall Settings tab
The Firewall Settings tab of the LAN Adapter
418. ning columns The order and the number of columns can be tailored to the users preferences Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources E l Hosts eee tee ae ay Search the table search 4 Virtual Servers La Operating Systems Power Units Select Mame Access State Detailed Figure 7 44 Default table view of hosts The table in the content area can be customized for content and order by clicking Columns from the Actions drop down menu as shown in Figure 7 45 Manage Power Systems Resources k Welcome Flex System Manager Version Power Systems Resources Actions Y El gt Hosts E server 7954 24x sN107782 Eeformance Summary Search the table Search f Create Group 2 sinide Sasan Select Name State Detailed F server vos4 24x q SP ax tame 4 Started None Power Units Import Groups Columns Select All Deselect All Show Filter Row Clear All Filters Edit Sart Clear All Sorts Figure 7 45 Selecting the Columns option 234 IBM Flex System p270 Compute Node Planning and Implementation Guide The Columns view opens as shown in Figure 7 46 and allows editing of the columns that were selected for display and the wanted order in the content area table The example shows the Problems heading highlighted This heading can be repositioned in the order of the table by using the
419. nnect from it If you log off your session is ended If you disconnect your session is preserved and your tasks continue to run You can reconnect to the session at a later time and continue working Log off Disconnect Figure 7 87 HMC logoff or disconnect window 266 IBM Flex System p270 Compute Node Planning and Implementation Guide After you disconnect from the session you can reconnect to the session by selecting the session that you want to connect As shown in Figure 7 88 session ID 28 has two running jobs When you reconnect that session the jobs that you were doing previously are displayed You also see that there are three disconnected sessions for the user ID hscroot This is a typical situation when all users log in with the same user ID for example hsroot The disconnect feature provides another reason to use separate user IDs for each user The following disconnected sessions are available to user hscroot You can choose to either reconnect to one of these sessions or start a new session To reconnect select the session to which you wish to reconnect then click Reconnect To create anew session click New Session You can also delete a disconnected session by selecting the session you wish to delete and then clicking Delete If you d rather cancel connecting click Cancel Select Session Id Disconnect Time Creation Time Funning Tasks Sen 2a 2 Deer eel sl Allen sy ON Betas
…none 4 0 4094 0 1 ETHERNET0 all none msp 0 (VIOS command)

This command creates a VIOS server that matches the one that was created in "Creating the VIOS logical partition" on page 375 with the HMC UI, which shows the usage of the graphical interface.

Verification of success
As with the previous FSM commands, the syntax is the same; only the smcli prefix was removed. A successful command produces a prompt with no message displayed. To verify that the VIO Server was created, run the lssyscfg command and scan the results for the name of your virtual server, as shown in the following example:

hscroot@itsoHMC1:~> lssyscfg -r lpar -m Server-7954-24X-SN107782B -F name
itsoVIOS6A

To verify the content of the profile that was created as a result, run the lssyscfg command with different parameters, as shown in the following example:

hscroot@itsoHMC1:~> lssyscfg -r prof -m Server-7954-24X-SN107782B --filter "lpar_names=itsoVIOS6A"

IVM CLI method
IVM can have only a single VIOS LPAR. This LPAR is created when the VIOS is installed on a Power compute node and owns all the physical I/O resources. A fraction of the total CPU and memory also is assigned to the VIOS LPAR during the installation of the VIOS. The values can be changed to match the workload that is expected on the VIOS, if wanted, after the VIOS installation completes. After the VIOS is up, the IVM command line is available and can be used to create client LPARs.
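The IVM CLI uses the same configuration commands as the HMC, without the -m managed system option because IVM manages only one server. The following lines are a minimal sketch of creating a client LPAR from the IVM restricted shell; the LPAR name and the memory and processor values are illustrative assumptions, not values from this scenario:

$ mksyscfg -r lpar -i "name=itsoAIX1,lpar_env=aixlinux,min_mem=1024,desired_mem=4096,max_mem=8192,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap"
$ lssyscfg -r lpar -F name,state        # confirm that the new client LPAR is listed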
…ns. The HMC must be able to communicate directly with the Flexible Service Processor (FSP) on the compute nodes. This requirement means the HMC must be able to reach the same IP subnet as the CMM.

7.2 Chassis Management Module

This section gives a brief overview of the CMM, as shown in Figure 7-2. Usage information about the CMM when it is used to manage a Power-based compute node also is described in 7.7, "Management by using a CMM" on page 204.

Figure 7-2 Chassis Management Module

Detailed CMM setup and overall usage information is not covered in this document. For more information, see Implementing Systems Management of IBM PureFlex System, SG24-8060, which is available at this website:

http://www.redbooks.ibm.com/abstracts/sg248060.html

For a hardware overview of the CMM, see IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984, which is available at this website:

http://www.redbooks.ibm.com/abstracts/sg247984.html

7.2.1 CMM overview

The CMM is a hot-swap module that provides single-chassis management and is used to communicate with the management controller i…
Figure 4-8 IBM Flex System p270 Compute Node block diagram

The p270 compute node now has its POWER7+ processors packaged as dual-chip modules (DCMs). Each DCM consists of two POWER7+ processors. DCMs installed in the p270 consist of two six-core chips, which give 12 processor cores per socket. In Figure 4-8, you can see the two DCMs, each with eight memory slots per module. Each module is connected to a P7IOC I/O hub, which connects to the I/O subsystem (I/O adapters and local storage). At the bottom of the block diagram, you can see a representation of the flexible service processor (FSP) architecture.

Introduced in this generation of Power Systems compute nodes is a secondary SAS controller card, which is inserted in the ETE connector. This secondary SAS controller allows independent assignment of the internal drives to separate partitions.

4.5 IBM POWER7+ processor

The IBM POWER7+ processor is an evolution of the POWER7 architecture and represents an improvement in technology and associated computing capability of the POWER7. The multi-core architecture of the POWER7+ processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and Reliability, Availability, and Serviceability (RAS).
Figure 7-173 ESA activation from the VIOS cfgassist option (the activation output reports a failed connectivity test (0980-007) that can be corrected with the Configure Service Connectivity SMIT option, and confirms that the IBM.ESAGENT subsystem was added and started; the Electronic Service Agent component collects information about system resources, configuration, utilization, performance, capacity planning, system failure logs, and preventive maintenance event monitoring, and excludes financial, statistical, and personal data and business plans)

The ESA service can be stopped and started as needed by clicking cfgassist → Electronic Service Agent → Stop Electronic Service Agent or by clicking cfgassist → Electronic Service Agent → Start Electronic Service Agent.

With ESA now active, clicking Electronic Service Agent from the IVM navigation area presents an active link in the work area, as shown in Figure 7-174.
…nt values for both firmware locations. Also, the warning that the system will reboot is prominently displayed.

Figure 7-167 System firmware update information and execution confirmation (the UPDATE AND MANAGE FLASH screen reports that the image is valid and would update the temporary image to FW773.00 (AF773_021), that the new level for the permanent image would be FW773.00 (AF773_019), and that the current permanent and temporary images are both FW773.00 (AF773_019); the file /var/update_flash_image can be removed after the reboot, and continuing will reboot the system)

By using the arrow keys, highlight YES and press Enter to continue. The VIOS operating system shuts down and the Power compute node restarts. Unlike the ldfware command, this method runs even if partitions other than the VIOS are active.

9. When the system is restarted, verify the new firmware levels from the padmin user ID and the lsfware command, as shown in Figure 7-168.

$ lsfware
system:AF773_021 (t) AF773_019 (p) AF773_021 (t)

Figure 7-168 Validating the system firmware update

7.10.4 Service and support

IBM Electronic Service Agent (ESA) is used to monitor hardware problems and send the information automatically to IBM support. It includes the…
…nt to perform the installation. The RHEL installer graphical Welcome window opens.

8. Select a preferred language for the installation process.

9. Select the keyboard language.

10. Select the storage devices to use for the installation, as shown in Figure 12-40. For virtual disks (hdisks) or SAN disks, select Basic Storage Devices.

Figure 12-40 Select storage devices (Basic Storage Devices installs or upgrades to typical types of storage devices; Specialized Storage Devices installs or upgrades to enterprise devices, such as Storage Area Networks (SANs), and allows FCoE, iSCSI, and zFCP disks to be added and devices that the installer should ignore to be filtered out)

11. Select Fresh Installation (which installs a fresh copy of Red Hat Enterprise Linux and overwrites any existing installation) or Upgrade an Existing Installation, as shown in Figure 12-41.
9. Allocate your resources by running the smit nim_mac_res command and selecting Allocate Network Install Resources, as shown in Figure 9-11. One or more resources, such as the LPP_AIX61_TL04_SP01_REL0944 lpp_source and the SPOT_AIX61_TL04_SP01_REL0944 spot, can be selected with F7.

Figure 9-11 Resource selection

10. Confirm your resource selections by running the smit nim_mac_res command and selecting List Allocated Network Install Resources. Your machine is now created and your resources are assigned.

11. Start the installation from the NIM by running the smit nim_mac_op command.

12. Select your machine, as shown in Figure 9-10 on page 450.

13. Select the option to perform a BOS installation by selecting bos_inst (perform a BOS installation), as shown in Figure 9-12.
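The allocation and BOS installation that steps 9 through 13 perform through SMIT can also be run directly from the NIM master command line. The following lines are a minimal sketch only; the client machine name (itsoAIX1) is an illustrative assumption, while the lpp_source and spot names are the ones shown in Figure 9-11:

# nim -o bos_inst -a source=rte -a lpp_source=LPP_AIX61_TL04_SP01_REL0944 -a spot=SPOT_AIX61_TL04_SP01_REL0944 -a accept_licenses=yes itsoAIX1
# lsnim -l itsoAIX1        # check the Cstate of the client while the installation runs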
…ntegration, virtualization, or cloud. It covers the setup of one node.

► PureFlex Virtualized: This offering is a five-day Standard services offering that includes all tasks of the PureFlex Introduction and expands the scope to include virtualization, another FC switch, and up to four nodes in total.
► PureFlex Enterprise: This offering provides advanced virtualization, including VMware clustering, but does not include external integration or cloud. It covers up to four nodes in total.
► PureFlex Cloud: This pre-packaged offering, in addition to all the tasks that are included in the PureFlex Virtualized offering, adds the configuration of the SmartCloud Entry environment, basic network integration, and implementation of up to 13 nodes in the first chassis.
► PureFlex Extra Chassis Add-on: This services offering extends the implementation to another chassis, with up to 14 nodes and up to two virtualization engines (for example, VMware ESXi, KVM, or PowerVM VIOS).

As shown in Table 2-18 on page 48, the four main offerings are cumulative; for example, Enterprise takes seven days in total and includes the scope of the Virtualized and Introduction services offerings. PureFlex Extra Chassis Add-on is per chassis.

Table 2-18 PureFlex Service offerings: PureFlex Extra Chassis Add-on, 5 days; PureFlex Cloud, 10 days; PureFlex Enterprise, 7 days; PureFlex Intro and PureFlex Virtua…
Figure 12-24 Software package installation (the installer reports progress, for example, "Packages completed: 15 of 1152" while installing glibc-common for ppc64)

32. After the installation of the packages, the virtual server reboots and you are prompted to change installation media, as shown in Figure 12-25 and Figure 12-26. Use the unloadopt and loadopt commands, as described in step 26 on page 574, to change the virtual media; a short command sketch follows this step sequence.

Figure 12-25 Media change request
Figure 12-26 Insert IBM Installation Toolkit CD

33. For Red Hat Enterprise Linux installations, the RHEL Setup Utility appears, as shown in Figure 12-27. Select the tools as needed. For more information about the utility, see this website:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/
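The media change in step 32 is done on the VIOS that hosts the virtual optical device. The following lines are a minimal sketch only; the virtual target device name (vtopt0) and the media image names are illustrative assumptions that must match your media repository:

$ lsrep                                   # list the images that are available in the media repository
$ unloadopt -vtd vtopt0                   # eject the image that is currently loaded
$ loadopt -vtd vtopt0 -disk RHEL64_DVD1   # load the requested image into the virtual optical device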
…nterrupt; N_Port ID Virtualization; N_Port Virtualization; non-volatile random access memory; operating system; Open Shortest Path First; personal computer; Peripheral Component Interconnect; PCI Express; Personal Communications; power distribution unit; power factor; Priority-based Flow Control; process ID; Proofs of Entitlement; preventive service planning; power supply unit; program temporary fix; port VLAN ID; Pre-boot eXecution Environment; quad data rate; redundant array of independent disks; random access memory; remote access services; row address strobe; Role Based Access Control; registered DIMM; Remote Direct Memory Access; Red Hat Enterprise Linux; Red Hat network

RIO  remote I/O
RIP  Routing Information Protocol
RMC  Resource Monitoring and Control
RoCE  RDMA over Converged Ethernet
ROI  return on investment
ROM  read-only memory
RPM  Red Hat Package Manager
RSA  Remote Supervisor Adapter
RSS  Receive-side scaling
RTE  Remote Terminal Emulator
RX  receive
SAN  storage area network
SAS  Serial Attached SCSI
SATA  Serial ATA
SCM  Supply Chain Management
SCP  secure copy
SCPF  start control program function
SCSI  Small Computer System Interface
SDD  Subsystem Device Driver
SDMC  Systems Director Management Console
SEA  Shared Ethernet Adapter
SFP  small form-factor pluggable
SFT  switch fault tolerance
SLES  SUSE Linux Enterprise Ser…
SLI; SMIT; SMP; SMS; SMT; SMTP
…ntity for establishing a multiport trunk. The VLAG-capable switches synchronize their logical view of the access layer port structure and internally prevent implicit loops. The VLAG topology also responds more quickly to link failure and does not result in unnecessary MAC address flooding.

VLAGs are also useful in multi-layer environments for both uplink and downlink redundancy to any regular LAG-capable device, as shown in Figure 5-3 on page 147.

Figure 5-3 VLAG with multiple layers

5.5.2 SAN and Fibre Channel redundancy

SAN infrastructure availability can be achieved by implementing certain techniques and technologies. Most of them are widely used standards. This section describes the most common technologies that can be implemented in an IBM Flex System environment to provide high availability for SAN infrastructure.

In general, a typical SAN fabric consists of storage devices, client adapters, SAN devices (such as SAN switches or gateways), and the cables that connect them. The potential failures in a SAN include port failures (both on the switches and in storage), cable failures, and device failures.

Consider the scenario of…
…number to the nearest multiple of 128 MB.

Figure 8-66 IVM Create Partition: Memory window (Dedicated memory mode; total system memory 32 GB (32768 MB), current memory available for partition usage 26.62 GB (27264 MB), assigned memory 4 GB)

Minimum and maximum values for IVM usage: You cannot specify minimum or maximum settings while you are using the wizard. The value that is specified here is the desired value. Minimum and maximum values can be edited after the virtual server is created (a command sketch follows Figure 8-67).

5. Complete the following steps in the Processors window, as shown in Figure 8-67:

a. Select the processor mode of dedicated or shared. In our example, Shared is selected.
b. Select the number of processors from the drop-down menu. When the shared option is selected, this value represents the number of desired virtual processors. When the dedicated option is selected, the value represents the number of cores that are assigned to the LPAR. Our example assigns 4 virtual processors.
c. Click Next to open the Ethernet window.

Figure 8-67 IVM Create Partition: Processors window (in shared mode, every assigned virtual processor uses 0.1 physical processors; in dedicated mode, every assigned processor uses 1 physical processor; total system processors: 24)
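As noted above, the minimum and maximum values can be adjusted after the virtual server is created, either from the IVM GUI or from the IVM command line. The following line is a minimal sketch only; the LPAR name and memory values are illustrative assumptions, and memory limits can be changed only while the virtual server is not activated:

$ chsyscfg -r prof -i "lpar_name=itsoAIX1,min_mem=1024,desired_mem=4096,max_mem=8192"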
Figure 8-86 Creating a full system partition with the FSM

Complete the fields that are shown in Figure 8-86 with the following information:
► Partition ID
► Partition name: Assign a name, such as full_sys_par, for example

2. Click Next to assign a profile and all resources.

4. The Partition Profile window opens, as shown in Figure 8-87. Complete the fields with the following information:
► Profile name: For example, new_profile
► Select Use all the resources in the system

Figure 8-87 shows the Partition Profile page of the Create LPAR wizard; a profile specifies how many processors, how much memory, and which I/O devices and slots…
► Support of up to 16 managed chassis
► Support of up to 5,000 managed elements
► Auto-discovery of managed elements
► Overall health status
► Monitoring and availability
► Hardware management
► Security management
► Administration
► Network management (Network Control)
► Storage management (Storage Control)
► Virtual machine lifecycle management (VMControl Express)

The IBM Flex System Manager Advanced feature set offers all the capabilities of the base feature set and the following features:
► Image management (VMControl Standard)
► Pool management (VMControl Enterprise)

3.5 Power supplies

A minimum of two and a maximum of six power supplies can be installed in the Enterprise Chassis, as shown in Figure 3-5 on page 64. All power supply modules are combined into a single power domain in the chassis, which distributes power to each of the compute nodes and I/O modules through the Enterprise Chassis midplane.
…o a single set of logical unit numbers (LUNs), up to a maximum of eight host VIOS partitions. Normally, a dual-VIOS host environment is set up with the IBM i LPAR as a client of both VIOS partitions. This configuration allows resiliency of the client LPAR should a VIOS host partition fail or need to be brought down for service. Figure 11-1 shows storage that is addressed by using a basic dual VIOS that is hosting an IBM i client partition.

Figure 11-1 Overview of storage virtualization for IBM i client LPARs

With Power Systems compute nodes, IBM i partitions do not have direct access to any physical I/O hardware on the node, in the chassis, or outside the chassis. This lack of direct access has the following implications:
► Disk storage is provided by attaching LUNs on a Fibre Channel storage area network (SAN) to VIOS, then directly virtualizing them to IBM i by using the Flex System Manager (FSM) interface.
► Optical media access for IBM i installation is provided by using an external USB DVD or through the VIOS-supplied virtual media library.
► N_Port ID Virtualization (NPIV) attached storage, including tape media libraries, can be used for:
- Save and restore with a Fibre Channel attached tape library. There is a limit of 64 unique LUNs per NPIV port before IBM i re…
…o configure them
► Change the startup sequence in a compute node
► Set the date and time
► Use a remote console for the compute nodes
► Enable multi-chassis monitoring
► Set power policies and view power consumption history
► Support for IBM Feature on Demand
► Support for IBM Fabric Manager

The CMM automatically detects installed compute and storage nodes and modules in the Enterprise Chassis and stores vital product data (VPD) on them.

7.2.2 CMM user interfaces

The CMM supports a web-based graphical user interface that provides a way to perform chassis management functions within a supported web browser. You can also perform management functions through the CMM command-line interface (CLI). Both the web-based and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM, or from any system that is connected to the same network. The default security setting is Secure, so HTTPS or SSH is required to connect to the CMM.

7.2.3 CMM default network information

By default, the CMM is configured to respond to Dynamic Host Configuration Protocol (DHCP) first before a static IPv4 address is used. If a DHCP response is not received within 3 minutes of the CMM Ethernet port connecting to the network, the CMM uses the factory default IP address and subnet mask. During this 3-minute interval, the CMM is inaccessible. The IP behavior can be changed during the…
…o the number of compute nodes installable.
► Yellow: Some restrictions apply, and some bays must be left unpopulated.

Table 3-4 Maximum supported number of compute nodes for installed power supplies (the columns cover power supply configurations of N+1 with N=5 (6 total), N+1 with N=4 (5 total), N+1 with N=3 (4 total), and N+N with N=3 (6 total), repeated for each power supply type)

Power configurator: For more information about exact configuration support, see the Power configurator (System x), which is available at this website:

http://www.ibm.com/systems/bladecenter/resources/powerconfig.html

IBM Systems Energy Estimator, which is used for regular Power rack servers, does not support Power Systems compute nodes.

The 2100 W and 2500 W power supplies are 80 PLUS Platinum certified. The 80 PLUS Platinum standard is a performance specification for power supplies that are used in servers and computers. To meet this standard, the power supply must have an energy efficiency rating of 90% or greater at 20% of rated load, 94% or greater at 50% of rated load, and 91% or greater at 100% of rated load, with a power factor of 0.9 or greater. For more information about the 80 PLUS Platinum standard, see this website:

https://www.80PLUS.org

The Enterprise Chassis allows configurations of power policies to give N+N or N+1 redundancy.

Tip: N+1 in this context means a single backup device for N number of devices. Any component…
…oVIOS6B
Enter a number (1-2): 1
The following objects of type "profile" were found. Please select one:
1. DefaultProfile
Enter a number: 1
Enter the source of the installation images [/dev/cdrom]: /home/USERID/dvdimage.v1.iso
Enter the client's intended IP address: 9.42.171.85
Enter the client's intended subnet mask: 255.255.254.0
Enter the client's gateway: 9.42.170.1
Note: To use the adapter's default setting, enter 'default' for speed.
Enter the client's speed [100]: auto
Enter the client's duplex [full]: auto
Enter the numeric VLAN tag priority for the client (0 to 7), 0=none: 0
Enter the numeric VLAN tag identifier for the client (0 to 4094), 0=none: 0
Would you like to configure the client's network after the installation (yes/no)? no

Figure 9-2 Starting the interactive installios command

Network tip: BOOTP and NFS are required for installios between the FSM (or HMC) and the VIOS installation target.

2. As shown in Figure 9-3, you are prompted for which FSM network interface to use to communicate with the new VIOS: eth0 or eth1. Use eth0 if a flat network was implemented. Use eth1 if a diverse data network was selected when the FSM was set up. For more information about these network models for the FSM, see 7.1, "Management network" on page 185.
Creating the virtual server for an IBM i installation is similar to the process that is used for creating a VIOS. Complete the following steps:

1. Set the Environment option to IBM i, as shown in Figure 8-74.

Figure 8-74 Create an IBM i virtual server

2. Click Next to go to the Memory settings. The window that is shown in Figure 8-75 opens. Specify the wanted quantity of memory. Click Next.

Figure 8-75 IBM i virtual server memory (dedicated memory; total system memory 128.0 GB, memory available 80.72 GB)

3. In the processor settings window, choose a quantity of processors for the virtual server, as shown in Figure 8-76. Click Next.
Figure 7-113 Terminal console access (the SMS main menu, firmware version AF773_021, offers Select Language, Setup Remote IPL (Initial Program Load), Change SCSI Settings, Select Console, and Select Boot Options)

If SOL is not disabled, you receive the error message that is shown in Figure 7-114 when you are trying to open a virtual terminal console to the first partition on a Power compute node. For more information about disabling SOL, see "Serial Over LAN" on page 217.

Figure 7-114 Console open failure to partition ID 1 when SOL is enabled (the open failed; the session may already be open on another management console, or the server may not be ready to accept connections)

Opening a virtual terminal console session with the HMC CLI
The other alternative that is available with the FSM to access SMS menus for Power system partitions is to use the CLI-based vtmenu. Complete the following steps.

vtmenu and IBM i: The FSM vtmenu can be used only for VIOS, AIX, and PowerLinux partitions. IBM i does not use SMS and uses 5250 emulation for its system console. For more information, see 11.3, "Configuring an IBM i console connection" on page…
Figure 9-28 Using the HMC partition wizard to add the USB port (the Physical I/O panel lists the physical I/O resources for the managed system, including the CN4058 8-port 10Gb Converged Adapter, the FC3172 2-port 8Gb Fibre Channel Adapter, the PCI-E SAS Controller, and the PCI-to-PCI bridge; selected adapters are added to the profile as Desired or Required)

When you are using the HMC to modify a partition, click Configuration → Manage Profiles → profile name → I/O, as shown in Figure 9-29, to assign the PCI-to-PCI bridge to the wanted partition. Typically, this device is added as Desired to allow relocation or removal later.
…oftware-based IVM, as shown in Figure 7-6.

Figure 7-6 IVM login panel

For more information, see Integrated Virtualization Manager for IBM Power Systems Servers, REDP-4061, which is available at this website:

http://www.redbooks.ibm.com/abstracts/redp4061.html

7.5.1 IVM overview

The IVM is a simplified hardware management solution that inherits most of the HMC features. It manages a single server and is accessed by using a web browser on a workstation. It is designed to provide a solution that enables the administrator to reduce system setup time and to make hardware management easier at a lower cost.

IVM provides a management model for a single system. Although it does not offer all of the HMC capabilities, it enables the use of IBM PowerVM technology. IVM targets the small and medium systems that are best suited for this product.

IVM is an enhancement of the Virtual I/O Server (VIOS), the product that enables I/O virtualization in IBM Power Systems. It enables management of Virtual I/O Server functions and uses a web-based graphical interface that enables the administrator to remotely manage the server with a browser.
Figure 8-24 HMC logon page (Hardware Management Console V7R7.7.0.2; enter a user ID and password and click Logon)

3. Enter a valid HMC user ID and password and then click Log in. The Welcome page opens, as shown in Figure 8-25.

Figure 8-25 shows the HMC Welcome page. The navigation pane provides Systems Management (servers, logical partitions, managed systems, and frames), System Plans (to import, deploy, and manage system plans), HMC Management, Service Management, Updates, and the Status Bar, along with links to additional resources, such as the Guided Setup Wizard and the online Installing and Configuring the HMC guide.
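The HMC version that is shown on the logon page can also be confirmed from the HMC command line. This is a minimal sketch only; the HMC host name shown is the one used elsewhere in this document:

$ ssh hscroot@itsoHMC1
hscroot@itsoHMC1:~> lshmc -V        # display the HMC version, release, and build level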
…ol)
► Storage management (Storage Control)
► Virtual machine lifecycle management (VMControl Express)

The FSM advanced feature set offers all of the capabilities of the base feature set plus the following features:
► Image management (VMControl Standard)
► Pool management (VMControl Enterprise)

FSM management software includes the following features:
► Monitoring and problem determination:
- A real-time, multichassis view of hardware components, with overlays for more information
- Automatic detection of issues in your environment through event setup that triggers alerts and actions
- Identification of changes that might affect availability
- Server resource usage by a virtual machine or across a rack of systems
► Hardware management:
- Automated discovery of physical and virtual servers and interconnections, applications, and supported third-party networking
- Inventory of hardware components
- Chassis and hardware component views
- Hardware properties
- Component names and hardware identification numbers
- Firmware levels
- Usage rates
► Network management:
- Management of network switches from various vendors
- Discovery, inventory, and status monitoring of switches
- Graphical network topology views
- Support for Keyboard, Video, and Mouse (KVM), pHyp, VMware virtual switches, and physi…
…ompute node slot in the chassis. When the FSM is installed into an empty slot, all connections to the chassis management and data networks are made automatically through the midplane of the chassis to the CMM and I/O switches.

After the FSM is installed in the chassis and discovered by the CMM, the FSM setup wizard must be run. The setup wizard requires a virtual console through the compute node's IMMv2 remote console facility or through a KVM that is connected to the breakout cable that is connected to the front of the FSM. The FSM setup wizard starts automatically during the boot process. For more information about this process, see Implementing Systems Management of IBM PureFlex System, SG24-8060, which is available at this website:

http://www.redbooks.ibm.com/abstracts/sg248060.html

7.4 IBM HMC

This section gives a brief overview of the HMC, as shown in Figure 7-5.

Figure 7-5 Desk-side and rack-mounted HMCs

For more information, see IBM Power Systems HMC Implementation and Usage Guide, SG24-7491, which is available at this website:

http://www.redbooks.ibm.com/abstracts/sg247491.html

7.4.1 HMC overview

The HMC runs as an embedded application on an Intel based workstation that can be a desktop or rack-mounted system. The embedded operating system and applications take over the entire system, and no other applications are allowed to be loaded.
…on IP datagram forwarding  548
11.9.3 Configuring an interface  548
11.9.4 Configuring a default route  548
11.9.5 Defining TCP/IP domain  549
11.9.6 Defining a host table  550
11.9.7 Starting TCP/IP  551

Chapter 12. Installing Linux  553
12.1 IBM Installation Toolkit for PowerLinux  554
12.1.1 Using the toolkit  555
12.2 Installing Red Hat Enterprise Linux  581
12.3 Installing SUSE Linux Enterprise Server  592

Abbreviations and acronyms  601

Related publications  605
IBM Redbooks  605
Online resources  606
Help from IBM  606

Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an I…
…on for a chassis component that is categorized by a row of tabs. These details are read-only from the Chassis View tab, but user-changeable options can be modified by clicking Chassis Management → Compute Nodes.

Figure 7-16 shows the active and interactive modes on the System Status view.

Figure 7-16 Power compute node management options from System Status (the Actions menu for the selected node includes Power On, Power Off, Shutdown OS and Power Off, Restart Immediately, Restart with Non-maskable Interrupt (NMI), Restart System Mgmt Processor, Launch Compute Node Console, Manage LEDs, and Boot to SMS Menu; the details area provides tabs for Events, General, Hardware, Firmware, Power, Environmentals, IO Connectivity, SOL Status, Boot Sequence, LEDs, and Boot Mode)

Component IP configuration
The automatic node discovery process of the CMM allows the b…
…on source selection

19. In the Network settings page (see Figure 12-15), enter the host name and DNS server address, select the network card (if there is more than one card listed), and then click Configure to set the permanent IP address of the virtual server after the installation.

Figure 12-15 Network settings for the installed system (specify a fully qualified host name and a DNS server; the network cards for the installed system are listed and can be selected for configuration or to display details)

20. In Figure 12-16, select whether the IP address of the installation is automatic (via DHCP) or manual (static). For a manual selection, enter the details of the fixed IP address, Netmask, and Gateway, click Save, and then click Next.
Figure 6-3 Compute Node with CN4058 adapter and CN4093 Converged Switch

6.1.2 FCoE protocol stack

The FCoE requirement is the use of a lossless Ethernet, for example, one that implements DCB extensions to Ethernet. The structure of FCoE is that the upper layers of FC are mapped onto Ethernet, as shown in Table 6-1 on page 169. The upper layer protocols and services of FC remain the same in an FCoE environment. For example, zoning, fabric services, and similar functions still exist within FCoE. The difference is that the lower layers of FC, including the physical layers, are replaced. Therefore, FC concepts, such as port types and lower-layer initialization protocols, are also replaced by new constructs in FCoE. Such mappings are defined by the FC-BB-5 standard.

Table 6-1 FCoE protocol mapping (the lower Fibre Channel layers, such as FC-2M and FC-2P, are replaced by the FCoE entity over Ethernet)

6.1.3 Converged Network Adapters

Converged Network Adapters (CNAs) are required to service multiple protocol stacks on a single physical adapter. A connection from the CNA connects to a lossless Ethernet switch, such as the EN4093R 10Gb Scalable Switch or the CN4093 10Gb Converged Scalable Switch. The CN4093 10Gb Converged Scalable Switch supports Fibre Channel Forwarder (FCF) serv…
…one of the following partitioning options:
► Automatic on a disk: Installs Linux on the chosen disk, which is conventionally partitioned. Any data that is contained in that specific disk is lost. In the example that is shown in Figure 12-12 on page 565, disk sda (the first and only virtual disk in the virtual server; the other disks are sdb, sdc, and so on) is automatically partitioned by the IBM Installation Toolkit.
► Automatic partitioning using LVM: Creates an LVM-based partitioning scheme using all existing disks and installs Linux on the partitions according to the partitioning scheme. Any data that is contained in those disks is lost.
► Automatic partitioning using SW RAID: Creates a software RAID-based partitioning scheme using all existing disks and installs Linux on the partitions according to the partitioning scheme. Any data that is contained in all disks is lost. This option is available only if you have at least two disks on the system.
► Driver disk: Select whether a driver disk is used for the Linux installation.

More information can be found in the IBM Installation Toolkit User's Guide.

The Installation settings for the target system panel then summarizes the selected Linux distribution, the installation profile, and the disk settings.
…onnections → Add Managed System.

Figure 7-99 Adding a managed system

3. Select Add a managed system and enter an IP address or host name and the password for a CMM supervisor-level user ID, then click OK, as shown in Figure 7-100.

Figure 7-100 Add Managed Systems window (enter the name or IP address of the system to add, or specify a range of IP addresses to discover managed systems in the network; the discovery process can take a long time)

4. Click Add to confirm the addition of the managed system. The Confirm Add Systems dialog notes that adding systems may be a lengthy process, taking anywhere from a few minutes to several hours depending on the network conditions.
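A managed system can also be added from the HMC command line. The following lines are a minimal sketch only; the IP address is the one used in this example, the password option may differ between HMC releases, and the command can take several minutes to complete:

hscroot@itsoHMC1:~> mksysconn -o add --ip 9.42.171.37 --passwd <CMM supervisor password>
hscroot@itsoHMC1:~> lssysconn -r all        # verify the connection state of the new managed system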
…onnectivity, depending on the drive. Support in a PureFlex configuration includes the external USB and Fibre Channel connections. Table 2-8 shows the Multi-Media Enclosure and available PureFlex options.

Table 2-8 Multi-Media Enclosure and options
► 7226 Model 1U3: Multi-Media Enclosure
► 7226-1U3, feature 5763: DVD Sled with DVD-RAM USB Drive
► 7226-1U3, feature 8248: Half-high LTO Ultrium 5 FC Tape Drive
► 7226-1U3: Half-high LTO Ultrium 6 FC Tape Drive

2.4.6 Video, keyboard, mouse option

The IBM 7316 Flat Panel Console Kit that is shown in Figure 2-5 is an option to any PureFlex Express configuration that can provide local console support for the FSM and x86-based compute nodes.

Figure 2-5 IBM 7316 Flat Panel Console

The console is a 19-inch rack-mounted 1U unit that includes a language-specific IBM Travel Keyboard. The console kit is used with the Console Breakout cable that is shown in Figure 2-6. This cable provides serial and video connections and two USB ports. The Console Breakout cable can be attached to the keyboard, video, and mouse (KVM) connector on the front panel of x86-based compute nodes, including the FSM.

Figure 2-6 Console Breakout cable

The CMM in the chassis also allows direct connection to nodes through the internal chassis management network, which communicates with the FSP or IMM2 on the node to allow remote out-of-band management.
…onsists of the following components, disk, and software options:
► IBM Storwize V7000 Controller (2076-124)
► SSDs: 200 GB 2.5-inch, 400 GB 2.5-inch
► HDDs: 300 GB 2.5-inch 10K RPM, 300 GB 2.5-inch 15K RPM, 600 GB 2.5-inch 10K RPM, 800 GB 2.5-inch 10K RPM, 900 GB 2.5-inch 10K RPM, 1 TB 2.5-inch 7.2K RPM, 1.2 TB 2.5-inch 10K RPM
► Expansion Unit (2076-224): Up to nine per V7000 Controller; IBM Storwize V7000 Expansion Enclosure with 24 disk slots
► Optional software: IBM Storwize V7000 Remote Mirroring, IBM Storwize V7000 External Virtualization, IBM Storwize V7000 Real-time Compression

IBM Flex System V7000 Storage Node

IBM Flex System V7000 Storage Node is one of the two storage options that is available in a PureFlex Enterprise configuration. This option uses four compute node bays (two wide by two high) in the Flex chassis. Up to two expansion units also can be in the Flex chassis, each using four compute node bays. External expansion units are also supported.

The IBM Flex System V7000 Storage Node consists of the following components, disk, and software options:
► SSDs: 200 GB 2.5-inch, 400 GB 2.5-inch, 800 GB 2.5-inch
► HDDs: 300 GB 2.5-inch 10K RPM, 300 GB 2.5-inch 15K RPM, 600 GB 2.5-inch 10K RPM, 800 GB 2.5-inch 10K RPM, 900 GB 2.5-inch 10K RPM, 1…
Figure 12-27 Red Hat configuration utility (the Choose a Tool menu offers Authentication configuration, Firewall configuration, Keyboard configuration, Network configuration, RHN Register, and System services)

34. After the process is complete, select Quit to exit the utility.

35. Log in to the Linux distribution, as shown in Figure 12-28.

Figure 12-28 First login after installation (the login banner from the IBM Installation Toolkit for PowerLinux Simplified Setup Tool explains that the tool has not yet been run and that the system can be configured by pointing a browser to https://<server ip or hostname>:6060)

36. Open a browser and enter the following address:

https://<server ip or hostname>:6060

The window that is shown in Figure 12-29 opens. Log in with the credentials you entered in step 21 on page 570. The IBM Installation Toolkit for PowerLinux Simplified Setup Tool guides you through the process of quickly and easily configuring one or more open source workloa…
…ools, such as firmware updates, bootable USB key creation, and clone or restore systems.

For more information about and to download the IBM Installation Toolkit for PowerLinux, see this website:

http://www-304.ibm.com/webapp/set2/sas/f/lopdiags/installtools/home.html

For more information about and to download the Service and Productivity Tools for PowerLinux Servers, see this website:

http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html

12.1.1 Using the toolkit

In this section, we describe the process that is used to install Red Hat Enterprise Linux (RHEL) on a virtual server with the Toolkit. SUSE Linux Enterprise Server (SLES) installation with the IBM Installation Toolkit for PowerLinux is similar to the RHEL installation. The panels that are shown in this section are identical between both distributions. For more information, see the IBM Installation Toolkit for PowerLinux user manual.

The following prerequisites must be met to use the toolkit:
► A VIOS with a media repository.
► Download the ISO file for the IBM Installation Toolkit for PowerLinux DVD and create the media disk in the VIOS media repository (a command sketch follows this list).
► A copy of the installation DVD of the Red Hat Enterprise Linux distribution, made available as a virtual media disk in the VIOS media repository.
► A virtual server (LPAR) for the Linux installation with a virtual disk, a virtual Ethernet adapter, and a virtual optical drive.
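The media repository and the virtual optical images that the prerequisites refer to are created on the VIOS. The following lines are a minimal sketch only; the storage pool, repository size, image names, and ISO file names are illustrative assumptions:

$ mkrep -sp rootvg -size 16G                                            # create the media repository
$ mkvopt -name IBMIT_54 -file /home/padmin/ibm-installation-toolkit.iso -ro
$ mkvopt -name RHEL64_DVD1 -file /home/padmin/rhel-server-6.4-ppc64-dvd.iso -ro
$ lsrep                                                                 # confirm that both images are listed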
…option below and then click Next if you want the partition to have all the resources in the system: Use all the resources in the system.

Figure 8-28 HMC Partition Profile window

3. Click Next, then click Next again.

Processor settings
The next step is to choose the type of processing model (shared or dedicated) and the quantities of the selected processor type. This section describes how to create a partition with a dedicated processor.

Complete the following steps to configure a dedicated-processor partition:

1. Select Dedicated and then select Next, as shown in Figure 8-29 on page 378.

Figure 8-29 HMC processor type selection window (Shared assigns partial processor units from the shared processor pool, for example, 0.50 or 1.25 processor units; Dedicated assigns entire processors that can be used only by the partition)

2. Specify the following number of minimum, desired, and maximum processors for the partition, as shown in Figure 8-30 on page 379:
► Minimum processors: The minimum value is the total of processor resources that must be available before the…
…options. You can use this option to choose whether to install graphics software (such as X Window System), to select the file system type (JFS or JFS2), and to enable system backups at any time, as shown in Figure 10-5 on page 495.

Figure 10-5 Install Options window (the menu lists Graphics Software, System Management Client Software, Create JFS2 File Systems, Enable System Backups to install any system (installs all devices), and Install More Software, plus option 0, Install with the settings listed above)

8. After you complete your options selection, you are prompted to confirm your choices, as shown in Figure 10-6.

Figure 10-6 Installation summary (Disks: hdisk0; Cultural Convention, Language, and Keyboard: en_US; JFS2 File Systems Created: yes; Graphics Software: yes; System Management Client Software: yes; Enable System Backups to install any system: yes; Selected Edition: express; a warning states that the Base Operating System installation destroys or impairs recovery of all data on the destination disk, hdisk0)

9. To proceed, select option 1, Continue with Install. The packages are shown a…
…or an introduction to more advanced virtualization features at a highly affordable price.
► PowerVM Standard Edition: PowerVM Standard Edition provides the most complete virtualization functionality for AIX, IBM i, and Linux operating systems in the industry. PowerVM Standard Edition is supported on Power Systems servers and includes features that are designed to allow businesses to increase system usage.
► PowerVM Enterprise Edition: PowerVM Enterprise Edition includes all of the features of PowerVM Standard Edition plus two new industry-leading capabilities that are called Active Memory Sharing and Live Partition Mobility.

You can upgrade from the Express Edition to the Standard or Enterprise Edition, and from the Standard to the Enterprise Edition. Table 8-3 outlines the functional elements of the available PowerVM editions.

Table 8-3 Overview of PowerVM capabilities by edition (Express / Standard / Enterprise):
► Maximum VMs: 1000 per server (Standard); 1000 per server (Enterprise)
► Micro-partitions: Yes / Yes / Yes
► Virtual I/O Server: Yes (single) / Yes (dual) / Yes (dual)
► Management: VMControl, IVM / VMControl, IVM, HMC / VMControl, IVM, HMC
► Multiple Shared Processor Pools: No (Express)
► Live Partition Mobility…
…ork Interface

Figure 8-53 Selecting an interface (the SMIT list shows the available standard Ethernet (enX) and IEEE 802.3 Ethernet (etX) network interfaces)

5. Figure 8-54 shows the fields that are required to configure the VIOS IP address. Enter the IP address information and press Enter to configure the IP address. (A command-line equivalent is shown after step 6.)

Figure 8-54 Entering TCP/IP configuration values (the VIOS TCP/IP Configuration panel prompts for the host name, internet address, network mask, network interface (en0), default gateway, name server address, domain name, and cable type)

6. The IVM GUI should now be accessible from a workstation browser, as described in 7.10.2, "Accessing IVM" on page 299. After the login information is completed for the first time, the Guided Setup view is displayed, as shown in Figure 8-55 on page 406. The Guided Setup is not covered in this document.
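The values that are entered in Figure 8-54 can also be applied in one step with the VIOS mktcpip command. This is a minimal sketch only; the host name, addresses, gateway, name server, and domain shown are illustrative assumptions (substitute the values planned for your VIOS):

$ mktcpip -hostname itsoVIOS6A -inetaddr 9.42.171.85 -interface en0 -netmask 255.255.254.0 -gateway 9.42.170.1 -nsrvaddr 9.42.170.1 -nsrvdomain itso.ibm.com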
…ort → Chassis Management → Compute Nodes.

If specifying a power action for multiple nodes, be aware that in case of an error you are informed only about the nodes that failed executing the action; successful nodes are ignored. Different node types may take different amounts of time to complete the power action, so in some cases the power state is not immediately reflected on the page. In this case, the user may have to perform a refresh (F5) one or more times to see the current state reflected on the page.

Figure 7-136 CMM Compute Node page: Power On options (the Power and Restart Actions menu lists Power On, Power Off, Shutdown OS and Power Off, Restart Immediately, Restart with Non-maskable Interrupt (NMI), Restart System Mgmt Processor, and Boot to SMS Menu for the selected compute nodes)

3. Starting the Power On process by using either method requires a confirmation, as shown in Figure 7-137. Click OK to confirm and continue.

Figure 7-137 CMM compute node Power On confirmation request

4. Figure 7-138 and Figure 7-139…
…ort 10Gb Ethernet Adapter for IBM Flex System.

Tip: To make the most use of the capabilities of the EN4054 adapter, the following I/O modules should be upgraded to maximize the number of active internal ports:
► For the CN4093, EN4093, EN4093R, and SI4093 I/O modules: Upgrade 1 enables all four ports of the adapter.
► For the EN2092 switch: Upgrade 1 is required to use all four ports of the adapter.

If no upgrades are applied to the Flex System switches, only two ports per adapter are enabled.

For more information about this adapter, see the IBM Redbooks Product Guide at this website:

http://www.redbooks.ibm.com/abstracts/tips0868.html?Open

4.9.7 IBM Flex System CN4058 8-port 10Gb Converged Adapter

The IBM Flex System CN4058 8-port 10Gb Converged Adapter from Emulex enables the installation of eight 10 Gb ports of high-speed Ethernet or FCoE into an IBM Power Systems compute node. With eight ports, it makes full use of all Ethernet switches in the IBM Flex System portfolio. Table 4-14 lists the ordering part number and feature code.

Table 4-14 Ordering part number and feature code: IBM Flex System CN4058 8-port 10Gb Converged Adapter

The IBM Flex System CN4058 8-port 10Gb Converged Adapter has the following features and specifications:
► Dual-ASIC controller that uses the Emulex XE201 (Lancer) design, allowing logical partitioning
► MSI-X support
► IBM…
461. ortal problems Open a service request Recent Activity Dy 0 serviceable problems require attention 0 service requests being investigated by IBM 0 reguests have been updated in the last 24 hours 0 serviceable problems opened in the last 24 hours Status Not activated Service and Support Manager is actively monitoring for serviceable problems However Electronic Service Agent is net configured for electronic service Getting Started with Electronic Service Agent transmissions Complete the Getting Started wizard to enable the transmission of problems inventory and performance measurement data to IBM Dynamic System Analysis DSA status error Service and A Support Manager encountered a problem trying to werify the status of the DSA collectors Electronic Service Agent has not been configured Complete the Getting Started wizard then try Test connection task to verify connection to backend Common Tasks Manage support files Verify OSA status List Running Power System Repairs setup and Configuration Getting Started with Electronic Service Agent Figure 7 73 Service and Support Manager window 256 IBM Flex System p270 Compute Node Planning and Implementation Guide 2 Click Getting Started with Electronic Service Agent under Setup and Configuration The agent configuration wizard starts as shown in Figure 7 74 Getting Started with Electronic Service Agent Welcome gt Welcome Yo
462. ot result in a client losing access to storage or the network 162 IBM Flex System p270 Compute Node Planning and Implementation Guide Converged networking In this chapter we describe the fundamental information for converged networking on Power Systems compute nodes We also describe the basic configuration of a converged network IBM Flex System This chapter includes the following topics gt 6 1 Introduction on page 164 gt 6 2 Configuring an FCoE network with the CN4093 on page 172 Copyright IBM Corp 2013 All rights reserved 163 6 1 Introduction 164 Converged networking is a combination of multiple network protocols that use disparate physical layers for transmission for example Fibre Channel traffic is transmitted over a separate physical Fibre Channel network while protocols such as TCP IP are transmitted over Ethernet networks Converged networking can reduce the requirement for this disparateness in networking infrastructure commonly converging FCP and TCP IP over a common Ethernet physical layer Fibre Channel storage area networks SANs are regarded as the high performance approach to storage networking Storage targets such as disk arrays and tape libraries are equipped with FC ports that connect to FC switches Host servers are similarly equipped with Fibre Channel host bus adapters HBAs that connect to the same FC switches This means that FC SAN fabrics are a separate and exclusive
463. ot the system Do you wish to continue? Enter 1 Yes or 2 No: 1 Figure 7 157 ldfware command that is used to update system firmware. The command returns the level of the new temporary image and the current values for both firmware locations. Also, a warning that the system will reboot is displayed. 6 Enter 1 and then press Enter to continue. The VIOS operating system shuts down and the Power compute node restarts. When it is used to update system firmware, the ldfware command requires that all partitions except the VIOS LPAR are shut down. An error message is displayed with the count of active partitions if this condition is not met. 7 When the system restarts, verify the new firmware levels from the padmin user ID with the lsfware command, as shown in Figure 7 158: lsfware  system: AF773_021 (t) AF773_019 (p) AF773_021 (t) Figure 7 158 Validating the system firmware update. Complete the following steps to perform system updates from the built-in diagnostic function: 1 Enter the diagmenu command from the padmin restricted shell or the diag command from root access authority. In either case, the command returns the window that is shown in Figure 7 159. Press Enter to continue. DIAGNOSTIC OPERATING INSTRUCTIONS VERSION 6 1 8 15 801001 LICENSED MATERIAL and LICENSED INTERNAL CODE PROPERTY OF IBM C COPYRIGHTS BY IBM AND BY OTHER
464. oth These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol or indicating US registered or common law trademarks owned by IBM at the time this information was published Such trademarks may also be registered or common law trademarks in other countries A current list of IBM trademarks is available on the Web at http www ibm com legal copytrade shtml The following terms are trademarks of the International Business Machines Corporation in the United States other countries or both Active Memory Micro Partitioning PureFlex AIX POWER PureSystems AIX 5L POWER Hypervisor Redbooks BladeCenter Power Systems Redpaper DB2 Power Systems Software Redbooks logo e Electronic Service Agent POWER6 ServerProven EnergyScale POWER6 Storwize Focal Point POWER7 System i IBM POWER7 Systems System Storage IBM Flex System POWER7 System x IBM Flex System Manager PowerHA Tivoli iDataPlex PowerLinux VMready iSeries PowerVM Workload Partitions Manager The following terms are trademarks of other companies Evolution and Kenexa device are trademarks or registered trademarks of Kenexa an IBM Company Intel Intel Xeon Intel logo Intel Inside logo and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other
465. ou can dynamically add or remove resources from a logical partition LPAR even while the LPAR is running Such resources include processors memory and I O components The ability to reconfigure dynamic LPARs encourages system administrators to dynamically redefine all available system resources to reach the optimum capacity for each defined dynamic LPAR Micro partitioning By using micro partitioning technology you can allocate fractions of processors to a logical partition A logical partition that uses fractions of processors is also known as a Shared Processor Partition or Micro partition Micro partitions run over a set of processors that are called a Shared Processor Pool Within the shared processor pool unused processor cycles can be automatically distributed to busy partitions as needed with which you can right size partitions so that more efficient server usage rates can be achieved By implementing the shared processor pool by using micro partitioning technology you can create more partitions on a server which reduces costs IBM Flex System p270 Compute Node Planning and Implementation Guide Virtual processors are used to allow the operating system manage the fractions of processing power that is assigned to the logical partition From an operating system perspective a virtual processor cannot be distinguished from a physical processor unless the operating system was enhanced to be made aware of the difference Physical processo
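To make the fractional allocation concrete, a micro-partition is typically defined with processing units and virtual processors similar to the following profile attributes. This is only a sketch: the attribute names follow the HMC/FSM mksyscfg conventions that are used later in this book, the partition name, managed system name, and values are illustrative assumptions, and on an HMC the same command is used without the smcli prefix.

   # 0.5 processing units spread across 2 virtual processors, uncapped, in the shared processor pool
   smcli mksyscfg -r lpar -m "Server-7954-24X-SN107782B" -i "name=itsoAIX1,profile_name=default,lpar_env=aixlinux,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=128,min_mem=1024,desired_mem=4096,max_mem=8192"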
466. ower Saving options are available for Compute Nodes via the CMM on the same Power tab as Power Capping gt No Power Savings Indicates that there is no power saving policy set gt Static Low Power Saver Static Low Power Saver mode lowers the processor frequency and voltage on a fixed amount which reduces the energy consumption of the Compute Node while still delivering predictable performance This percentage is predetermined to be within a safe operating limit and is not user configurable The Compute Node is designed for a fixed frequency drop of almost 50 down from the nominal frequency the actual value depends on the type and configuration 122 IBM Flex System p270 Compute Node Planning and Implementation Guide Static Low Power mode is not supported during boot or reboot although it is a persistent condition that is sustained after the boot when the system starts running instructions gt Dynamic Power Saver DPS DPS mode varies processor frequency and voltage based on the usage of the POWER7 processors Processor frequency and usage are inversely proportional for most workloads which implies that as the frequency of a processor increases its usage decreases given a constant workload DPS mode makes the most of this relationship to detect opportunities to save power that are based on measured real time system usage When a system is idle the system firmware lowers the frequency and voltage to power energy saver mode
467. ower node management 191 This card is one of the features that makes the FSM unique when it is compared to other nodes that are supported by the chassis The management network adapter provides a physical connection into the private management network of the chassis so that the software stack has visibility into the data and management networks The preinstallation contains a set of software components that are responsible for performing certain management functions These components must be activated by using the available IBM Feature on Demand FoD software entitlement licenses and they are licensed on a per chassis basis You need one license for each chassis you plan to manage The management node comes standard without any entitlement licenses so you must purchase a license to enable the required FSM functionality There are two versions of IBM Flex System Manager base and advanced PureFlex note In a PureFlex configuration FSM base is included as part of the configuration and is licensed for the total number of chassis that is included in the original order FSM advanced is optional in all PureFlex configurations The FSM base feature set offers the following functionality Supports up to 16 managed chassis Supports up to 5 000 managed elements Auto discovers managed elements Provides overall health status Monitoring and availability Hardware management Security management Administration Network management Network Contr
468. own in Figure 9 38. The name parameter specifies the wanted name in the media repository. The file parameter specifies the original file name in /home/padmin. The lsrep command is used again to verify the addition to the media repository: mkvopt -name AIX7TL1SP1 -file AIX71TL1SP01.iso  lsrep  Size(mb) Free(mb) Parent Pool Parent Size Parent Free 10198 6905 rootvg 40896 6272  Name File Size Optical Access AIX7TL1SP1 3293 None ro Figure 9 38 Adding an ISO image to the media repository. After the addition to the media repository is verified, the original file can be deleted from /home/padmin if necessary. Creating the virtual target device and assigning the media: A client/server virtual SCSI adapter pair is required, as it is in the method of using the VIOS to virtualize a physical optical device to a client virtual server or partition. For more information, see 9 5 2 Preparing for a physical optical device virtualized by the VIOS on page 467. The mkvdev command with the -fbo flag is used to create a file-backed optical virtual target device. This device is assigned to a vhost that is associated with the wanted virtual server or partition. Figure 9 39 on page 471 shows the mkvdev command that is used to create virtual target devices that are assigned to partition 2 because vhost0 is associated with that partition. The lsmap -all command shows vtopt0 is assigned to virtual server or partition 2, but the backing device is s
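The following commands summarize this sequence on the VIOS. This is a sketch based on the example above; the vhost, virtual target device, and media names (vhost0, vtopt0, AIX7TL1SP1) are the ones used in this scenario and will differ on other systems.

   mkvdev -fbo -vadapter vhost0           # create a file-backed optical device (for example, vtopt0) on the client's vhost
   loadopt -vtd vtopt0 -disk AIX7TL1SP1   # load the ISO image from the media repository into the device
   lsmap -vadapter vhost0                 # confirm that vtopt0 now lists the image as its backing device
   unloadopt -vtd vtopt0                  # unload the media when the installation is complete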
469. p270 Compute Node Planning and Implementation Guide Note The IBM Flex System EN4132 2 port 10 Gb RoCE Adapter is only supported in I O adapter slots 2 3 and 4 This card cannot be installed in I O adapter slot 1 For more information about this adapter see the IBM Redbooks Product Guide that is available at this website http www redbooks ibm com abstracts tips0913 html 0pen 4 9 9 IBM Flex System IB6132 2 port QDR InfiniBand Adapter The IBM Flex System IB6132 2 port QDR InfiniBand Adapter from Mellanox provides the highest performing and most flexible interconnect solution for servers that are used in Enterprise Data Centers High Performance Computing and Embedded environments Table 4 15 lists the ordering part number and feature code Table 4 15 Ordering part number and feature code IB6132 2 port QDR InfiniBand Adapter The IBM Flex System IB6132 2 port QDR InfiniBand Adapter has the following features and specifications ConnectX2 based adapter one ASIC Virtual Protocol Interconnect VPI InfiniBand Architecture Specification V1 2 1 compliant IEEE Std 802 3 compliant PCI Express 2 0 1 1 compatible through an x8 edge connector up to 5 GTps Processor offload of transport operations CORE Direct application offload GPUDirect application offload Unified Extensible Firmware Interface UEFI Wake on LAN WoL RDMA over Converged Ethernet RoCE End to end QoS and congestion control Hardware based I O virtuali
470. partition Creating the media repository The media repository requires a VIOS storage pool. The storage pool that is used can be the default rootvg storage pool, or another pool can be created. Another storage pool requires another physical volume; a best practice is to have other volumes available for creating more storage pools. In this simplified example, we create the media library (or repository) in rootvg. The lsrep command is used to determine whether a media repository exists, as shown in Figure 9 36. Only one media repository can exist on a VIOS. lsrep  The DVD repository has not been created yet. Figure 9 36 Checking the VIOS for an existing media repository. Use the VIOS mkrep command to create a media repository. Figure 9 37 on page 470 shows the mkrep command that is used to create a 10 GB repository in the storage pool rootvg. The size parameter value assumes a value that is available in the storage pool. The lsrep command is used again to verify the new repository: mkrep -sp rootvg -size 10G  Virtual Media Repository Created  Repository created within VMLibrary logical volume  lsrep  Size(mb) Free(mb) Parent Pool Parent Size Parent Free 10198 10198 rootvg 40896 627 Figure 9 37 Creating a media repository. Loading the media repository To import an ISO image into the media repository that was transferred to the VIOS /home/padmin directory, use the mkvopt command as sh
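As a quick reference, the repository setup from this example reduces to the following VIOS commands. The pool name, repository size, and file names are the ones used in this scenario and are only illustrative.

   lsrep                                                        # check whether a media repository already exists
   mkrep -sp rootvg -size 10G                                   # create a 10 GB repository in the rootvg storage pool
   mkvopt -name AIX7TL1SP1 -file /home/padmin/AIX71TL1SP01.iso  # import an ISO image into the repository
   lsrep                                                        # verify the repository contents and remaining free space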
471. password to authenticate Flex System Manager to one or more target systems Then click Request Access to grant all authorized Flex System Manager users access ta the target system s User ID USERIG Bp scsword Request Access Selected targets Hame Access Trust State Server 7954 244 SNi0778 BBNo access B trusted H4 Page 1 of 1 F Fl 1 a Total 1 Figure 7 52 Requesting access to a Power compute node 3 With the access request complete click Close to exit the window and return to the server list view in the content area Chapter 7 Power node management 239 240 Inventory collection For the FSM to accurately manage a Power Systems compute node inventory information must be collected Usage note A Power based compute node is required to be in a power state of at least Standby before the inventory collection job completes without errors The example that is shown in Figure 7 48 on page 236 and Figure 7 49 on page 237 show the power on steps To accomplish this task perform the following steps 1 Right click the server object in the list as shown in Figure 7 53 Select Name Ww HH Serser 7954 248 8N10778 Performance Summary Search the table Access 2 Related Resources gt Topology Perspectives b Create Group IBA FShd Explorer Remove Add ta Automation Hardware Information Inventory Operations F ower OnOff Release Management Security Sy
472. pdate for a Power compute node can be downloaded from IBM Fix Central. This package consists of an RPM and an xml file, as shown in Figure 7 153: ls  01AF773_021_021.rpm  01AF773_021_021.xml Figure 7 153 Power compute node system firmware update files, IVM and IBM Fix Central. Note: When a Power compute node firmware update is requested from Fix Central, the option that includes the packaging for IBM Systems Director does not need to be selected. Only the rpm file is needed for the update process. On the VIOS, create the directory /tmp/fwupdate by using the command that is shown in Figure 7 154 from the padmin user ID (restricted shell): mkdir /tmp/fwupdate Figure 7 154 Directory location for update RPM file. The transfer can be performed as an FTP get on the VIOS directly from IBM Fix Central, or as an FTP put from another workstation to the VIOS. The target of the transfer should be /tmp/fwupdate. Installing the system firmware update The installation process requires two steps: unpacking the update and then the actual installation. Install the system firmware update by completing the following steps: 1 Enter root access authority: oem_setup_env 2 Unpack the RPM file by using the rpm -Uvh --ignoreos /tmp/fwupdate/filename.rpm command. The image file is unpacked to the /tmp/fwupdate directory. The installation process can be completed wit
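The complete unpack-and-install sequence on the VIOS looks similar to the following sketch. The RPM file name matches the example above, but the name of the unpacked image file (shown here with an .img extension) is an assumption; use the file that the rpm command actually places in /tmp/fwupdate.

   oem_setup_env                                            # switch from the padmin restricted shell to root
   rpm -Uvh --ignoreos /tmp/fwupdate/01AF773_021_021.rpm    # unpack the update into /tmp/fwupdate
   exit                                                     # return to the padmin restricted shell
   ldfware -file /tmp/fwupdate/01AF773_021_021.img          # install to the temporary side; the compute node restarts
   lsfware                                                  # after the restart, verify the new firmware levels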
473. pe menu item number and press Enter or select Navigation key Figure 9 21 Select boot options 25 Select option 6 Network as shown in Figure 9 22 Version AF773 033 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Select Device Type Diskette Tape CD DVD IDE Hard Drive Network List all Devices Navigation keys M return to Main Menu ESC key return to previous screen Type menu item number and press Enter or select Navigation key Figure 9 22 Select device type Chapter 9 Operating system installation methods 459 After selecting this option you are prompted again for the network service as you were in Figure 9 18 on page 457 Make the same selection here that is option 1 BOOTP 26 Select the same network adapter that you selected previously as shown in Figure 9 23 Version AF773_ 033 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Select Device Device Current Device Number Position Name ie 3 Interpartition Logical LAN loc U7954 24X 1077E3B V5 C4 T1 Navigation keys M return to Main Menu ESC key return to previous screen Type menu item number and press Enter or select Navigation key Figure 9 23 Network adapter selection 460 IBM Flex System p270 Compute Node Planning and Implementation Guide 27 In the Select Task window select option 2 Normal Mode Boot as shown in Figure 9 24 SMS 1 7 c Copyright IBM Corp 2000 2008 All rights reserved Se
474. perations Manager, SG24-7464 gt Implementing IBM Systems Director Active Energy Manager 4.1.1, SG24-7780 gt Implementing Systems Management of IBM PureFlex System, SG24-8060 gt Integrated Virtualization Manager for IBM Power Systems Servers, REDP-4061 gt NIM from A to Z in AIX 5L, SG24-7296 gt Positioning IBM Flex System 16 Gb Fibre Channel Fabric for Storage-Intensive Enterprise Workloads, REDP-4921 gt Storage and Network Convergence Using FCoE and iSCSI, SG24-7986 gt TotalStorage Productivity Center V3.3 Update Guide, SG24-7490. You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and other materials at this website: http://www.ibm.com/redbooks Online resources The following websites are also relevant as further information sources: gt IBM US Announcement letter for the p270: http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=ca&htmlfid=897/ENUS113-064 gt IBM Flex System p270 Compute Node product page: http://ibm.com/systems/flex/hardware/servers/p270 gt IBM Flex System Information Center: http://publib.boulder.ibm.com/infocenter/flexsys/information gt IBM Flex System p270 Compute Node Installation and Service Guide: http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.7954.doc/printable_doc.html gt IBM Redbooks Product Guides for IBM Flex System servers and options
475. physical I O adapters might be limiting To make full use of the virtualization capabilities that are provided by the POWER Hypervisor and the VIOS together a virtual server for the VIOS is normally created The following simplified examples are used only to demonstrate the various techniques and might not use best practices Also they should not be considered as recommendations of configurations This simple configuration that is used in these examples is based on a p270 Compute Node and a single VIOS All of the installation physical adapters are assigned to this VIOS A simple virtual networking configuration is used with three virtual Ethernet adapters defined This section includes the following topics gt 8 5 1 Using the CLI on page 349 gt 8 5 2 GUI methods on page 354 gt 8 5 3 Modifying the VIOS profile on page 399 8 5 1 Using the CLI Many integrators and system administrators make extensive and efficient use of the CLI rather than use a graphical interface for their virtual server creation and administration tasks Tasks can be scripted and often the tasks are completed faster by using the command line Scripts In many cases existing scripts that were written for use on an HMC can run unchanged on FSM Similarly scripts that are written to run on an HMC usually run on IVM managed system with minor changes When you are using any of the command line methods to create a virtual server or LPAR the
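For example, a script can verify the result of such tasks by listing the virtual servers on a host with a single command. The managed system name below is the one used in the examples in this chapter; on an HMC the same command is used without the smcli prefix.

   smcli lssyscfg -r lpar -m "Server-7954-24X-SN107782B" -F name,lpar_id,state   # prints one line per virtual server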
476. porary partition is shut down and deleted and the server remains in an Operating state Power On Server 7954 24X SH10 7 787B To power onthe managed system select a Power on option and click DE Poweron option Normal M When you select Mamma jon the Partition Start Policy defines how the n Hardware Discovery The current setting for the Partition Start Poltetomomennanageccsrstem is User Initiated Lise the Properties task for the managed system to change the Partition Start Policy Figure 7 107 HMC managed server Power On options For this example select Normal from the drop down list then click OK The Power On window closes and returns to the work pane view As the server powers up reference codes are displayed that indicate the various stages of the Power On process Figure 7 108 on page 286 shows an early reference code and the final status after the Power On process completes Chapter 7 Power node management 285 Systems Management Servers Views Table se ee ee Filter Tasks Views Available Select Mame a Status a Processing Available a Reference Units Memory GB Code Initializing Max Page Size Systems Management Servers 22 625 C100C1FF View Table eee ee Filter Tasks Views Available Select Name a Stare a Processing Available a Reference Units Memory GB Code E
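The same power-on action can be performed from the HMC command line, which is useful in scripts. This is a sketch; the managed system name is the one used in this example.

   chsysstate -m Server-7954-24X-SN107782B -r sys -o on    # power on the managed system (Normal power-on)
   lsrefcode -r sys -m Server-7954-24X-SN107782B           # display the reference codes while the server initializes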
477. prise indicator feature code AAS feature XCC feature Description code code EFDC Not applicable IBM PureFlex System Enterprise Indicator Feature Code EVD1 Not applicable IBM PureFlex System Enterprise with PureFlex Solution for SmartCloud Desktop Infrastructure 2 5 1 Enterprise configurations PureFlex Enterprise is available in a single or multiple chassis up to three chassis per rack configuration as a traditional Ethernet and Fibre Channel combination or a converged solution that uses Converged Network Adapters CNAs and FCoE All chassis in the configuration must use the same connection technology The required storage in these configurations can be a IBM Storwize V7000 or a IBM Flex System V7000 Storage Node Compute nodes can be Power or x86 based or a hybrid combination that includes both The IBM FSM provides the system management Ethernet and Fibre Channel Combinations have the following characteristics gt Power x86 or hybrid combinations of compute nodes gt 1Gbor 10 GbE adapters or LAN on Motherboard LOM x86 only gt 10 GbE switches Chapter 2 IBM PureFlex System 35 gt 16 Gb or 8 Gb for x86 only Fibre Channel adapters gt 16 Gb or 8 Gb for x86 only Fibre Channel switches CNA configurations have the following characteristics gt Power x86 or hybrid combinations of compute nodes gt 10Gb CNAs or LOM x86 only gt 10 Gb Converged Network switch or switches Configurations There are eight
478. products This information contains examples of data and reports used in daily business operations To illustrate them as completely as possible the examples include the names of individuals companies brands and products All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental COPYRIGHT LICENSE This information contains sample application programs in source language which illustrate programming techniques on various operating platforms You may copy modify and distribute these sample programs in any form without payment to IBM for the purposes of developing using marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written These examples have not been thoroughly tested under all conditions IBM therefore cannot guarantee or imply reliability serviceability or function of these programs You may copy modify and distribute these sample programs in any form without payment to IBM for the purposes of developing using marketing or distributing application programs conforming to IBM s application programming interfaces Copyright IBM Corp 2013 All rights reserved X Trademarks IBM the IBM logo and ibm com are trademarks or registered trademarks of International Business Machines Corporation in the United States other countries or b
479. ps 1 on page 582 to step 6 on page 586. At the SUSE welcome prompt (Figure 12 46), start the VNC installer by typing install vnc=1 vncpassword=password, where password is your password. Welcome to SuSE SLE 11 GA  Type install to start the YaST installer on this CD/DVD  Type slp to start the YaST install via network  Type rescue to start the rescue system on this CD/DVD  Welcome to yaboot version 1.3.11.SuSE  Enter help to get some basic usage information  boot: install vnc=1 vncpassword=password Figure 12 46 SUSE Welcome screen. For more information about these tasks, see the Architecture-Specific Installation Considerations chapter in the SLES 11 Deployment Guide, available from https://www.suse.com/documentation/sles11 1 The first window is the installation mode window, as shown in Figure 12 47. Figure 12 47 shows the YaST Installation Mode panel, with the preparation and installation steps listed on the left (Welcome, System Analysis, Time Zone, Server Scenario, Select Mode, Installation Summary, Perform Installation, Check Installation, Hostname, Network, Customer Center, Online Update, Service, Clean Up, Release Notes, Hardware Configuration), the Select Mode choices New Installation, Update, and Repair Installed System, and the option Include Add-On Products from Separate Media. Figure 12 47 Installation Welcome window
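After the installer starts with these boot options, you connect to it from a VNC client on your workstation. This is a sketch; the installer normally listens on VNC display :1 (TCP port 5901), and the address shown is a placeholder for the IP address that the installer reports on the console.

   vncviewer <installer_ip_address>:1     # connect to the graphical YaST installation session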
480. pter on page 116 4 9 12 IBM Flex System FC5054 4 port 16Gb FC Adapter on page 117 4 9 1 I O adapter slots There are two I O adapter slots available on the p270 The I O adapter slots on IBM Flex System nodes are identical in shape form factor There is no onboard network capability in the Power Systems compute nodes other than the Flexible Service Processor FSP NIC interface so an Ethernet adapter must be installed to provide network connectivity We describe the reference codes that are associated with the physical adapter slots in Assigning physical I O on page 370 Slot 1 requirements You must have one of the following I O adapters installed in slot 1 of the Power Systems compute nodes gt EN4054 4 port 10Gb Ethernet Adapter Feature Code 1762 gt EN2024 4 port 1Gb Ethernet Adapter Feature Code 1763 IBM Flex System CN4058 8 port 10Gb Converged Adapter EC24 Chapter 4 Product information and technology 103 A typical I O adapter is shown in Figure 4 19 D PCle connector Midplane connector Adapters share a common size 96 7 mm x 84 8 mm Guide block to ensure proper installation Figure 4 19 Underside of the IBM Flex System EN2024 4 port 1Gb Ethernet Adapter The large connector plugs into one of the I O adapter slots on the system board Also it has its own connection to the midplane of the Enterprise Chassis If you are familiar with IBM BladeCenter systems several o
481. pter window as shown in Figure 8 19 Create Virtual Server Server 7954 24 SN107782B8 Virtual Storage Adapters Name Memory Specify the virtual storage adapters required for this virtual server Processor 5 a as Ethernet Maximum number of virtual adapters 300 Virtual o gt Storage Adapters Edit Delete Adapter ID Connecting Virtual Server Connecting Adapter ID Note 1 You can use the Virtual Storage Management task to define the physical block storage for the VIOS As client partitions added and assigned storage the console will automatically create the SCSI or Fibre Channel server adapters that the cli virtual servers will use for storage access Figure 8 19 Defined virtual storage adapter properties 4 When all virtual storage adapters are defined click Next to save the settings and proceed to the physical adapters window Assigning physical I O Any virtual server can be assigned from installed physical I O adapters from one of the following sources M Expansion cards Integrated SAS Storage controller SAS Storage controller USB PCI to PCI bridge vy y Identifying the I O resource in the FSM configuration menus is necessary to assign the correct physical resources to the intended virtual servers 370 IBM Flex System p270 Compute Node Planning and Implementation Guide Complete the following steps 1 Choose the expansion card and storage controller from the list as shown in Figure 8
482. pters that can be created for an LPAR and that the maximum supported value is 1024 for any LPAR The Adapter ID is described in the following steps Note Set the maximum number of virtual adapters to one more than the highest ID number that you plan to assign If you do not set it correctly the wizard generates an error when assigning ID numbers to virtual adapters that exceed the current setting This value cannot be changed dynamically after a virtual server is activated Chapter 8 Virtualization 383 For this example enter the value 300 in the Maximum virtual adapters field increasing from the default of 10 as shown in Figure 8 34 Create Lpar Wizard Server 7954 24X 5SN107782B Virtual Adapters t Create Partition wf Partition Profile Actions Virtual resources allow for the sharing of physical hardware between logical partitions The current virtual adapter settings are listed below Maximum virtual adapters 300 Number of virtual adapters 2 Optional Settings d i A A Pe A N Bb D F 2 Select Action M Profile Summary Eeo E 5 Select gt Type Adapter ID Server Client Partition Partner Adapter Required Fi Server Serial O Any Partition O Server Serial 1 Any Partition Any Partition Slot Yes Any Partition Slot Yes Total Fjer 2 Saee a Finish Figure 8 34 HMC Virtual Adapters window The first adapters that are created in this example are virtual Ether
483. ption from the drop down menu In our example we selected AIX or Linux 3 Click Next to open the Memory window 414 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 Complete the following steps in the Memory window as shown in Figure 8 66 a Select the dedicated or shared memory mode The shared option is available only if Active Memory Sharing AMS was configured In our example the Dedicated option is selected b In the Assigned memory field enter a value then select a value from the drop down menu In our example we used a value of 4 and a unit of GB c Click Next to open the Processors window Create Partition Memory Name Memory a Me ory p pA f T In dedicated mode the partition uses assigned memory from total system memory In shared mode the m rocessors t i i uses the memory from the system shared memory pool Ethernet SMR You cannot create a partition that uses shared memory because there is no shared memory pool defined fo Storage system If you want to assign shared memory for the partition use the View Modify System Properties Me tiee to exit the wizard and create a shared memory pool to enable shared memory on the system Summary TA p n If you want to assign dedicated memory for the partition specify the amount of memory in multiples of 12 assign for the partition Note If you specify a number that is not a multiple of 128 MB the wizard will round the
484. r Remove Connections Disconnect Another Management Console Add Managed System Hardware Information Figure 7 118 HMC update of the current system software version From the Change Licensed Internal Code window that is shown in Figure 7 119 you can start the update wizard view current system firmware information or select advanced features such as selecting the flash side to use temporary or permanent and reject fix Chapter 7 Power node management 293 3 Select Start Change Licensed Internal Code wizard and click OK to open the Specify LIC Repository window as shown in Figure 7 119 Change Licensed Internal Code Server 7954 24X SN107782B Click Start Change Licensed Internal Code wizard to perform a guided Update of managed system power and 1 0 Licensed Internal Code LIC Click View system information to examine current LIC levels including retrievable levels Click Select advanced features to update managed system and power LIC with more options and additional targeting choices Select the type of action to perform Start Change Licensed Internal Code wizard O View system information O Select advanced features Figure 7 119 Change Licensed Internal Code window 4 The Licensed Internal Code or LIC update code can be in several locations In our example an FTP site is used Select FTP site and click OK to open the FTP Access Information window Specify LIC Repository Server 7954 24X SN107782
485. r lIpar m Server 7954 24X SN107782B i name itsoVIOS6A profile name itsoVIOS6A new lpar_env vioserver lpar_id 1 min_mem 2048 desired_mem 8192 max_mem 10240 proc_mode ded min_ procs 2 desired procs 4 max_procs 6 sharing mode share idle procs active auto _start 0 lpar_io pool ids 1 2 io slots 2101021A none 1 21010218 n one 1 21010238 none 1 21010219 none 0 max_virtual_slots 300 virtual _serial_adapters 0 server 1 any any 1 1 server 1 any any 1 virtua 1_scsi_adapters 5 server 2 102 0 virtual_eth adapters 2 1 4091 1 1 ETHERNETO al1 none 3 1 1 4092 1 1 ETHERNETO al1 none 4 0 4094 0 1 ETHERNETO al1 none msp 0 Chapter 8 Virtualization 9351 352 VIOS command This command creates a VIOS server that matches the one that was created in Creating the virtual server on page 358 with the FSM UI which shows the usage of the graphical interface Verifying success A successful command produces a prompt with no message displayed To verify that the VIO Server was created run the smcli Issyscfg command and scan the results for the name of your virtual server as shown in the following example USERID itsoFSM2 gt smcli Issyscfg r Ipar m Server 7954 24X SN107782B F name itsoVIOS6A To verify the content of the profile that was created as a result run the smcli Issyscfg command with different parameters as shown in the following example USERID itsoFSM2 gt smcli Issyscfg r prof m Server 7954 24
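Cleaned up, the verification commands from this example take the following form. The virtual server and managed system names are the ones used above; on an HMC, omit the smcli prefix.

   smcli lssyscfg -r lpar -m "Server-7954-24X-SN107782B" -F name                            # confirm that itsoVIOS6A is listed
   smcli lssyscfg -r prof -m "Server-7954-24X-SN107782B" --filter "lpar_names=itsoVIOS6A"   # display the profile attributes that were created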
486. re 12 42 Disk space allocation selections 13 Select the software packages to install, as shown in Figure 12 43. The default installation of Red Hat Enterprise Linux is a basic server install. You can optionally select a different set of software now: Basic Server, Database Server, Web Server, Enterprise Identity Server Base, Virtual Host, Desktop, Software Development Workstation, or Minimal. Figure 12 43 RPM packages selection. The software installation process starts. When the VNC installation is complete, the window that is shown in Figure 12 44 opens. The virtual server reboots, the console returns to alphanumeric mode, and you can connect to the server by using Secure Shell (SSH) or Telnet. Congratulations, your Red Hat Enterprise Linux installation is complete. Please reboot to use the installed system. Note that updates may be available to ensure the proper functioning of your system, and installation of these updates is recommended after the reboot. Figure 12 44 End of VNC installation. As the system boots, progress of the operation is displayed as shown in Figure 12 45: Starting cups [ OK ]  Mounting other filesystems [ OK ]  Starting HAL daemon [ OK ]  Starting iprinit [ OK ]  Starting iprupdate [ OK ]  Retrigger failed udev events [ OK ]  Adding udev persistent rules [ OK ]  Starting iprdump [ OK ]  Loading
487. re codes Some features are not listed here for brevity Table 2 5 Components of the chassis and switches AAS feature XCC feature Description code code 7893 92X 8721 HC1 IBM Flex System Enterprise Chassis 7955 01M 8731 AC1 IBM FSM AOTF 3598 IBM Flex System EN2092 1GbE Scalable Switch ESW7 A3J6 IBM Flex System Fabric EN4093R 10Gb Scalable Switch ESW2 A3HH IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch 26 IBM Flex System p270 Compute Node Planning and Implementation Guide 2 4 3 Compute nodes The PureFlex System Express requires at least one of the following compute nodes gt IBM Flex System p24l p260 p270 or p460 Compute Nodes IBM POWER or POWER7 based see Table 2 6 gt IBM Flex System x220 x222 x240 or x440 Compute Nodes x86 based see Table 2 7 Table 2 6 Power Based Compute Nodes ECS4 7954 24X IBM Flex System p270 Compute Node POWER7 Table 2 7 x86 based compute nodes ECS7 7906 25X IBM Flex System x220 Compute Node ECSB 7916 27X IBM Flex System x222 Compute Node 0457 7863 10X IBM Flex System x240 Compute Node ECSB 7917 45X IBM Flex System x440 Compute Node 2 4 4 IBM FSM The IBM FSM is a high performance scalable system management appliance It is based on the IBM Flex System x240 Compute Node The FSM hardware is preinstalled with Systems Management software that you can use to configure monitor and manage IBM PureFlex Systems Chapter 2 IBM PureFlex System 27
488. rea network VLAN case and up to 65 390 65 408 minus 14 minus 4 if VLAN tagging is used gt The POWER Hypervisor presents itself to partitions as a virtual 802 1Q compliant switch The maximum number of VLANs is 4096 Virtual Ethernet adapters can be configured as untagged or tagged following the IEEE 802 1Q VLAN standard Chapter 8 Virtualization 343 344 gt An AIX partition supports 256 virtual Ethernet adapters for each logical partition Aside from a default port VLAN ID the number of additional VLAN ID values that can be assigned per virtual Ethernet adapter is 20 which implies that each virtual Ethernet adapter can be used to access 21 virtual networks gt Each operating system partition detects the VLAN switch as an Ethernet adapter without the physical link properties and asynchronous data transmit operations Any virtual Ethernet can also have connectivity outside of the server if a Layer 2 bridge to a physical Ethernet adapter is configured in a VIOS partition The device that is configured in this fashion is the SEA Important Virtual Ethernet is based on the IEEE 802 1Q VLAN standard No physical I O adapter is required when a VLAN connection is created between partitions No access to an outside network is required for inter partition communication Virtual SCSI The POWER Hypervisor provides a virtual SCSI mechanism for virtualization of storage devices Virtual SCSI allows secure communications between a
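On the VIOS, the Layer 2 bridge (SEA) described above is created with the mkvdev -sea command. The following is a sketch; the device names are assumptions (ent0 as the physical adapter, ent4 as the trunk virtual Ethernet adapter with PVID 1) and must be replaced with the devices on your system.

   mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1   # bridge physical ent0 to the virtual adapter ent4
   lsmap -net -all                                              # verify the new Shared Ethernet Adapter mapping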
489. read all optical media formats through the read interface of the device driver b Only USB tape drives and USB DVD RAM drives can be virtual devices in a client partition For all other USB devices the USB controller must be assigned to a partition for the partition to have access to the USB device Table 4 18 lists the IBM USB devices that are supported for use in the IBM 7226 Multimedia Storage Enclosure Model 1U3 7226 1U3 Table 4 18 Supported USB devices for the IBM 7226 Multimedia Storage Enclosure Model 1U3 7226 1U3 code VIOS AIX and Linux IBM i e Sa sinine use onnan pe e e e a ee anus ae es e ee a The AIX operating system supports the mksysb system backup restore operations by using any of the USB removable media types The AIX operating system does not support using a USB device as a target for an AIX operating system installation The AIX operating system and VIOS only support writing to DVD RAM media but can read all optical media formats through the read interface of the device driver b Only USB tape drives and USB DVD RAM drives can be virtual devices in a client partition For all other USB devices the USB controller must be assigned to a partition for the partition to have access to the USB device 126 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 13 2 Supported non IBM USB devices Table 4 19 lists the non IBM USB device types can attach to the Power Systems compute nodes Due to t
490. red to allow relocation or removal later from the running virtual server General Processors Memory Physical 1 0 Detailed below are the physical I O resources for the hast Select which adapters from the list you would like included in the profile and then add the adapters to the profile as Desired or Required Click on an adapter to view more detailed adapter information 1 0 Virtual Adapters Power Controlling Add as reguired l Add as desired Remove Properties EPLET Select Location Code UF8AE 001L W2Z500E4 P1 Ci s Li l U78AE 001 WZS5S00E4 P1 C19 L1 U78AE 001 WZS5S00E4 P1 T2 Fi U78AE 001 WZS500E4 P1 T1 l U78BAE 001 WZ500E4 P1 C18 L2 Total 5 Filtered 5 Displayed 5 Selected 1 9K cancel Description ete EN4054 4 port 10Gb Ethernet Adapter FC3172 2 port 8Gb Fibre Channel Adapter PCI E S45 Controller PCI to PCI bridge EN4054 4 port 10Gb Ethernet Adapter Added Settings Bus 312 515 317 a Figure 9 27 Using the FSM to assign the USB port to an existing virtual server profile HMC managed compute node When you are creating a partition of any type with the HMC by using the wizard select the PCl to PCl bridge device under the Physical I O Adapters option as shown in Figure 9 28 on page 465 IBM Flex System p270 Compute Node Planning and Implementation Guide Create Partition Partition Profile Processors Virtual Adapters Optional Settings Pr
491. roperties or profile properties after you complete this wizard System name Server 954 24 SHLO7 7826 Partition ID 2 Partition name full_sys_par Partition environment ALX or Linux Profile name new profile Boot mode NORMAL Using all resources Finish cancer Figure 8 88 Profile summary window when creating full system partition with HMC Chapter 8 Virtualization 435 436 IBM Flex System p270 Compute Node Planning and Implementation Guide Operating system installation methods In this chapter we describe the methods that are available to install supported operating systems on the IBM Flex System p270 Compute Node This chapter includes the following topics 9 1 Comparison of methods on page 438 9 2 Accessing System Management Services on page 438 9 3 Installios installation of the VIOS on page 440 9 4 Network Installation Management method on page 446 9 5 Optical media installation on page 462 9 6 TFTP network installation for Linux on page 478 9 7 Cloning methods on page 487 YYYY YV YV Yy We describe how to install each of the operating systems in subsequent chapters Copyright IBM Corp 2013 All rights reserved 437 9 1 Comparison of methods Installation method compatibility among operating systems is shown in Table 9 1 Table 9 1 Installation methods Compatibility among operating systems and management appliance Inst
492. rprise Server 11 ppc64 Kernel 2 6 27 19 5 ppc64 console Slesll e4kc login Figure 12 53 SLES11 Login screen The basic SLES installation is complete You can choose to install more RPMs from the IBM Service and Productivity Tool web page Chapter 12 Installing Linux 599 600 IBM Flex System p270 Compute Node Planning and Implementation Guide Abbreviations and acronyms AAS AC ACL AFP AFT ALB AME AMM AMS ASHRAE ASIC ASMI BBI BOOTP BOS BRD BTO CD CD ROM CEE CFM CLI CMM CN CNA Advanced Administrative System alternating current access control list Advanced Function Printing adapter fault tolerance adaptive load balancing Advanced Memory Expansion Advanced Management Module Active Memory Sharing American Society of Heating Refrigerating and Air Conditioning Engineers application specific integrated circuit Advanced System Management Interface browser based interface boot protocol Base Operating System board build to order compact disk compact disc read only memory Converged Enhanced Ethernet cubic feet per minute command line interface Chassis Management Module Congestion Notification Converged Network Converged Network Adapter Copyright IBM Corp 2013 All rights reserved CPU CPW CSS CTO DC DCB DCM DEVD DHCP DIMM DLPAR DNS DPS DRC DRV DSA DVD ECC EMC ESA ESB ETE ETS FC FCAL FCF FCID FCOE FCP FDR centra
493. rs Status 7 Ready Service and Support Manager is actively monitoring for Common Tasks serviceable problems and Electronic Service Agent is configured to automatically transmit problems inventory and Manage support files performance measurement data to IBM Send test problem P Dynamic System Analysis DSA status verification in Test connection to IBM l progress Service and Support Manager is currently verifying Verify DSA status the status of the DSA collectors Lt Roniing Powar Sistem Repais Setup and Configuration Manage settings Manage your system contacts Getting Started with Electronic Service Agent Figure 7 82 Ready status for Service and Support Manager Chapter 7 Power node management 263 264 Testing the connection to IBM support A further test of connectivity can now be performed from the Service and Support Manager page click Test connection to IBM under Common Tasks A confirmation question is displayed as shown in Figure 7 83 Test connection to IBM A test will be performed to ensure that the Electronic Service Agent tool can connect to IBM support The results of the test will appear in the Event Log Figure 7 83 Testing connection to IBM support Check the event log by clicking Home gt Plug ins Flex System Manager gt Event Log When the event log is shown enter Electronic in the search field and click Search The search results return a log entry similar to the ex
494. rs are abstracted into virtual processors that are available to partitions The meaning of the term physical processor here is a processor core For example in a six core server there are six physical processors 8 3 2 Virtual I O adapters The POWER Hypervisor provides the following types of virtual I O adapters as described in the following sections gt Virtual Ethernet gt Virtual SCSI on page 344 gt Virtual Fibre Channel on page 344 gt Virtual serial adapters TTY console on page 346 Virtual I O adapters are defined by system administrators during logical partition definition Configuration information for the adapters is presented to the partition operating system Virtual Ethernet The POWER Hypervisor provides an IEEE 802 1Q VLAN style virtual Ethernet switch that allows partitions on the same server to use fast and secure communication without any need for a physical connection Virtual Ethernet support starts with AIX 5L V5 3 or the appropriate level of Linux supporting virtual Ethernet devices The virtual Ethernet is part of the base system configuration Virtual Ethernet has the following major features gt Virtual Ethernet adapters can be used for IPv4 and IPv6 communication and can transmit packets up to 65 408 bytes in size Therefore the maximum transmission unit MTU for the corresponding interface can be up to 65 394 65 408 minus 14 for the header in the non virtual local a
495. rsion is AlX V6 1 with the 6100 08 Technology Level with Service Pack 3 or later For more information about AIX V6 1 maintenance and support see the Fix Central website at http www ibm com eserver support fixes fixcentral main pseries aix AIX V7 1 The supported version is AIX V7 1 with the 7100 02 Technology Level with Service Pack 3 For more information about AIX V7 1 maintenance and support see the Fix Central website at http www ibm com eserver support fixes fixcentral main pseries aix IBM i The supported versions are gt IBM 6 1 with i 6 1 1 K machine code or later gt IBM i 7 1 TR6 or later Virtual I O Server is required to install IBM i in a Virtual Server on IBM Flex System p270 Compute Node because all I O must be virtualized Linux Linux is an open source operating system that runs on numerous platforms from embedded systems to mainframe computers It provides a UNIX like implementation in many computer architectures At the time of this writing the following versions of Linux on POWER7 processor technology based servers are supported gt SUSE Linux Enterprise Server 11 Service Pack 2 for POWER or later with current maintenance updates available from Novell to enable all planned functionality gt Red Hat Enterprise Linux 6 4 for POWER or later Linux operating system licenses are ordered separately from the hardware You can obtain Linux operating system licenses from IBM to be included
496. rt the CMM Typically only needed when experiencing problems 3 d Reset to Defaults Sets all current configuration settings back to default values E File Management View or delete Files in the CMM local storage file system Figure 7 14 Management Module management options The following menu options are of most interest for managing compute nodes and are described in this section gt System Status gt Chassis Management Compute Nodes gt Chassis Management Component IP Configuration These options are described in 7 7 3 Power compute node management on page 209 The Service and Support tab information is described in 7 7 4 Service and Support option on page 220 7 7 2 Connecting a Power compute node to the CMM During a chassis power up or when the compute node is first inserted into the chassis the CMM automatically performs a discovery process that detects and collects information about the new system No other action is required to connect or register the new compute node to the CMM This process is indicated on a newly inserted compute node by a fast green flash of the power indicator LED When the discovery process is complete the LED changes to a slow flash and actions can be performed on the compute node The discovery process for Power based compute nodes can take several minutes to complete 208 IBM Flex System p270 Compute Node Planning and Implementation Guide During the discovery process
497. rver can be opened from the FSM This virtual terminal console can be used for initial operating system installation network configuration and debug or general access if wanted for VIOS AIX and PowerLinux virtual servers IBM i uses 5250 emulation for its system console For more information see 11 3 Configuring an IBM i console connection on page 512 In any view of the FSM that shows a Power compute node virtual server object a virtual terminal console can be opened by right clicking the option In the example the starting point is the Manage Power Systems Resources view Flex Note When a Power Systems compute node is managed by an FSM SOL must be disabled for the node at the CMM to allow access to the virtual terminal for the first virtual server of the node For more information about disabling SOL see Disabling SOL for chassis on page 218 or Disabling SOL for an individual compute node on page 219 Chapter 7 Power node management 2483 To open a virtual terminal console complete the following steps 1 Click the wanted server under Hosts in the navigation area Right click the virtual server in the work area table Select Operations gt Console Window Open Terminal Console as shown in Figure 7 57 Server 7954 245 5M107782B Performance Summary Search the table Search Select Name gt Part Id Access State Reference a El O Elitar Related Resources
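On an HMC-managed system, an equivalent console can also be opened from the command line. This is a sketch; the managed system and virtual server names are the ones used in this book's examples, and the availability of these commands through the FSM smcli interface can vary.

   vtmenu                                                 # menu-driven selection of a partition console
   mkvterm -m Server-7954-24X-SN107782B -p itsoVIOS6A     # open a console to a specific virtual server
   rmvterm -m Server-7954-24X-SN107782B -p itsoVIOS6A     # force-close a console session that was left open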
498. ry Support Bottom Copying temporary files from installation device Figure 11 32 Installation of Licensed Programs progress window No response is required to these status displays until a change of media is required which is shown in a break message an example of which is shown in Figure 11 33 Display Messages System E1277E3B Queue QSYSOPR Program DSPMSG Library QSYS Library Severity 95 Delivery BREAK Type reply if required press Enter Load the next volume in optical device OPTO1 X G Reply 4G Figure 11 33 Media load break message 532 IBM Flex System p270 Compute Node Planning and Implementation Guide 8 After all of the selected LICPGMs are installed the system prompts you to accept the license agreements as shown Figure 11 34 Software Agreement System E1277E3B Licensed program 5770DG1 Licensed program option BASE Release 2 6 se 2 VZRIMO International Program License Agreement Part 1 General Terms BY DOWNLOADING INSTALLING COPYING ACCESSING CLICKING ON AN ACCEPT BUTTON OR OTHERWISE USING THE PROGRAM LICENSEE AGREES TO THE TERMS OF THIS AGREEMENT IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF LICENSEE YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND LICENSEE TO THESE TERMS IF YOU DO NOT AGREE TO THESE TERMS DO NOT DOWNLOAD INSTALL COPY ACCESS CLICK ON AN ACCEPT BUTTON OR USE THE PROGR
499. s 1 Click the Chassis Management gt Compute Nodes menu bar option and then click the wanted compute node as shown in Figure 7 24 a Compute Nodes Pi If specifying a power action for multiple nodes please be aware that in case of an error you will only be informed about the nodes that failed executing the action Successful nodes are ignored Different node types may take different amounts of time to complete the power action so in some cases the power status change will not be immediately reflected on the page In this case the user may have to perform a refresh F5 one or more times to see the power status change reflected on the page Power and Restart Actions Settings Columns Device Mame Device Type Health Status Power Bay Bay Type Machine Typ hodeO1 x240 Compute Mode E Normal On Mode of ara hodeO2 x240 Compute Mode E Normal On 2 Mode oF oP ACT hodeOs x240 Compute Mode E Normal On d Mode of ara nodel4 x240 Compute Node E Normal On 4 Mode oF ah ACT Figure 7 24 Selecting wanted compute node from Compute Nodes view 2 Click the General tab 3 Clear the Serial Over LAN check box as shown in Figure 7 25 Mode name noded6 p270 Auto power on mode Restore previous state Power On Delay seconds D Mode Bay data Bay data status Unsupported Management Network Status Down Internal Mgmt Port MAC 344085 A 02 4F Powered On Time 2 days 14 hours 3 min 55 secs Number of OS Boots D E
500. s Managed System and Power Server 7954 24 SN 1077828 YES View Levels Select Target Name Figure 7 127 Change LIC wizard confirmation window 13 Figure 7 128 shows a final confirmation to continue with a disruptive update or the option to cancel Click OK to continue i Confirm the action Server 7954 24X 5N107782B Click OK to start the disruptive operation otherwise click Cancel Hac onga HGR cance Figure 7 128 Disruptive operation confirmation 14 The update process copies the profile backup files as shown in Figure 7 129 Click OK to continue i Information Server 7954 24xX SN107787B Current profile data backup files have been copied 7954 240 107 7928 Jvarfhscforofiles 107 7826 fbackupFile_FirmwareUpdateQlary s var iscforofiles 107782B directory fbackupFile_FirmwareUpdateD LAF 3 dir HSCFO226 Figure 7 129 Profile data backupO Chapter 7 Power node management 297 15 Figure 7 130 Figure 7 131 and Figure 7 132 show various progress messages that are displayed during the update process Change Licensed Internal Code Wizard Progress Server 7954 245 5N107782B Function duration time Elapsed time Select Object Name CO 7954 24u 1077828 Installing updates Gord oo OO 00334 o Managed System Primary Writing update files OK Details Figure 7 130 LIC update progress window Change Licensed Internal Code Wizar
501. s Connect to the Internet directly Summary Connect to the Internet through an HTTP proxy serwer Proxy server host name Port number Proxy server requires authentication User name Password Test Internet Connection Figure 7 77 Getting started with ESA Connection page 5 The Connection page allows the setup and testing of access to the Internet When the configuration process is complete click Test Internet Connection An unsuccessful test results in a message that is shown in Figure 7 78 3 ATKUPDS44E An error occurred while testing for connectivity to the Internet Ensure that the IBM Flex System Manager Server system has connectivity to the Internet through ports 80 and 443 then try the operation again If the problem persists try the operation again later Figure 7 78 Unsuccessful Internet test access error message 260 IBM Flex System p270 Compute Node Planning and Implementation Guide A successful connection test displays the message that is shown in Figure 7 79 P i ATKUPD1491 Internet connection test completed successfully Figure 7 79 Successful Internet test access message 6 When the test returns successfully click Next to continue to the Authorized IBM IDs window as shown in Figure 7 80 Authorize IBM IDs Provide an IBM ID to be associated with information sent by Electronic Service Agent af Welcome Your Pr company contact Providing your IBM I
502. s GX interface gt Memory controllers gt I/O links Figure 4 9 POWER7+ processor architecture. POWER7+ processor overview The POWER7+ processor chip is fabricated with IBM 32 nm silicon-on-insulator (SOI) technology that uses copper interconnects, and it implements an on-chip L3 cache by using eDRAM. The POWER7+ processor chip is 567 mm² and is built by using 2,100,000,000 components (transistors). Eight processor cores are on the chip, each with 12 execution units, 256 KB of L2 cache per core, and access to up to 80 MB of shared on-chip L3 cache. For memory access, the POWER7+ processor includes a double data rate 3 (DDR3) memory controller with four memory channels. To scale effectively, the POWER7+ processor uses a combination of local and global high-bandwidth SMP links.
503. s as shown in Figure 12 49 which shows the progress of the installation Preparation ww Welcorne af System Analysis y Time Zone Installation af Server Scenario af Installation Summary b Perform Installation Configuration Check Installation Hostname Network Customer Center Online Update Service CleanUp Release Notes Hardware Configuration Perform Installation pa sn e Tine Total 229GB 1138 SUSE Linux Enterprise Server 11 11 0 Mediurn 1 2 259 GB 1138 Actions performed seming type of partition jdewsdal to 41 Creating partition dew sdaz Setting type of partition dewsdaz to 82 Creating partition dewsdaz Formatting partition dewsda2 2 01 GB with swap Formatting partition dewsda3 17 96 GB with ext3 Mounting dewsda2 to swap Adding entry for mount point swap to jete fstab Mounting dewsdaz to Adding entry for mount point to Jeta fstab Installing yast2 country data 2 17 32 1 3 ppc64 rprn installed size 163 00 kB Installing util lirnusx lang 2 14 1 11 15 ppc64 rprn installed size 3 57 MB Installing util linux lang 2 14 1 11 15 ppc64 rprmn installed size 3 57 MB 66 Installing Packages Remaining 2 25 GB F K Figure 12 49 Perform Installation window Chapter 12 Installing Linux 595 5 The final phase of the basic installation process is shown in Figure 12 50 AUT LT Le O Finishing Basic Installation
504. for their VN_Ports. The FC-BB-5 standard allows these MAC addresses to be assigned by the FCF during FLOGI or by the ENode. MAC addresses that are assigned by the FCFs are called Fabric Provided MAC Addresses (FPMAs); MAC addresses that are assigned by the end devices are called Server Provided MAC Addresses (SPMAs). The CNAs and the FCFs today implement only FPMAs; hence, the address is provided by the CN4093 or, if the EN4093 is used, by the upstream FCF.
FCFs, fabric mode, and N_Port ID Virtualization
As described previously, an FCF is the FC switching element in an FCoE network. One of the characteristics of an FC switching element is that it joins the FC fabric as a domain. This gives the CN4093 the capability to switch data between a compute node that uses FCoE and an external storage controller that is attached to the external FC SAN fabric. It also provides connectivity to external FCoE devices, but does not support E_Port attachment to switches. In a mixed FC/FCoE fabric, the FCF also often acts as the conversion device between FC and FCoE.
Each FCF that operates in full fabric mode (or switch mode) as an FC switch joins the existing FC fabric as a domain. If the CN4093 is not used in this mode and it becomes a gateway device to an external FC or FCoE SAN, N_Port ID Virtualization (NPIV) is used. Connections involving NPIV equally apply to FCoE as they do in FC connectivity.
6.2 Configuring an FCoE network with the CN4093
In this section we describ
505. -s Server-7954-24X*SN1077E3B -S 255.255.254.0 -p itsoVIOS6A -r DefaultProfile -i 9.42.171.85 -d /home/USERID/dvdimage.v1.iso -g 9.3.170.1 -P auto -D auto -A eth1 -Z
Retrieving information for available network adapters. This will take several minutes.
Figure 9-7 installios CLI command install
The steps are similar to the previous method; however, the selection of a network adapter on the virtual server is not required. The process configures each available adapter in turn and performs a test ping to the FSM until one is found that works. When a working adapter is found, the installation proceeds and the output to the window is identical to the interactive method.
9.4 Network Installation Management method
The Network Installation Management (NIM) method is used most often in a Power Systems environment. You can use NIM to install your servers, back up, restore, and upgrade software, and to perform maintenance tasks. For more information about NIM, see NIM from A to Z in AIX 5L, SG24-7296, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247296.html
To perform a NIM installation, complete the following steps:
1. Set up a Domain Name Server (DNS) or include the machine you are about to install in the /etc/hosts file of your AIX NIM server.
2. Create the machine in the NIM environment by running the following command: smit nim_mkmac
3. In the ne
506. have been copied for 7954-24X*1077E3B to the /var/hsc/profiles/1077E3B directory (backupFile_FirmwareUpdate...).
The update is disruptive to servers (concurrent install only, deferred disruptive activate). This will force an auto-accept of the current level. Disruptive install and activate: auto-accept the currently running firmware image as part of the operation. This option is only applicable to activated firmware levels. Installations with a deferred activation concurrency will always perform an auto-accept.
The System Target Checks table lists System Name, LIC Type, Readiness, Applied Level, Committed Level, Platform IPL Level, and Next IPL. In this example, the managed system 7954-24X*1077E3B (Managed System, Primary) shows a Readiness of Passed.
Figure 7-70 Target Check Results window
Continue the update process by clicking Next.
Figure 7-71 shows the Summary window, which lists the update that is going to be applied to an object or objects. Multiple servers (objects) can be selected from the Host content window. Click Finish to complete the wizard and open the job scheduler. When the job scheduler is started, you can select to display the update job.
Summary: The updates will now be installed on the selected systems. Verify the installation settings below. Selected updates: Name, Version, Severity, Product, Category, Downloa
507. is 9.42.170.1. Subnet mask IP address is 255.255.254.0. Getting adapter location codes... pci@800000020000219/ethernet@0 ping successful. Network booting install adapter.
Figure 9-5 Interactive installios: powering up the virtual server and test ping
After the activation and IP configuration step completes, the window displays the current LED code of the installation process, as shown in Figure 9-6. When the process is complete, the last message should indicate that the Base Operating System (BOS) installation is 100% complete.
Mon Jul 29 11:08:07 2013 /var/log/nimol.log
Mon Jul 29 11:08:21 2013 nimol installios: led code 0612, info: Accessing remote files; unconfiguring network boot device /var/log/nimol.log
Mon Jul 29 11:20:31 2013 nimol installios: led code 0c56, info: Running user-defined customization
2013-07-29T11:20:33.193670-04:00 ioserver nimol info: BOS install 100% complete
Figure 9-6 Real-time display of installation log
installios tip: If the installios command ends early or does not complete, run the installios -u command to completely unconfigure and clean up the previous attempt.
9.3.2 CLI installation
A single command can be used with the same parameters that were entered, as shown in Figure 9-7.
USERID@itsoFSM1:~> installios
508. operations productivity with an easy self-service user interface. It is open and extensible for easy customization to help tailor it to unique business environments. The ability to standardize virtual machines and images reduces management costs and accelerates responsiveness to changing business needs.
Extensive virtualization engine support includes the following hypervisors:
- PowerVM
- VMware vSphere 5
- KVM
- Microsoft Hyper-V
The latest release of PureFlex (announced October 2013) allows the selection of SmartCloud Entry 3.2, which now supports Microsoft Hyper-V and Linux KVM that uses OpenStack. The product also allows the use of OpenStack APIs.
Also included is the IBM Image Construction and Composition Tool (ICCT). ICCT on SmartCloud is a web-based application that simplifies and automates virtual machine image creation. ICCT is provided as an image that can be provisioned on SmartCloud. You can simplify the creation and management of system images with the following capabilities:
- Create golden master images and software appliances by using corporate standard operating systems
- Convert images from physical systems or between various x86 hypervisors
- Reliably track images to ensure compliance and minimize security risks
- Optimize resources, which reduces the number of virtualized images and the storage that is required for them
- Reduce time to value for ne
509. required for the node. There are many options in this window, but you do not need to set them all to set up the installation. Most importantly, set the correct gateway for the machine.
With your machine created in your NIM server, assign to it the resources for the installation. When you are installing a system from NIM, you must have other resources defined: at least one SPOT and one lpp_source, or one SPOT and one mksysb, which feature the following definitions:
mksysb: A system image backup that can be recovered on the same or another machine.
SPOT: A SPOT (Shared Product Object Tree) is what your system uses from the NIM server at boot time. It contains all boot elements for the NIM client machine. SPOTs can be created from a mksysb or from installation media.
lpp_source: An lpp_source is the place where NIM keeps the packages for installation. lpp_sources can be created from installation media and fix packs.
Creating installation resources
The steps for creating the installation resources are not described here. For more information, see NIM from A to Z in AIX 5L, SG24-7296. The smit fast path for creating resources is nim_mkres.
5. Assign the installation resources to the machine. For this example, we are performing an RTE installation, so we use a SPOT and an lpp_source for the installation. Run the following command: smit nim_mac_res
6. Select Allocate Network
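For reference, the SMIT panels that are used in these steps drive standard NIM commands. The following sketch shows roughly equivalent commands for defining the client and allocating the resources; the object names (p270_node1, spot_71, lpp_71) and attribute values are hypothetical placeholders rather than the values used in this chapter, and details can vary by AIX level.

# Define the p270 virtual server as a stand-alone NIM client (hypothetical names)
nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
    -a if1="find_net p270_node1 0" p270_node1

# Allocate a SPOT and an lpp_source to the client for an RTE installation
nim -o allocate -a spot=spot_71 -a lpp_source=lpp_71 p270_node1

# Enable the BOS installation so the client can be network-booted from SMS
nim -o bos_inst -a source=rte -a accept_licenses=yes -a boot_client=no p270_node1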
510. reserved.
Select Task: Interpartition Logical LAN, loc=U7954.24X.1077E3B-V6-C4-T1
1. Information
2. Normal Mode Boot
3. Service Mode Boot
Navigation keys: M = return to Main Menu, ESC key = return to previous screen
Type menu item number and press Enter or select Navigation key.
Figure 9-48 Media selection
9. When you select your optical drive, you have three options. Select option 2, Normal Mode Boot, then select option 1, Yes, in the next window. The boot process for your CD displays, and you can continue with the operating system installation process as normal.
9.6 TFTP network installation for Linux
We can use the standard tools of any Linux distribution to manage a network installation. This method is useful when an optical drive is not available or if a NIM server is not installed and configured. Any Linux x86-based computer can be used as the TFTP server, and virtually any Linux distribution can be easily configured to perform this task. In this section, we describe how to implement this function.
First, you must set up the following standard Linux services on the installation server (a sample dhcpd configuration follows this list):
- tftpd
- dhcpd (used only to allow netboot by using bootpd to a specific MAC address)
- NFS server
This section includes the following topics:
- 9.6.1, SUSE Linux Enterprise Server 11
- 9.6.2, Red Hat Enterprise Linux 6 on page 485
9.6
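The dhcpd service only needs to answer the netboot request of the specific compute node, so a host entry keyed on the adapter MAC address is usually enough. The following minimal dhcpd.conf sketch illustrates this; the subnet, addresses, MAC address, and boot file name are hypothetical placeholders, and option names can differ slightly between distributions.

# /etc/dhcpd.conf -- minimal netboot entry for one compute node (hypothetical values)
subnet 10.1.2.0 netmask 255.255.255.0 {
}

host p270-install {
    hardware ethernet 00:11:22:33:44:55;   # MAC address of the install adapter
    fixed-address 10.1.2.60;               # IP address handed to the client
    next-server 10.1.2.51;                 # TFTP/NFS installation server
    filename "yaboot";                     # network boot loader served by tftpd
}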
511. sets the managed system to activate this logical partition automatically the next time the managed system is powered on. If this option is not selected, the partition profile sets the managed system so that you must activate this logical partition manually the next time the managed system is powered on.
- Enable redundant error path reporting: Select this option to enable the reporting of server common hardware errors from this logical partition to the HMC. The service processor is the primary path for reporting server common hardware errors to the HMC. By selecting this option, you can set up redundant error-reporting paths in addition to the error-reporting path that is provided by the service processor. Server common hardware errors include errors in processors, memory, power subsystems, the service processor, the system unit, vital product data (VPD), nonvolatile random access memory (NVRAM), I/O unit bus transport (RIO and PCI), clustering hardware, and switch hardware. Server common hardware errors do not include errors in I/O processors (IOPs), I/O adapters (IOAs), or I/O device hardware. If this option is selected, this logical partition reports server common hardware errors and partition hardware errors to the HMC. If this option is not selected, this logical partition reports only partition hardware errors to the HMC.
This option is available only if the serv
512. they are installed.
11. Installing IBM i
This chapter describes the installation of the IBM i operating system on the p270 Compute Node by using virtual media. IBM i 7.1 TR6 is used. For more information about full operating system support, see 5.1.2, Software planning on page 132.
This chapter includes the following topics:
- 11.1, Planning the installation on page 498
- 11.2, Creating an IBM i client virtual server on page 501
- 11.3, Configuring an IBM i console connection on page 512
- 11.4, Installing the IBM i operating system on page 513
- 11.5, Installing Licensed Programs on page 528
- 11.6, IPL and Initialize System on page 536
- 11.7, Installing Program Temporary Fix packages on page 537
- 11.8, Installing software license keys on page 545
- 11.9, Basic TCP/IP configuration on page 547
11.1 Planning the installation
Because an IBM Flex System Enterprise Chassis by default is not shipped with any optical devices, we describe the installation via virtual media that is imported to a VIOS virtual media library (a VIOS command-line sketch for creating such a library follows this introduction). The client partition can use this library for installation purposes, so no other equipment is required. We also assume that there is a compatible storage device serving disk to the VIOS partitions, which can then be virtualized to the
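As background for this planning discussion, the following VIOS commands sketch one way to build the virtual media library and load the IBM i installation image into it. The repository size, image file name, and device names are hypothetical placeholders; run the commands from the padmin shell and adjust them to your environment.

$ mkrep -sp rootvg -size 30G                                      # create the virtual media repository
$ mkvopt -name ibmi71_base -file /home/padmin/I_BASE_01.iso -ro   # import the IBM i installation image
$ mkvdev -fbo -vadapter vhost0                                    # create a file-backed optical device (vtopt0)
$ loadopt -disk ibmi71_base -vtd vtopt0                           # load the image into the virtual optical device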
513. server ID 1. Environment: VIOS (other choices are AIX/Linux and IBM i). Virtual trusted platform module (VTPM). Warning: The VTPM key is set to the default key.
Figure 8-9 Setting the VIOS virtual server name and ID
3. Enter the following information: the virtual server name (we used itsoVIOS6A) and the server ID (we gave our VIOS an ID of 1). Also, specify the Environment option to identify this environment as a VIOS.
4. Click Next.
Memory and processor settings
The next task is to choose the amount of memory for the VIOS virtual server. Starting with Figure 8-10, which you reach by performing the steps in Creating the virtual server on page 358, complete the following steps.
Figure 8-10 Specify the memory information for the VIOS virtual server (Memory mode: Dedicated Memory; Total system memory: 32.00 GB; Memory available: 30.63 GB; Assigned memory in GB)
1. Change the value to reflect the desired amount of memory in gigabytes. Decimal fractions can be specified to assign memory in megabyte increments. This memory is the amount of memory the hypervisor attempts to assign when the VIOS is activated. We assign the VIOS 8 GB of memory.
Minimum and maximum values: You cannot specify minimum or maximum settings
514. settings for the specified LAN interface address Select Allow Incoming to allow access to incoming network traffic from all hosts or select Allow Incoming by IP Address to allow access by incoming network traffic from hosts that are specified by an IP address and network mask Name Services tab You use the Name Services tab to specify DNS for configuring the console network settings as shown in Figure 7 96 DNS is a distributed database system for managing host names and their associated IP addresses With DNS users can use names to locate a host rather than using the IP address Customize Network Settings Identification LAN Adapters Name Services Routing DNS Configuration Use DHCP DNS Settings DNS enabled DNS Server Search Order Po LY Domain Suffix Search Order PL Figure 7 96 Name Services tab Chapter 7 Power node management 277 Routing tab In the Routing tab you specify routing information for configuring the console network settings such as add delete or change routing entries and specify routing options for the HMC as shown in Figure 7 97 ae Customize Network Settings FL Identification LAN Adapters Mame Services Routing Routing Information Select Type Destination Gateway Subnet Mask Interface NeW Default Gateway Information Gateway address Gateway device any LJ Enable routed Figure 7 97 Routing tab Routing Information The routing informat
515. flash memory device that is inserted into the HMC, an external FTP site, or the HMC hard disk drive.
The following example describes the use of an external FTP server for updating the current Licensed Internal Code, more commonly known as system firmware, on a Power compute node.
Terms: The terms system firmware, platform firmware, Licensed Internal Code (LIC), and Machine Code are used interchangeably in this section.
Firmware naming convention
A name such as 01AFXXX_YYY_ZZZ includes the following components:
- XXX is the stream release level
- YYY is the service pack level
- ZZZ is the last disruptive service pack level
In this example, the system firmware 01AF773_016 is described as release level 773, service pack 016.
Acquiring the system firmware update
The firmware update for a Power compute node can be downloaded from IBM Fix Central. This package consists of an RPM file and an XML file, as shown in Figure 7-117.
ls
01AF773_016_016.rpm  01AF773_016_016.xml
Figure 7-117 Power compute node system firmware update files
HMC and IBM Fix Central: When a Power compute node firmware update is requested from Fix Central, the option that includes the packaging for IBM Systems Director should be chosen to include the .xml file that is required by the HMC. Other files are included, but only the .rpm and .xml files are needed. The file that is obtained from IBM Fix Centr
516. show the progress and completion of the Power On task Click Close to return to the CMM interface Powering on server Please wait Figure 7 138 CMM compute node power on progress indicator Nodes The selected power action has been successfully submitted to the node s for a execution The status of the action is not known by the CMM until the node s sends an acknowledgement that will be logged in the event log Close Figure 7 139 CMM compute node power on completion message 304 IBM Flex System p270 Compute Node Planning and Implementation Guide ASMI method Complete the following steps to use the ASMI method 1 Access the ASMI web page by using the https protocol from a browser session The ASMI IP address was assigned from the CMM during the initial setup and configuration of the chassis The address of the all nodes can be found by using the CMM as shown in Figure 7 140 IBM Chassis Management Module System Status Multi Chassis Monitor Events v USERID Settings Log Out Help IS Service and Support Chassis Management Mgt Module Management Search Chassis Properties and settings for the overall chassis Chassis Change chassis name System Information Compute Nodes Properties and settings for compute nodes in the chassis Storage Nodes Properties and settings for storage nodes in the chassis Chassis Graphical View Chassis Table View Active Events 1 0 Modu
517. physical port, but consider the management of bandwidth of each protocol.
Priority-based Flow Control (PFC), which is part of the CEE DCBX 802.1Qbb standard, is enabled when cee enable is set on a switch. PFC works at a port level and can have values assigned at a port level or at the global switch level. PFC pauses traffic at a port level based on 802.1p priority values in the VLAN tag. PFC is enabled on priority value 3 by default, which ensures the lossless behavior that is vital for FCoE.
7. Power node management
The IBM Flex System Enterprise Chassis brings a whole new approach to management. This approach is based on a global management appliance, the IBM Flex System Manager (FSM), which you can use to view and manage functions for all of your Enterprise Chassis components. These components include the Chassis Management Module (CMM), I/O modules, compute nodes, and storage. The FSM is standard with IBM PureFlex System configurations that contain Power Systems compute nodes.
Traditional methods of managing Power-based servers, the Hardware Management Console (HMC) and Integrated Virtualization Manager (IVM), are now supported and are described in this chapter. The HMC and IVM management options are available in Build to Order (BTO) or Configure to Order (CTO) configurations. System management at the basic chassis level uses the CMM and the native switch managers on each I/O
518. design allows for more choices to configure the machine to match your needs. Table 4-7 on page 94 lists the available memory options for the p270 Power Systems compute node.
Table 4-7 Memory options (feature code, description, speed, form factor); entries include 8196 (2x 4 GB DDR3 DIMM, 1066 MHz, LP) and a 2x 32 GB DDR3 DIMM, 1066 MHz option.
DASD (local storage) option dependency on memory form factor: Because of the design of the on-cover storage connections, clients that seek to use SAS HDDs must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS HDDs are configured in the same system. However, SSDs and LP DIMMs can be used together. For more information, see 4.8, Storage on page 98.
There are 16 buffered DIMM slots on the p270, as shown in Figure 4-13: DIMM 1 (P1-C1) through DIMM 16 (P1-C16).
Figure 4-13 Memory DIMM topology
The following memory placement rules should be observed:
- Install DIMM fillers in unused DIMM slots to ensure proper cooling.
- Install DIMMs in pairs.
- Both DIMMs in a pair must be the same size, speed, type, and technology. You can mix compatible DIMMs from
519. assistance is needed.
For more information about the front panel and LEDs, see IBM Flex System p270 Compute Node Installation and Service Guide, which is available at this website:
http://www.ibm.com/support
4.2.2 Labeling
IBM Flex System offers several options for labeling your server inventory to track your machines. It is important not to put stickers on the front of the server across the bezel's grating because this inhibits proper airflow to the machine. We provide the following labeling features:
- Vital Product Data (VPD) sticker: On the front bezel of the server is a vital product data sticker that lists the following information about the machine, as shown in Figure 4-5: machine type, model, and serial number.
Figure 4-5 Vital Product Data sticker
- Node bay labeling on the IBM Flex System Enterprise Chassis: Each bay of the IBM Flex System Enterprise Chassis has space for a label to be affixed to identify or provide information about each Power Systems compute node, as shown in Figure 4-6.
Figure 4-6 Chassis bay labeling
- Pull-out labeling: Each Power Systems compute node has two pull-out tabs that can also accommodate labeling for the server. The benefit of using these tabs is that they are affixed to the node rather than the chassis, as shown in Figure 4-7.
4.3
520. Also included are external SAN B24 switches and Top-of-Rack (TOR) G8264 Ethernet switches. The TOR switches enable the data networks to allow other chassis to be configured into this solution (not shown).
Figure 2-7 PureFlex Enterprise with External V7000 and FCoE (legend: Access Points, 40Gb ISL, Midplane Connections, Chassis Boundary, Management 1GbE, Data 10GbE, Data 40GbE, Data 8Gb FC)
There is a management network included in this configuration that is composed of a 1 GbE G8052 network switch. The Access Points within the PureFlex chassis provide connections from the client's network into the internal networking infrastructure of the PureFlex system and connections into the management network.
2.5.2 Chassis
Table 2-12 lists the major components of the IBM Flex System Enterprise Chassis, including the switches.
Feature codes: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
Table 2-12 Components of the chassis and switches (AAS feature code, XCC feature code, Description)
E
521. sors 1 0 Processing Mode i 2 Shared 216 available3 processors Dedicated 21 availab 5 rated processors Figure 8 67 IVM Create Partition Processors window Minimum and maximum values for IVM usage You cannot specify minimum or maximum settings while you are using the wizard The value that is specified here is the desired value Minimum and maximum values can be edited after the virtual server is created 416 IBM Flex System p270 Compute Node Planning and Implementation Guide 6 IVM creates two virtual Ethernet adapters by default for use by the LPAR Complete the following steps in the Ethernet window as shown in Figure 8 68 a From the adapter table select the virtual Ethernet that is presented by the VIOS to which each virtual Ethernet adapter on the new LPAR should be mapped This example maps the LPAR adapter 1 to virtual Ethernet 1 ento Virtual Ethernet 1 entO was predefined to be a SEA which allows the LPAR to have external network connectivity More LPAR adapters can be created by clicking Create Adapter b Click Next to open the Storage Type window Create Partition Ethernet Name Ethernet Memory 2 S 2 Specify the desired virtual Ethernet for each of this partition s virtual Ethernet adapters If you do not wish tf configure an adapter then select a virtual Ethernet of none Virtual Ethernet Configuration olorage Optical Tape Summary 1 entO U78AE 001 WZSROZE P1 C1
522. ssis Up to two expansion units can also be in the Flex chassis each using four compute node bays External expansion units are also supported Figure 2 3 IBM Flex System V7000 Storage Node The IBM Flex System V7000 Storage Node consists of the following components disk and software options gt gt IBM Storwize V7000 Controller 4939 A49 SSDs 200 GB 2 5 inch 400 GB 2 5 inch 800 GB 2 5 inch HDDs 300 GB 2 5 inch 10K 300 GB 2 5 inch 15K 600 GB 2 5 inch 10K 800 GB 2 5 inch 10K 900 GB 2 5 inch 10K 1 TB 2 5 inch 7 2K 1 2 TB 2 5 inch 10K Expansion Unit 4939 A29 IBM Storwize V7000 Expansion Enclosure 24 disk slots Optional software IBM Storwize V7000 Remote Mirroring IBM Storwize V7000 External Virtualization IBM Storwize V7000 Real time Compression 30 IBM Flex System p270 Compute Node Planning and Implementation Guide 7226 Multi Media Enclosure The 7226 system as shown in Figure 2 4 is a rack mounted enclosure that can be added to any PureFlex Express configuration and features two drive bays that can hold one or two tape drives and up to four slim design DVD RAM drives These drives can be mixed in any combination of any available drive technology or electronic interface in a single 7226 Multimedia Storage Enclosure Figure 2 4 7226 Multi Media Enclosure The 7226 enclosure media devices offers support for SAS USB and Fibre Channel c
523. ssors Virtual Adapters Optional Settings Profile Summary I O Physical I O Detailed below are the physical 1 0 resources for the managed system Select which adapters from the list you would like included in the profile and then add the adapters to the profile as Desired or Required Click on an adapter to view more detailed adapter information Add as required Add as desired l e amp 28 Select Action Select Location Code U SA4E 001 W2ZSRO2E P1 R1 PCI E 545 Controller Required U SAE 001 W2Z5RO02E P1 T1 PCI to PCI bridge Required U7YSAE 001 W2ZS5SRO2E P1 C18 L1 EN4054 4 port 10Gb Ethernet Adapter Required U7SAE 001 WZSRO2E P1 C18 L2 EN4054 4 port 10Gb Ethernet Adapter Required Total Filtered 4 Figure 8 33 HMC I O assignment window updated Virtual adapters In this task the process is repeated for each virtual adapter to be defined on the VIOS but the characteristics differ from each adapter type The order in which the adapters are created does not matter However the Adapter ID determines the order that similar adapters are configured as devices The Virtual Adapters window as shown in Figure 8 34 on page 384 shows a summary each virtual adapter in tabular form and options to create more from the Actions drop down menu As each adapter is created the table is updated to show the new adapter and properties The maximum number of virtual adapters represents the total number of virtual ada
524. host connectivity and storage end points where required for resilience, and performing similar actions on an adjacent FCoE network to eliminate a CN4093 from being a point of failure in storage addressability. All interfaces that are to use FCoE must be in the same VLAN.
Figure 6-7 provides an example of a p460 compute node that is equipped with two CN4058 8-port 10Gb Converged Adapter cards that are running a converged network to two CN4093 10Gb Converged Scalable Switches that are installed in the Enterprise Chassis.
Figure 6-7 Dual VIOS environment in a dual-width compute node with CN4058 Converged Adapters (legend: IOM1/IOM2 CN4093, VIOS1/VIOS2 FCP and TCP paths over FCS0/FCS1/FCS4/FCS5 and ENT0/ENT1/ENT4/ENT5)
The diagram shows each VIOS having both ASICs off a CN4058 adapter. The diagram also shows switch resiliency; to provide adapter-level resiliency per VIOS, bifurcate the secondary ASIC off each CN4058 card to each VIOS. This example reduces the need for dedicated adapters for FC traffic or any use of FC-based I/O modules for this node.
In this example, each VIOS is segregating traffic protocols (TCP and FCP) to separate physical ports on the adapters. It is possible to converge both protocols on to each phy
525. st of ownership TCO gt Improve business responsiveness and operational speed by dynamically reallocating resources to applications as needed to better anticipate changing business needs gt Simplify IT infrastructure management by making workloads independent of hardware resources so that you can make business driven policies to deliver resources that are based on time cost and service level requirements gt Move running workloads between servers to maximize availability and avoid planned downtime 8 2 PowerVM The PowerVM platform is the family of technologies capabilities and offerings that deliver industry leading virtualization on the IBM Power Systems It is the umbrella branding term for Power Systems virtualization Logical Partitioning Micro Partitioning POWER Hypervisor Virtual I O Server Live Partition Mobility Workload Partitions and more As with Advanced Power Virtualization in the past PowerVM is a combination of hardware enablement and added value software The licensed features of each of the three separate editions of PowerVM are described in 8 2 1 PowerVM editions on page 336 PowerVM is a combination of hardware enablement and added value software When we talk about PowerVM we are talking about the features and technologies that are listed in Table 8 1 Table 8 1 PowerVM features and technologies Features and technologies Function provided by PowerVM Hypervisor Hardware platform 3
526. st of the installed system firmware levels and a list of actions that can be performed Chapter 7 Power node management 321 Although not required committing the current temporary image to the permanent location should be considered as a general firmware maintenance task UPDATE AND MANAGE FLASH 802810 The current permanent system firmware image is FW773 00 AF773 016 The current temporary system firmware image is FW773 00 AF773_ 019 The system is currently booted from the temporary firmware image Move cursor to selection then press Enter Validate and Update System Firmware Validate System Firmware Commit the Temporary Image F1 Help FIO Exit F3 Previous Menu Figure 7 162 Committing the temporary image to the permanent side 4 Use the down arrow key and select Commit the Temporary Image Press Enter to start the commit process Figure 7 163 show the commit process in progress UPDATE AND MANAGE FLASH 802830 The commit operation is in progress Please stand by F3 Cancel F10 Exit Figure 7 163 Commit operation in progress 322 IBM Flex System p270 Compute Node Planning and Implementation Guide 5 Figure 7 164 shows the completion of the commit process Press Enter to continue UPDATE AND MANAGE FLASH 802818 The commit operation was successful F3 Cancel F10 Exit Figure 7 164 Showing the commit operation is complete 6 Press F3 to exit to the Task Selection menu and select Update and Mana
527. st64 sles11
  label=sles11
  append="quiet usevnc=1 vncpassword=passw0rd install=nfs://10.1.2.51/temp/sles11"
Figure 9-52 yaboot.conf-xx:xx:xx:xx:xx:xx
7. Figure 9-53 shows an example of the /etc/exports file with the exported directory that contains the image of the SUSE Linux Enterprise Server 11 DVD.
/datil/sles11   *(rw,insecure,no_root_squash)
Figure 9-53 Exports NFS server configuration sample
8. On the installation server or virtual server, start the dhcpd and nfsd services (see the command sketch that follows this step sequence).
9. On the target virtual server, start netboot, as shown in Figure 9-54.
Version AF773_033
SMS 1.7 (c) Copyright IBM Corp. 2000, 2008 All rights reserved.
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
Type menu item number and press Enter or select Navigation key: 5
Figure 9-54 Select boot options
10. Select option 5, Select Boot Options. The window that is shown in Figure 9-55 opens.
Version AF773_033
SMS 1.7 (c) Copyright IBM Corp. 2000, 2008 All rights reserved.
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>
4. SAN Zoning Support
5. Management Module Boot List Synchronization
Navigation keys: M = return to Main Menu, ESC key = return to previous screen, X = eXit System Management Services
Type menu item number and press Enter or sel
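Returning to step 8 on the installation server: the services can be started and verified as shown in the following sketch. Service names are distribution-specific (SLES-style names are assumed here; Red Hat uses nfs instead of nfsserver), and the exported path is a placeholder, so adapt it to your environment.

# Start the DHCP, TFTP, and NFS services (SLES-style service names assumed)
service dhcpd start
service xinetd restart        # tftpd is commonly run from xinetd
service nfsserver start

# Verify that the installation directory is exported and reachable
exportfs -a
showmount -e localhost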
528. standard width compute node that can be installed in any chassis node bay and provides full management capabilities for up to eight chassis All functions and software are preinstalled and are initially configured with Quick Start wizards which integrates all components of the chassis nodes and I O modules The IBM Flex System Manager includes the following features gt Asingle pane of glass to manage multiple chassis and nodes Discovery of nodes in a managed chassis Integrated x86 and POWER servers storage and network management Virtualization management VMControl Upward integration to an existing Tivoli environment YY vV Yy IBM Flex System Manager is a hardware appliance with a specific hardware configuration and preinstalled software stack The appliance concept is similar to the Hardware Management Console in Power Systems environments However FSM expands upon the capabilities of these products Although based on a Intel compute node the hardware platform for FSM is not interchangeable with any other compute node A unique expansion card that is not available on other compute nodes allows the software stack to communicate on the private management network The FSM is available in two editions IBM Flex System Manager and IBM Flex System Manager Advanced 62 IBM Flex System p270 Compute Node Planning and Implementation Guide The IBM Flex System Manager base feature set offers the following functionality Support up t
529. stem Configuration System Status and Health Semice and Support gt Search ah Ww Discover OS and Update Firmware Views and Collect Inventoany View Hetwok Topology Pa Reference Code Problerr Collect inventon Figure 7 53 Starting inventory request of Power compute node 2 Click Inventory Collect Inventory to start the collection Nearly all processes in the FSM application are run as jobs and can be scheduled The scheduling can be immediate or in the future IBM Flex System p270 Compute Node Planning and Implementation Guide Figure 7 54 shows the job scheduler window that opens when the inventory collection process is started The start options are to run now default or schedule to be run at a later time For this is example the default of Run Now is acceptable Launch Job Schedule Notification Options Job name and schedule lob Hame Collect Inventory June 18 2013 9 51 54 AM EDT Choose when to run the job pon How schedule Figure 7 54 Starting inventory collection job Chapter 7 Power node management 241 242 3 Click OK at the bottom of the window When the job starts a notification is sent to the originating window with options to Display Properties or Close Message as shown in Figure 7 55 Manage Power Systems Resources m ATKCOR1OZI J The following job has been created and started successfully Collect Inventory August 6 201
530. stem Enterprise Chassis optimizing time to value The FSM provides a world class user experience with a truly single pane of glass approach for all chassis components Featuring an instant resource oriented view of the Enterprise Chassis and its components the FSM provides vital information for real time monitoring An increased focus on optimizing time to value is evident in the following features gt Setup wizards including initial setup wizards which provide intuitive and quick setup of the FSM gt A chassis map which provides multiple view overlays to track health firmware inventory and environmental metrics gt Configuration management for a repeatable setup of compute network and storage devices gt Remote presence applications for remote access to compute nodes with single sign on gt Quick search provides results as you type Beyond the physical world of inventory configuration and monitoring the IBM Flex System Manager enables the following virtualization and workload optimization for a new class of computing gt Resource usage Within the network fabric the FSM detects congestions notification policies and relocation of physical and virtual machines including storage and network configurations gt Resource pooling The FSM pools network switches with placement advisors that consider VM compatibility processor availability and energy gt Intelligent automation FSM has automated a
531. stem Manager Version Power Systems Resources Cammar E l Hosts g Server 7954 24 SN107782 Performance Summary Search the table Search SOE Sees Select Name Access State Detailed LA Operating Systems q Pouce Unite Related Resources None o Topology Ferspectives Create Group IBhA FSh Explorer Remove Add to Automation Hardware Information Inventory Capacity on Demand C Operations Configuration Plans Power OnOff Configuration Template Release blanagement Create Virtual Server Security Curent Configuration System Configuration Deployment History TFT hE T FS F F F F F F Edit Host Manage System Flans Manage System Frofile System Status and Health Semice and Support lt iil FESI iiil Properties SS _ a Serwer to Storage Mapp View Workload Manage Virtual Serer Availabili M4 Page lofi ji s Selected 1 Total 1 Filtered 1 Figure 8 8 System Configuration create a virtual server option Chapter 8 Virtualization 359 2 Right click the wanted server then click System Configuration Create Virtual Server to start the wizard as shown in Figure 8 8 The window that is shown in Figure 8 9 opens Create Virtual Server Server 7954 24 SN107782B8 Name c gt Name Peres This wizard helps you create and assign resources to a virtual server Et Host name Server 7954 24X SNLOF7 S26 thernet Physical 10 Virtual server mame itsoVIOS6aA Virtual
532. System V7000 Storage Node provide block and file storage for non-persistent and persistent VDI deployments.
- Flex System Manager and Virtual Desktop Management Servers easily and efficiently manage virtual desktops and VDI infrastructure.
- Converged FCoE offers clients superior networking performance.
- Windows 2012 and VMware View are available.
- New Reference Architectures for Citrix XenDesktop and VMware View are available.
For more information about these and other VDI offerings, see the IBM SmartCloud Desktop Infrastructure page at this website:
http://ibm.com/systems/virtualization/desktop-virtualization
2.4 IBM PureFlex System Express
The tables in this section represent the hardware, software, and services that make up an IBM PureFlex System Express offering. The following items are described:
- 2.4.1, Available Express configurations
- 2.4.2, Chassis on page 26
- 2.4.3, Compute nodes on page 27
- 2.4.4, IBM FSM on page 27
- 2.4.5, PureFlex Express storage requirements and options on page 28
- 2.4.6, Video, keyboard, mouse option on page 32
- 2.4.7, Rack cabinet on page 33
- 2.4.8, Available software for Power Systems compute nodes on page 33
- 2.4.9, Available software for x86-based compute nodes on page 34
To specify IBM PureFlex System Express in the IBM ordering system, specify the indicator feature code tha
533. system is ready for use. Click Finish to log in to the system.
Figure 12-52 Installation Completed window (the wizard sidebar shows the completed Preparation, Installation, and Configuration steps, including root Password, Hostname, Network, Customer Center, Online Update, Users, Clean Up, Release Notes, and Hardware Configuration, plus the Clone This System for AutoYaST option)
8. The virtual server reboots again, the VNC server is shut down, and we can connect to the command-line interface based system console through a virtual terminal by using SSH or Telnet, as shown in Figure 12-53.
Starting Name Service Cache Daemon                          done
Checking ipr microcode levels; completed ipr microcode updates   done
Starting ipr initialization daemon                          done
Starting irqbalance                                         done
Starting cupsd                                              done
Starting rtas_errd (platform error handling) daemon         done
Starting ipr dump daemon                                    done
Starting SSH daemon                                         done
Starting smartd                                             unused
Setting up (remotefs) network interfaces
Setting up service (remotefs) network                       done
Starting mail service (Postfix)                             done
Starting CRON daemon                                        done
Starting INET services (xinetd)                             done
Master Resource Control: runlevel 3 has been reached
Skipped services in runlevel 3: smbfs nfs smartd splash
Welcome to SUSE Linux Ente
534. structure at the top of each page that gives you easy access to most functions as shown in Figure 7 10 Most menu options display more functions when clicked IBM Chassis Management Module USERID Settings Log Out Help Ww v 4 w vv Search A System Status Multi Chassis Monitor Events Service and Support Chassis Management Mgt Module Management Search Sat 22 Jun 2013 00 24 Figure 7 10 CMM navigation menu The following navigation menu tabs are available gt System Status gt Multi Chassis Monitor 206 IBM Flex System p270 Compute Node Planning and Implementation Guide gt Events as shown in Figure 7 11 on page 207 Service and Support Chassis Management Mot Module Management S Event Log Full log history of all events Event Recipients 4dd and modify E Mail SMMP and Syslog recipients Figure 7 11 Event options gt Service and Support as shown in Figure 7 12 Service and Support Chassis Management Mot Module Management Search Problems Problems addressed by IBM Support if you have enabled service and support to report problems Settings Configure your system to monitor and report service events Advanced BIST connectivity status redundant status and service reset Obtain a compressed file of relevant service data i I Figure 7 12 Service support options gt Chassis Management as shown in Figure 7 13 pport Chassis Management Mat Module Management
535. structure system that is optimized for scalable cloud deployments with built in redundancy for highly reliable and resilient operation to support critical applications and cloud services For more information see 2 5 IBM PureFlex System Enterprise on page 35 2 2 Components A PureFlex System configuration features the following main components gt gt A preinstalled and configured IBM Flex System Enterprise Chassis Choice of compute nodes with IBM POWER7 POWER7 or Intel Xeon E5 2400 and E5 2600 processors IBM FSM that is preinstalled with management software and licenses for software activation IBM Flex System V7000 Storage Node or IBM Storwize V7000 external storage system The following hardware components are preinstalled in the IBM PureFlex System rack Express 25 U 42 U rack or no rack configured Enterprise 42 U rack only The following choices of software are available Operating system IBM AIX IBM i Microsoft Windows Red Hat Enterprise Linux or SUSE Linux Enterprise Server Virtualization software IBM PowerVM KVM VMware vSphere or Microsoft Hyper V SmartCloud Entry 3 2 for more information see 2 7 IBM SmartCloud Entry for Flex system on page 50 Complete pre integrated software and hardware Optional onsite services to get you up and running and provide skill transfer Chapter 2 IBM PureFlex System 17 The hardware differences between Express an
536. encapsulated into FCoE by the FCF and sent to the compute node. The CN4093 10Gb Converged Scalable Switch with this VLAN configured and using FCF provides an example of an FCoE gateway for bridging FCoE and FC networks. It is where compute node 8, which uses FCoE connectivity, can attach to external storage that is FC-attached to the CN4093.
6.2.2 Administration interface for the CN4093
The following methods can be used to access the CN4093 10Gb Converged Scalable Switch to configure, view, or make changes:
- A Telnet/SSH connection via the Chassis Management Module
- A Telnet/SSH connection over the network via data ports (if configured) or the external management port
- The Browser Based Interface (BBI) over the network
- A serial connection via the serial port (a mini-USB to RS232 cable is required)
The Telnet/SSH connection can access two types of CLI: a text menu-based CLI (IBMNOS) or one that is based on the International Standard CLI (ISCLI). In this section, we use the ISCLI to display and enter commands on the CN4093.
For more information about the CN4093 10Gb Converged Scalable Switch, see the IBM Information Center at this website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.networkdevices.doc/Io_module_compassFC.html
6.2.3 Configuring for Fibre Channel Forwarding
In this section, we create the VLAN as shown in Figure 6-5 on page 175 (an ISCLI sketch of the basic steps follows). We also creat
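The following ISCLI sketch outlines the basic flow that this section walks through: enabling CEE and FIP snooping, converting a pair of external omni ports to native Fibre Channel, and creating the FCoE VLAN with the Fibre Channel Forwarder enabled. The port names, VLAN number, and command details are assumptions for illustration only; the exact syntax depends on the IBM Networking OS level, so verify each step against the switch documentation for your firmware.

Router(config)# cee enable                        ! enable Converged Enhanced Ethernet (DCBX/PFC)
Router(config)# fcoe fips enable                  ! enable FIP snooping
Router(config)# system port EXT15,EXT16 type fc   ! run two omni ports as native FC (assumed ports)
Router(config)# vlan 1002                         ! FCoE VLAN used in this sketch (assumed value)
Router(config-vlan)# enable
Router(config-vlan)# member INTA7,EXT15,EXT16     ! internal compute node port plus the FC uplinks
Router(config-vlan)# fcf enable                   ! enable the Fibre Channel Forwarder on this VLAN
Router(config-vlan)# exit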
537. sy NN ue a AU 2 SSO Te PIR ee eh el SS SS IPM en 2A BO a G SS EM llc 2 Oe ele 2 aM Figure 7 88 Reconnecting the previous session Chapter 7 Power node management 267 Components of the web based user interface The HMC workplace window consists of several major components as shown in Figure 7 89 Hardware Management Console Ej Welcome E Systems Management E Ug Servers 8233 E8B SM1 0DD51P CH Custom Groups i System Plans B HMC Management t N Service Management RI Updates Welcome HMC Version Use the Hardware Management Console HMC to manage this HMC as well as servers logical partitions managed systems and other resources Click on a link in the navigation pane at the left il Systems Management System Plans HMC Management Service Management Updates CE Status Bar Additional Resources E Guided Setup Wizard Installing and configuring the HMC v7 guide view as HTML Managing the HMC v7 guide View as HTML Servicing the HMC v7 guide View as HTML Go HMC Readme Online Information Manage servers logical partitions managed systems and frames set up configure view current status troubleshoot and apply solutions Import deploy and manage system plans on the HMC Perform management tasks to set up configure and customize operations associated with this HMC Perform service tasks to create customize and manage services associated with this HM
538. IBM Flex System x220, x222, x240, or x440 Compute Nodes (x86-based); see Table 2-15 on page 41.
Table 2-14 Power Systems compute nodes (AAS feature code / MTM / Description)
0497 / 1457-7FL / IBM Flex System p24L Compute Node
0437 / 7895-22X / IBM Flex System p260 Compute Node
ECSD / 7895-23A / IBM Flex System p260 Compute Node (POWER7+, 4-core only)
ECS3 / 7895-23X / IBM Flex System p260 Compute Node (POWER7+)
0438 / 7895-42X / IBM Flex System p460 Compute Node
ECS9 / 7895-43X / IBM Flex System p460 Compute Node (POWER7+)
ECS4 / 7954-24X / IBM Flex System p270 Compute Node (POWER7+)
Table 2-15 x86-based compute nodes (AAS feature code / MTM / Description)
ECS7 / 7906-25X / IBM Flex System x220 Compute Node
ECSB / 7916-27X / IBM Flex System x222 Compute Node
0457 / 7863-10X / IBM Flex System x240 Compute Node
7917-45X / IBM Flex System x440 Compute Node
2.5.5 IBM FSM
The IBM FSM is a high-performance, scalable system management appliance. It is based on the IBM Flex System x240 Compute Node. The FSM hardware is preinstalled with Systems Management software that you can use to configure, monitor, and manage IBM PureFlex Systems. The FSM is based on the following components:
Intel Xeon E5-2650 8C 2.0 GHz, 20 MB cache, 1600 MHz, 95 W
32 GB of 1333 MHz RDIMM memory
Two 200 GB 1.8-inch SATA MLC SSDs in a RAID 1 configuration
1 TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps
539. t Dedicated Shared Assigned Processors Maximum pool processors 24 0 Available processors 22 0 Assigned processors 1 Figure 11 5 Creating a Virtual Server Processor panel 5 In the next panel click the option for the virtual Ethernet Adapter you require from the list and then click Edit The Virtual Ethernet Modify Adapter panel opens as shown in Figure 11 6 on page 505 Set the port virtual Ethernet VLAN to 1 which should be the default and then click OK Click Next in the Ethernet panel to continue the wizard 504 IBM Flex System p270 Compute Node Planning and Implementation Guide Virtual Ethernet Modify Adapter Specify an adapter ID and virtual Ethernet for this adapter Adapter Id Port Virtual Ethernet 1 WSI Type Id WST Type Version VSI Manager Id IEEE Settings Select this option to allow additional virtual LAN IDs for the adapter IEEE 02 19 compatible adapter Maxirnium number of WLANs 20 Additional WLAN IDs Shared Ethernet Settings Select Ethernet bridging to link bridge the virtual Ethernet to a physical network L Use this adapter for Ethernet bridging Priority aera p Advanced virtual ethernet configuration Figure 11 6 Creating a Virtual Server Ethernet Adapter panel For more information about configuring an IBM i client with Ethernet adapters and VLAN tagging see BM PowerVM Virtualization Managing and Monitoring SG24 7590 which is availab
540. t as shown in Figure 5 8 Change Power Capping Policy x No Power Limiting The maximum power limit will be determined by the active Power Redundancy policy Static Power Limiting Sets an overall chassis limit on the maximum input power When the static power limit is set a newly inserted or discovered component Will nok be given power permission and allowed to power on if its power requirement causes the total power to exceed the static power limit 100 6797 Watts Range 5024 7515 g0 5 of max allocation aK Cancel Figure 5 8 Setting power capping in the CMM Chapter 5 Planning 155 5 7 5 Chassis power requirements It is expected that the initial configuration based on the IBM PureFlex System configuration that is ordered plus any other nodes contains the necessary number of power supplies You need to know the number of power supplies that are needed to support the number of Power Systems compute nodes in the IBM Flex System Enterprise Chassis when a Power Systems compute node is added to an existing chassis In addition you must know the relationship between the number of Power Systems compute nodes and the number of power supplies in the chassis Table 5 6 shows the maximum number of Power compute nodes that can be installed for the power supplies that are used in the chassis The table uses the following color coded convention gt Green No restriction to the number of compute nodes installable gt
541. t is listed in Table 2 3 for each machine type Table 2 3 Express indicator feature code EFDA Not applicable IBM PureFlex System Express Indicator Feature Code Not applicable IBM PureFlex System Express with PureFlex Solution for IBM i Indicator Feature Code 2 4 1 Available Express configurations The PureFlex Express configuration is available in a single chassis as a traditional Ethernet and Fibre Channel combination or converged networking configurations that use Fibre Channel over Ethernet FCoE or Internet Small Computer System Interface iSCSI The required storage in these configurations can be an IBM Storwize V7000 or an IBM Flex System V7000 Storage Node Compute nodes can be POWER or x86 based or a combination of both The IBM FSM provides the system management for the PureFlex environment Ethernet and Fibre Channel combinations have the following characteristics gt POWER x86 or hybrid combinations of compute nodes 1 Gb or 10 Gb Ethernet adapters or LAN on Motherboard LOM x86 only 1 Gb or 10 Gb Ethernet switches 16 Gb or 8 Gb for x86 only Fibre Channel adapters 16 Gb or 8 Gb for x86 only Fibre Channel switches YY VvV Yy 22 IBM Flex System p270 Compute Node Planning and Implementation Guide FCoE configurations have the following characteristics gt POWER x86 or hybrid combinations of compute nodes gt 10 Gb Converged Network Adapters CNA or LOM x86 only gt 10 Gb Converged Network switch or s
542. is on, it reads the smart vital product data chip to obtain system information.
Figure 4-30 Anchor card
The vital product data chip includes information such as machine type, model, and serial number.
4.13 External USB device support
Use this information to determine which USB devices are supported for use with the p270 Compute Node.
4.13.1 Supported IBM USB devices
Table 4-17 shows the IBM USB devices that are supported for direct attach to Power Systems compute nodes.
Table 4-17 IBM USB devices supported for direct attach to Power Systems compute nodes (feature code / description / AIX and Linux, VIOS / VIOS clients, AIX and Linux / VIOS clients, IBM i)
1.5 TB RDX removable disk drive / Yes / Yes / N
a. The AIX operating system supports the mksysb system backup and restore operations by using any of the USB removable media types. The AIX operating system does not support the use of a USB device as a target for an AIX operating system installation. The AIX operating system and VIOS only support writing to DVD-RAM media, but can
543. t with PWWNs from Compute Node 7 and Canister 1 of the V7000 Storage Node Member PWWNs in zones can be added directly or as aliases if defined Example 6 5 Creating a zone and zoneset Router config zone name v7k_canl_node7_ioal Router config zone member pwwn 50 05 07 68 05 08 30 70 Router config zone member pwwn 10 00 5c 78 24 52 44 43 Router config zone show zone zone name v7k_canl_node _ioal pwwn 50 05 07 68 05 08 30 70 pwwn 10 00 5c 8 24 52 44 43 Router config zone zoneset name CN4093 IOM2 20JUN13 Router config zoneset member v7k_canl_node7_ioal Router config zoneset show zoneset zoneset name CN4093_ IOM2 20JUN13 zone name v 7k_canl_node _ioal pwwn 50 05 07 68 05 08 30 70 pwwn 10 00 5c 8 24 52 44 43 Example 6 6 shows from the ISCLI activating then verifying the zoneset to ensure that the configuration is correct Example 6 6 Activating and verifying the zoneset Router config zoneset zoneset activate name CN4093 IOM2_ 20JUN13 Router config show zoneset active Active Zoneset CN4093 IOM2 20JUN13 has 1 zones zoneset name CN4093_ IOM2 20JUN13 zone name v7k_canl_node _ioal pwwn 50 05 07 68 05 08 30 70 pwwn 10 00 5c 8 24 52 44 43 Default Zone Deny After this operation is successfully completed the PWWN should be visible from the V7000 Storage Node where a host definition can be created and storage mapped It is important to remember that this entire process should be repeated for multipathing between ho
544. tallation Toolkit for PowerLinux live DVD is intended for IBM Power Systems TM servers and IBM BladeCenter R blade servers using IBM POWER7 R processors The IBM Installation Toolkit supports installation of the following Linux distributions Red Hat Enterprise Linux 5 8 and 5 9 Red Hat Enterprise Linux 6 3 and 6 4 SUSE Linux Enterprise Server 10 SP4 SUSE Linux Enterprise Server 11 SP1 and SP2 For more information on hardware support check http ibm biz BdxXsd To get community support post a message in the forum http ibm biz BdxXrC Welcome to yaboot version 1 3 14 Base 1 3 14 43 mcp7 2 Enter help to get some basic usage information boot Figure 12 9 IBM Installation Toolkit for PowerLinux first panel 12 Press Enter The panel that is shown in Figure 12 10 opens EKEE WELCOME TO IBM INSTALLATION TOOLKIT Machine IP address is 9 42 170 140 If you want to connect to Welcome Center from a remote browser you must start the Wizard mode first Web based applications will be displayed in your remote browser but all non web based applications will be displayed in the text mode display Please choose one of the options below 1 Wizard mode performs installation 2 Rescue mode goes to terminal Figure 12 10 IBM Installation Toolkit for PowerLinux second panel Chapter 12 Installing Linux 561 13 Open a browser and enter https IP_address as shown in Figure 12 10 on page 561 in our
545. tch Chapter 7 Power node management 185 Eth1 Virtual Enterprise Chassis connection to Intel node embedded two port 10 GbE controller Power Systems compute node System x compute node Etho Virtual connection to special GbE management network adapter CMMs in other Enterprise Management Chassis workstation Figure 7 1 Separate management and production data networks The yellow line in the Figure 7 1 shows the production data network The FSM also connects to the production network Eth1 so that it can access the Internet for product updates and other related information PureFlex System and IPv6 In a PureFlex System configuration all components on the management network are configured with static IPv6 addresses with the IBM prefix of fd8c 215d 178e cOde including eth0 on the FSM In addition the eth0 FSM interface does not get an IPv4 address Normal access to the FSM user interface is through an Pv4 address that is assigned to ethl 186 IBM Flex System p270 Compute Node Planning and Implementation Guide One of the key functions that the data network supports is discovery of operating systems on the various network endpoints Discovery of operating systems by the FSM is required to support software updates on an endpoint such as a compute node You can use the FSM Checking and Updating Compute Nodes wizard to discover operating systems as part of the initial setup HMC connectio
546. tem initialization is complete and all of your required PTFs are loaded, you should install the software license keys for your operating system and keyed products. To use a keyed, license-enabled packaged product beyond the trial period, the license key and other required information must be loaded to maintain functionality. Use the Work with License Information (WRKLICINF) command to display the installed keyed products and to add license key data.

To add your license key information, complete the following steps:
1. Go to the Work with License Information display by entering WRKLICINF and pressing Enter.
2. On the Work with License Information display, enter a 1 in the option column next to the product identification number to add license key information for a program. Press Enter.
3. On the Add License Key Information (ADDLICKEY) display, enter the required information and add the license key information. Some fields might already contain the required information, such as the product identifier, license term, and system serial number. The 18-character license key is entered into the following fields: in the first field, enter characters 1 - 6; in the second field, enter characters 7 - 12; in the last field, enter characters 13 - 18. In the Usage Limit field, enter the number of authorized users or the value *NOMAX.

11.8.1 License key repository
The license key repository stores product license key information for each unique lic
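The same information can be added without the menu by prompting the ADDLICKEY command directly. The following is a sketch only; the product ID, feature, license term, and key are placeholder values, and the parameter names should be confirmed by pressing F4 to prompt the command on your release:

ADDLICKEY PRDID(5770SS1) LICTRM(V7R1M0) FEATURE(5051)
          KEY(AAAAAABBBBBBCCCCCC) USGLMT(*NOMAX)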
547. tem performance 28 IBM Flex System p270 Compute Node Planning and Implementation Guide IBM Storwize V7000 The IBM Storwize V7000 that is shown in Figure 2 2 is one of the two storage options that is available in a PureFlex Express configuration This option is installed in the same rack as the chassis Other expansion units can be added in the same rack or an adjoining rack depending on the quantity that is ordered Figure 2 2 IBM Storwize V7000 The IBM Storwize V7000 consists of the following components disk and software options gt gt IBM Storwize V7000 Controller 2076 124 SSDs 200 GB 2 5 inch 400 GB 2 5 inch Hard disk drives HDDs 300 GB 2 5 inch 10K RPM 300 GB 2 5 inch 15K RPM 600 GB 2 5 inch 10K RPM 800 GB 2 5 inch 10K RPM 900 GB 2 5 inch 10K RPM 1 TB 2 5 inch 7 2K RPM 1 2 TB 2 5 inch 10K RPM Expansion Unit 2076 224 up to nine per V7000 Controller IBM Storwize V7000 Expansion Enclosure 24 disk slots Optional software IBM Storwize V7000 Remote Mirroring IBM Storwize V7000 External Virtualization IBM Storwize V7000 Real time Compression Chapter 2 IBM PureFlex System 29 IBM Flex System V7000 Storage Node IBM Flex System V7000 Storage Node as shown in Figure 2 3 is one of the two storage options that is available in a PureFlex Express configuration This option uses four compute node bays two wide x two high in the Flex cha
548. that is listed on the POE invoice or other documents must match the usage limit number on the Work with License Information that is displayed for the associated product 3 Move the cursor to the line that contains the product name whose usage limit is to be updated 4 Enter 2 for Change and press Enter 5 When the Change License Information display is shown update the usage limit prompt with the usage limit that is shown on the POE Also update the threshold prompt with CALC or USGLMT Do not leave the threshold set to O Note If message CPA9E1B Usage limit increase must be authorized Press help before replying C G is sent enter G 6 If the POE lists more products than the Work with License Information displays set the usage limits after you install those products 546 IBM Flex System p270 Compute Node Planning and Implementation Guide 11 9 Basic TCP IP configuration If you are setting up a new system you must establish a connection to the network and you must configure TCP IP by using IPv4 for the first time You must use the character based interface to configure TCP IP for the first time For example if you want to use System i Navigator from a PC that requires basic TCP IP configuration before System i Navigator runs you must first use the character based interface to perform the basic configurations When you configure your system by using the character based interface you need to frequently access the Co
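A minimal sketch of that initial IPv4 configuration follows; the line description name, addresses, host name, and domain are examples only, and each of these commands can also be reached from the Configure TCP/IP (CFGTCP) menu:

ADDTCPIFC INTNETADR('9.42.170.50') LIND(ETHLINE) SUBNETMASK('255.255.255.0')
ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('9.42.170.1')
CHGTCPDMN HOSTNAME('E1277E3B') DMNNAME('ITSO.IBM.COM')
STRTCP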
549. the HMC web based UI and CLI is turned off by default and can be enabled only from the local HMC interface The default security setting is Secure SO HTTPS or SSH is required to connect to the HMC Chapter 7 Powernode management 197 7 4 3 HMC requirements Dual or redundant HMCs are supported however both must be at the same version and release HMC and FSM used together are not supported Note When dual HMCs are used to manage a Power compute node the redundancy is only at the HMC level Traditional Power based rack servers feature dedicated HMC ports that provide redundancy at the network level to the Flexible Service Processor FSP across two IP addresses Power compute nodes communicate through the active or primary CMM which provides only a single active network path to the FSP Both HMCs connect to the same IP address that is assigned to the FSP HMC support for Power compute nodes requires an HMC release version of V7R7 7 0 2 The minimum system firmware levels for the Power compute nodes that are required are shown in Table 7 1 Table 7 1 Minimum required Power compute node system firmware levels onena e a a For more information see IBM Power Systems HMC Implementation and Usage Guide SG24 7491 which is available at this website http www redbooks ibm com abstracts sg247491 html 198 IBM Flex System p270 Compute Node Planning and Implementation Guide 7 5 IBM IVM This section gives a brief overview of the s
550. the short host name gt Domain name An alphabetic name that the domain name server DNS can translate to the Internet Protocol IP address gt Console Description Short description for the HMC Chapter 7 Power node management 271 272 LAN Adapters tab The LAN Adapters tab as shown in Figure 7 92 shows a summarized list of all local area network LAN adapters that are installed in the HMC You can view details of each LAN adapter by clicking the wanted adapter in the list and then clicking Details which starts the LAN Adapter Details window in which you can change LAN adapter configuration and firewall settings ae Customize Network Settings FL Identification LAN Adapters Name Services Routing LAN Adapters Ethernet eth 00 10 18 4C B5 DE 9 42 171 90 Ethernet ethi 00 21 56 F9 Fe 46 0 0 0 0 Figure 7 92 LAN Adapters tab LAN Adapter Details window The LAN Adapter tab of this window includes the following tabs gt Basic Settings gt IPv6 Settings gt Firewall Settings IBM Flex System p270 Compute Node Planning and Implementation Guide Basic Settings The Basic Settings tab of the LAN Adapter Details window as shown in Figure 7 93 uses the example of eth1 to describe LAN adapter basic configuration ae LAN Adapter Details H E Basic Settings IP 6 Settings Firewall Settings Local Area Network Information LAN interface address 00 21 5E F9 F8 46 ethi private O open Med
551. ties or profile properties to O make changes after you complete this wizard Processing Settings Virtual Adapters To create a partition complete the following Optional Settings information Profile Summary System name Server 7954 24x 5N107782B Partin ID fy Partition name litsavIOS A Partition migration Mover service partition E Allow this partition to be vTPM capable Waming VTPM Trusted Key is the default key Figure 8 27 HMC Create Partition window 2 Click Next to continue The Partition Profile window opens as shown in Figure 8 28 on page 377 and requires that a profile name be provided 376 IBM Flex System p270 Compute Node Planning and Implementation Guide i https 9 42 171 90 hrme wel T2d87 Create Lpar Wizard Server 7954 24X SN107732B Processors Processing Settings Memory Settings i o Virtual Adapters Optional Settings Profile Summary Partition Profile A profile specifies how many processors haw much memory and which 1 0 devices and slots are to be allocated to the partition Every partition needs a default profile To create the default profile specify the following information System name Server 7954 24X 5N107782B Partition name itsoVIOS6A4 Partition ID 1 Profile name itsoVIOS64_new This profile can assign specific resources to the partition or all resources to the partition Click Next if you want to specify the resources used in the partition Select the
552. till blank.

mkvdev -fbo -vadapter vhost0
vtopt0 Available
lsmap -all
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- -------------------
vhost0          U7954.24X.1047BEB-V1-C5       0x00000002

VTD                   vtopt0
Status                Available
LUN                   0x8100000000000000
Backing device
Physloc
Mirrored              N/A

Figure 9-39 Using the mkvdev command to create a vtopt virtual target device

The loadopt command is used to assign the backing file to the virtual target device (or virtual optical device). Figure 9-40 shows the loadopt command that is used to associate vtopt0 with the ISO image AIX7TL1SP1. The lsmap -all command is used to verify the assignment.

loadopt -disk AIX7TL1SP1 -vtd vtopt0
lsmap -all
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- -------------------
vhost0          U7954.24X.1047BEB-V1-C5       0x00000002

VTD                   vtopt0
Status                Available
LUN                   0x8100000000000000
Backing device        /var/vio/VMLibrary/AIX7TL1SP1
Physloc
Mirrored              N/A

Figure 9-40 Using the loadopt command to associate a backing file with a virtual target device

The ISO image is now associated with a virtual optical device that is assigned to a virtual server or partition. The unloadopt command can be used to unload or switch the ISO image on the virtual target device. Figure 9-41 on page 472 shows the lsrep command that is used to review the current image name against the virtual target device name. The unloadopt command is then used to remove the association. Finally, the lsrep command is used a
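A minimal sketch of that sequence follows; the name of the second image is an example only:

lsrep                                  # review the image that is currently loaded
unloadopt -vtd vtopt0                  # remove the association
loadopt -disk AIX7TL1SP2 -vtd vtopt0   # optionally load a different image (example name)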
553. tings Columns aaa Device Type ere Hardware Topology Hierarchical view of components in your chasg Mode 01 node0i x240 Compute Mode B Normal Reports Generate Reports of hardware information Mode 02 hodel x240 Compute Mode F Mormal On 2 Mode BPs AC Goy47ee Mode 03 hodel3 x240 Compute Mode F Mormal On 3 Mode BSAC gga Mode 05 nodels FShi Compute Mode F Mormal On 5 Mode BP Ss1ACd 44835048 Mode 06 hodedb p270 Compute Mode F Mormal Off a Mode fe bee ed OOE1S02 Figure 7 21 Selecting Chassis Management Compute Nodes The Compute Nodes page also has a series of drop down menus and buttons which feature the following functions gt Power and Restart node specific Power On Power Off Shutdown OS and Power Off Restart Immediately Restart with Non maskable Interrupt NMI Restart System Management Processor Boot to SMS Menu gt Actions node specific Launch Compute Node Console Identify LED Chapter 7 Powernode management 215 gt Settings global across all installed nodes Policies e Enable Local power control e Enable Wake on LAN Serial Over LAN Enable Serial Over LAN gt Columns user interface display changes Device Name Device Type Health Status Power Bay Bay Type Machine Type Model O Compatibility WoL Local Power Control Compute Expansion Module Node specific options require that a
554. tion ATX Linux partition mode boot Continue to operating system Network Services H Performance Setup Server firmware start policy Running Auto Start Always On Demand Utilities Concurrent Maintenance Login Profile System power off policy Automatic i5 OS partition mode boot A Default Partition Environment Default Save settings Save settings and power on Figure 7 144 ASMI platform power on options Chapter 7 Power node management 309 8 The monitoring of the startup progress codes can be monitored in real time from the ASMI In the navigation area expand the section on System Information and click Real time Progress Indicator as shown in Figure 7 145 Advanced Sy User ID USERID Expand all menus E Collapse all menus Power Restart Control System Service Aids E System Info viaintenance History System Configuration Network Services Performance Setup On Demand Utilities Concurrent Maintenance Login Profile Figure 7 145 Starting the ASMI Real time Progress Indicator 310 IBM Flex System p270 Compute Node Planning and Implementation Guide 9 Anew window opens that displays the current status SRC or AIX progress code Figure 7 146 shows a sample of real time start messages and codes from a power off state through the VIOS startup w https 9 42 171 37 cgi bin cgifform 82 Not ronning w https 9 42 171 37 cgi bin cg
555. tion and profile To change i any of your choices click Back You can see the details of the physical 1O devices you chose by clicking Details You can modify the profile or partition by using the partition properties or profile properties after you complete this wizard Partition ID 1 Partition name itsoVIOS6A Partition environment Virtual I O Server Profile name itsoVIOS6A_new Desired memory 8 00 GB 0 00 MB Desired processors 4 Physical I O devices 4 Boot mode Virtual 1 0 adapters Figure 8 45 HMC Profile Summary The HMC work pane area under Systems Management Servers Server Name is updated with the new VIOS LPAR as shown in Figure 8 46 on page 398 This new LPAR can now be selected for other operations Chapter 8 Virtualization 397 hscroot Help Logoff systems Management gt Servers gt Server 7954 24X SN107782B le 2 H e Crer Processing Active ae Reference Select gt Name A D A Status A Units Memory GB a Profile gt Environment a Cod 208 d E itsoViIOSsA 1 Not Activated 0 0 Virtual VO Server 00000000 Max Page Size 50 Total 1 Filtered 1 Selected 0 500 4 mW j asks Server 7954 24X SN107782B i Properties Connections Serviceability Operations Hardware Information Capacity On Demand CoD Configuration Updates Create Partition ATX or Linus VIO Server IBM i System Plans Partition Availability Priority View Workload Management Groups Manage C
556. tion of multiple workloads gt Management integration across all resources Flex System Manager simplifies management of all resources within PureFlex 20 IBM Flex System p270 Compute Node Planning and Implementation Guide gt IBM Lab Services optional to accelerate deployment Skilled PureFlex and IBM i experts perform integration deployment and migration services onsite from IBM or can be delivered by a Business Partner 2 3 2 PureFlex Solution for SmartCloud Desktop Infrastructure The IBM PureFlex System Solution for SmartCloud Desktop Infrastructure SDI offers lower costs and complexity of existing desktop environments while securely manages a growing mobile workforce This integrated infrastructure solution is made available for clients who want to deploy desktop virtualization It is optimized to deliver performance fast time to value and security for Virtual Desktop Infrastructure VDI environments The solution uses IBM s breadth of hardware offerings software and services to complete successful VDI deployments It contains predefined configurations that are highlighted in the reference architectures that include integrated Systems Management and VDI management nodes PureFlex Solution for SDI provides performance and flexibility for VDI and includes the following features gt Choice of compute nodes for specific client requirements including x222 high density node gt Windows Storage Servers and Flex Sy
557. tions gt When the system is fully installed in the chassis Use this button to power the system on and off gt When the system is removed from the chassis Use this button to illuminate the light path diagnostic panel on the top of the front bezel as shown in Figure 4 4 on page 78 Chapter 4 Product information and technology 77 Figure 4 4 Light path diagnostic panel The LEDs on the light path panel indicate the following subsystems gt gt gt gt gt gt gt LP Light Path panel power indicator S BRD System board LED can indicate trouble with a processor or memory MGMT Anchor card error also referred to as the management card LED For more information see 4 12 Anchor card on page 124 D BRD Drive HDD or SSD board LED DRV 1 Drive 1 LED SSD 1 or HDD 1 DRV 2 Drive 2 LED SSD 2 or HDD 2 ETE Expansion connector LED If problems occur you can use the light path diagnostics LEDs to identify the subsystem that is involved To illuminate the LEDs with the compute node removed press the power button on the front panel This action temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts towards a resolution Typically an administrator already obtained this information from the IBM Flex System Manager or Chassis Management Module CMM before removing the node However having the LEDs helps with repairs and troubleshooting if onsite as
558. tions based on these workloads nage The IBM Installation Toolkit Simplified Setup Tool configures the Linux 4pache MySOL and PHP Perl LAMP architectures for web serving functions When you select one of the workloads on this page the Mail server Simplified Setup Tool asks you to The Simplified Setup Tool configures the Postfix Dovecot and Cyrus enter information about your applications environment Using the information that you provide the File and print server Simplified Setup Tool updates the Use the Simplified Setup Tool to configure the Samba file and print server Wh relevant configuration files to best configured Samba software enables the system to share files and printers ar tune the workload for your make them accessible fram clients environment The Simplified Setup Tool asks you to confirm the Network infrastructure server changes before it applies them to The Simplified Setup Tool configures the domain name server ONS Squid the configuration files In some proxy server support and a firewall filter to secure your network cases configuration changes may apply to more than one workload Features You are informed of any lt comfiouration changes that imnnart Figure 12 30 Welcome window This concludes the installation of Linux using the IBM Installation Toolkit for PowerLinux 580 IBM Flex System p270 Compute Node Planning and Implementation Guide 12 2 Installing Red Hat Enterprise Linux
559. tions to select when VIOS is installed to enable IVM if the conditions are met the enablement is automatic When the VIOS installation is complete configure an IP address for the VIOS This address serves as access to the padmin user ID and the IVM web based user interface 7 10 2 Accessing IVM Access to the IVM requires the IP address to the VIOS server Setting the IP address for the VIOS is described in Using the IVM GUI on page 402 The web based user interface can be accessed from http or https protocol Open a browser and enter the following URL where system_name is the host name or IP address of the VIOS https system_name Chapter 7 Power node management 299 The initial IVM login page is shown in Figure 7 133 The padmin User ID and password are entered to access the IVM Integrated Virtualization Manager a ERE Welcome please enter your information x User ID Password Log in Please note After some time of inactivity the system will log you out automatically and ask you to log in again This product includes Eclipse technology http www eclipse org Required field Figure 7 133 IVM login window IVM specific commands are integrated in the VIOS padmin user ID CLI The IVM specific CLI commands in most cases are the same as HMC CLI commands These commands can be accessed during a normal padmin user ID login session 7 10 3 Power compute node basic man
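For example, after logging in with SSH as padmin, the managed system and its partitions can be listed with the same lssyscfg command that is used on the HMC (the -F option limits the output to the named fields):

lssyscfg -r sys -F name,state
lssyscfg -r lpar -F name,lpar_id,state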
560. titions To perform an action on a partition first select the partition or partitions and then select the task System Overview Total system memory 32 GB Total processing units 24 Memory available 26 62 GB Processing units available 71 6 Reserved firmware memory 1 38 GB Processor pool utilization 0 16 0 7 System attention LED Inactive Partition Details j B T ag 5 Create Partition Shutdown More Tasks r Select ID Name State Uptime Memory Processors ntithe Ocessin Utilized Processing Reference i Units Code itsoVIOS64 Running ae 24 z2 0 16 Figure 8 64 IVM View Modify Partitions view Complete the following steps to create another LPAR Click Create Partition The Create Partition Name window opens as shown in Figure 8 65 Create Partition Name Name Name Memory Processors Rhoma System name Server 7954 24X 5N107782B storage Type 5 Partition ID 2 Storage Optical Tape E Partition name litsolpar2 Summary Environment ATX or Linux fa To create a partition complete the following information Figure 8 65 IVM Create Partition Name window 2 Enter the following information in the Name window A Partition ID The number that is shown defaults to the first available but can be changed to an unused value In this example the default of 2 was used Partition Name This example used the name itsolpar2z The Environment o
561. titions LPARs and workload partitions WPARs Operating system support The IBM POWER 7 processor based systems support the following families of operating systems gt AIX gt IBMi gt Linux In addition the Virtual I O Server VIOS can be installed in special virtual servers that provide support to the other operating systems for using features such as virtualized I O devices PowerVM Live Partition Mobility LPM or PowerVM Active Memory Sharing For more information about LPM see PowerVM Live Partition Mobility SG24 7460 which is available at this website http www redbooks ibm com abstracts sg247460 html For more information about AMS see PowerVM Virtualization Active Memory Sharing redp4470 which is available at this website http www redbooks ibm com abstracts redp4470 html For general information about software that is available on IBM Power Systems servers see the IBM Power Systems Software website at http www ibm com systems power software The p270 supports the following operating systems and versions Virtual I O Server The supported versions are Virtual I O Server 2 2 2 3 or later IBM regularly updates the Virtual I O Server code For more information about the latest update see the Virtual I O Server website at http www 304 ibm com support customercare sas f vios home html IBM Flex System p270 Compute Node Planning and Implementation Guide AIX V6 1 The supported ve
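To check the installed levels against these minimums, the following commands can be used; run ioslevel on the VIOS as padmin and oslevel in an AIX partition (the output format depends on the release):

ioslevel      # reports the VIOS level, for example 2.2.2.3
oslevel -s    # reports the AIX technology level and service pack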
562. tml 60 IBM Flex System p270 Compute Node Planning and Implementation Guide 3 4 Systems Management IBM Flex System uses the following tiered approach to overall system management gt Private management network within each chassis Firmware and management controllers for nodes and scalable switches Chassis Management Module for basic chassis management IBM Flex System Manager for advanced chassis management Upward integration with IBM Tivoli products YY vV Yy These tiers are described next 3 4 1 Private management network At a physical level the private management network is a dedicated 1 Gb Ethernet network within the chassis This network is only accessible by the management controllers in the compute nodes or switch elements the Chassis Management Modules and the IBM Flex System Manager management appliance This private network ensures a separation of the chassis management network from the data network The private management network is the connection for all traffic that is related to the remote presence of the nodes delivery of firmware packages and a direct connection to the management controller on each component 3 4 2 Management controllers At the next level chassis components have their own core firmware and management controllers Depending on the processor type of the compute nodes an Integrated Management Module 2 IMMv2 or Flexible Service Processor FSP serves as the management controller Additio
563. to four instruction threads to run simultaneously in each POWER7 processor core. The processor supports the following instruction thread execution modes:
> SMT1: Single instruction execution thread per core
> SMT2: Two instruction execution threads per core
> SMT4: Four instruction execution threads per core
SMT4 mode enables the POWER7 processor to maximize the throughput of the processor core by offering an increase in processor core efficiency. SMT4 mode is the latest step in an evolution of multithreading technologies that were introduced by IBM. Figure 4-10 shows the evolution of simultaneous multithreading, from single-thread out-of-order execution (1995) and hardware multithreading (1997) to SMT2 and SMT4, indicating which of the FX, FP, LS, BRX, and CRL execution units are used by threads 0 - 3 in each mode.
Figure 4-10 Evolution of simultaneous multithreading T
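For reference, the SMT mode of an AIX partition can be displayed and changed with the smtctl command. A minimal sketch follows; thread counts of 1, 2, or 4 correspond to the modes above, -w now takes effect immediately, and -w boot persists across restarts:

smtctl                  # display the current SMT mode and the state of each hardware thread
smtctl -t 2 -w now      # switch the partition to SMT2 immediately
smtctl -t 4 -w boot     # return to SMT4 at the next restart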
564. tricted 1 0 mode E Assign all resources to this wirtual server Figure 11 3 Creating a Virtual Server Name panel 502 IBM Flex System p270 Compute Node Planning and Implementation Guide 3 As shown in Figure 11 4 in the Memory panel select memory as either shared or dedicated as required assign the required quantity in GB and then click Next Memory af Name gt Memory Select the memory mode and assigned memory for the virtual server Memory Mode amp Dedicated l Shared Dedicated Memory Total system memory 32 00 GB Memory available 276 63 GB Acsigned memory GB Figure 11 4 Creating a Virtual Server Memory panel Chapter 11 Installing IBMi 503 4 Inthe Processor page select the processing mode that you want dedicated or shared as shown in Figure 11 5 Define the quantity of processors for that mode and then click Next Processor assignment In dedicated processing mode each assigned processor uses one physical processor core In shared processing mode each assigned processor uses 0 05 physical processor cores This value can be changed on the virtual server s profile after the wizard completes Processor yf Name yf Memory Specify the processing mode and number of processors gt Processor In dedicated processing mode each assigned processor uses 1 physical processor In shared processing mode each assigned processor uses 0 05 physical processors Processing Mode
565. tual Server Server 7E95 424 SH1055008 P Processor Harme wf Memory Processor Specify the processing mode and number of processors Ere In dedicated processing mode each assigned processor uses 1 physical processor In shared processing mode each assigned processor uses 0 10 physical processors Storage selection Processing Mode Physical If Load source console H Dedicated Summary Shared Assigned Processors Maximum pool processors 16 0 Available processors 12 6 Accigned processors Back Mest gt Finish Cancel Figure 8 76 IBM i virtual server processor settings 424 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 Create the virtual Ethernet adapter in the Ethernet window as shown in Figure 8 77 With the VIOS already defined the FSM defines a virtual Ethernet on the same VLAN as the SEA on the VIOS We keep that definition Click Next Create Virtual Server Server 7G95 424 SN1055008 Ethernet wf Mame yf Memory Configure the virtual network adapters for the virtual server Physical I O network adapters can be selected later in the Physical I O page of this wizard Two virtual Ethernet adapters will be created by default however you can add edit or remove adapters to suite your needs qf Processor gt Ethernet Virtual Ethernet Add Delete Select Adapter Port YLAN IC Back Mext gt Finish Cancel Figure 8
566. ueries the target or in this case a Power compute node and determines whether the object is in a state that can be updated Click Next to continue Install Wizard Start Target Checks vf Welcome Start the readiness and concurrency checks on the target systems Start Target gt Checks Processing the targets readiness and concurrency checks can take a few minutes depending on the number of targets selected Click Next to continue Figure 7 69 Readiness checking in the update wizard 252 IBM Flex System p270 Compute Node Planning and Implementation Guide When the readiness check completes the Target Check Results are displayed as shown in Figure 7 70 Typical information includes the duration of the update tasks and if the update is disruptive and requires a power cycle The table that is shown below the informational message indicates the current Applied temporary Committed permanent and Platform IPL levels Target Check Results Display the results from the readiness and concurrency checks on the selected targets Warning The following targets will be powered down during the operation Server 7954 44 SNLOF ESB 7954 448107 7E3B The update is disruptive to the system To prevent impacts from this operation you must quiesce or close any applications running on your operating systems for the affected systems l i Information J Estimated task duration is 33 minutes Current profile data backup file
567. ull Duplex 100 Mbps Half Duplex 100 Mbps Full Duplex or 1000 Mbps Full Duplex gt DHCP Server In an HMC private network the HMC expects that a DHCP server is present If a DHCP server is unavailable the HMC can be configured for that function When it is specified that the adapter be on an open network the DHCP function is locked and cannot be enabled gt IPv4 Address In a private network the IPv4 settings are locked and cannot be changed In an open network the following IPv4 settings can specified Turn off no IPv4 address Request IPv4 address from an external DHCP server Specify a static IP address The connection between the HMC and its managed systems can be implemented as a private or open network Flex System configurations In most instances the HMC adapter that is configured for connecting the Power compute nodes is open All compute and storage nodes and I O modules have their service processor IP addresses assigned at the CMM on a subnet that typically fits the HMC open network model 274 IBM Flex System p270 Compute Node Planning and Implementation Guide IPv6 Settings tab The IPv6 Settings tab of the LAN Adapter Details window as shown in Figure 7 94 uses the example of eth1 to describe LAN adapter IPv6 configuration ae LAN Adapter Details LM Basic Settings IPv Settings Firewall Settings Autoconfig Options Cl Autoconfigure IP addresses C Use DHCP 6 to configure IP setti
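The current adapter and network configuration of an HMC can also be reviewed from its restricted shell. A short sketch follows; lshmc -n lists the network settings and lshmc -V reports the HMC version, which is useful when checking the minimum release that is noted earlier:

lshmc -V
lshmc -n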
568. ur company contact IBM Service and Support Manager monitors tracks and captures system hardware errors and service information and reports serviceable problems directly to IBM Support using the IBM Electronic Service System location Agent tool Connection i f l The wizard helps you get started assisting you with Authorize IBM IDs E Entering your company contac and system location EE E Configuring the connection to IBM E Authorizing an IBM ID Show this welcome page next time Back Finish Figure 7 74 Getting Started with ESA wizard Welcome window Chapter 7 Power node management 257 3 Click Next to continue to the company contact information window as shown in Figure 7 75 Getting Started with Electronic Service Agent Your company contact af Welcome Provide information about the person that IBM Support may contact about a problem reported by Your company 4 o gt Electronic Service Agent contact System location Contact name Connection Hc i i er Authorize IBM IDs a ee eee Telephone number Extension Fax number Alternate fax number E mail Alternate e mail Help desk number Extension Pager number Street address Line 1 Line 2 Line 3 City State or province Country or region Postal code Alternate contact name Alternate telephone number Extension Figure 7 75 Getting started with ESA wizard company contact window 258 IBM Flex System p27
569. ur fully independent FC ports Figure 4 28 shows the IBM Flex System FC5054 4 port 16Gb FC Adapter Figure 4 28 FC5054 4 port 16Gb FC Adapter for IBM Flex System For more information about this adapter see the IBM Redbooks Product Guide that is available at this website http www redbooks ibm com abstracts tips1044 html 0pen 4 10 System management There are several advanced system management capabilities that are built into Power Systems compute nodes A Flexible Support Processor handles most of the server level system management It has features such as system alerts and Serial over LAN capability that are described in this section 118 IBM Flex System p270 Compute Node Planning and Implementation Guide 4 10 1 Flexible Support Processor A Flexible Support Processor FSP provides out of band system management capabilities such as system control runtime error detection configuration and diagnostic tests You often do not interact with the FSP directly Instead you interact by using tools such as FSM CMM the HMC and the IVM The FSP provides a Serial over LAN SOL interface which is available by using the CMM and the console command 4 10 2 Serial over LAN The Power Systems compute nodes do not have an on board video chip and do not support keyboard video and mouse KVM connections Server console access is obtained by a SOL connection only SOL provides a means to manage servers remotely by usi
570. ur system after the fixes are loaded enter Y Yes in the Automatic IPL field If an INZSYS was not performed enter N No Chapter 11 Installing IBMi 539 540 If you are not using an image catalog and have other fixes to install select Option 2 Multiple PTF volume sets in the Prompt for media field and install the other fixes Install Options for Program Temporary Fixes Type choices press Enter Device 4 4 Wok ww ace sw OPTO1 Automatic IPL N Prompt for media 2 Restart type SYS Other options Y F3 Exit F12 Cancel Figure 11 35 Install PTF window System E1277E3B Name SERVICE NONE 1 Single PTF volume set 2 Multiple PIF volume sets 3 Multiple volume sets and SERVICE SYS FULL IBM Flex System p270 Compute Node Planning and Implementation Guide Select Y for Other options The Other Install Options window opens as shown in Figure 11 36 Other Install Options System E1277E3B Type choices press Enter Omit PTFS N Y Yes N No Apply type 1 1 Set all PTFs delayed 2 Apply immediate set delayed PIFs 3 Apply only immediate PTFs PTF type 1 1 All PTFs 2 HIPER PTFs and HIPER LIC fixes only 3 HIPER LIC fixes only 4 Refresh Licensed Internal Code Copy PTFS N Y Yes N No F3 Exit F12 Cancel Figure 11 36 Initial PTF Other Options window Note By using the Omit function you can specify individual fixes that you do not want to install from the
571. ure To determine the cause of a missing object use the LICPGM menu options 10 and 50 534 IBM Flex System p270 Compute Node Planning and Implementation Guide BACKLEVEL The product is installed but its version release and modification is not compatible with the currently installed level of the operating system To correct this problem install a current release of this product If you have secondary languages install a new release of these languages as well by using the LICPGM menu option 21 Note If you use a licensed program that is listed as BACKLEVEL you run the risk of having an information mix up between release levels or some portions of the licensed program might not work properly An installed status value of COMPATIBLE is wanted BKLVOPT The product is installed but its version release and modification are not compatible with the currently installed level of the base product that is associated with the option To correct this problem install a current release of this option BKLVBASE The product is installed but its associated base product is not compatible with this option To correct this problem install a current release of the base product NOPRIMARY The product is installed but the language for the product is not the same as the primary language of the operating system To correct this problem install the primary language for the product by using the Restore Licensed Program RSTLICPGM comman
572. ureFlex FCoE Customization Service . . . . . 49
2.6.2 PureFlex Services for IBM i . . . . . 49
2.6.3 Software and hardware maintenance . . . . . 50
2.7 IBM SmartCloud Entry for Flex system . . . . . 50
Chapter 3. Introduction to IBM Flex System . . . . . 53
3.1 IBM Flex System Enterprise Chassis . . . . . 54
3.2 Compute nodes . . . . . 56
3.3 I/O modules . . . . . 57
3.4 Systems Management . . . . . 61
3.4.1 Private management network . . . . . 61
3.4.2 Management controllers . . . . . 61
3.4.3 Chassis Management Module . . . . . 61
3.4.4 IBM Flex System Manager . . . . . 62
3.5 Power supplies . . . . . 63
3.6 Cooling . . . . . 69
3.6.1 Node cooling . . . . . 70
3.6.2 Switch and Chassis Management Module cooling . . . . . 72
3.6.3 Power supply cooling . . . . . 72
Chapter 4. Product information and technology . . . . . 73
4.1 Overview . . . . . 74
4.1.1 Comparing the compute nodes . . . . . 75
4.2 Front panel . . . . .
573. use of SOL for installation of the VIOS and later for virtual console access to the VIOS operating system By default Flex System or BTO systems have SOL enabled PureFlex System configurations have SOL disabled as part of the manufacturing process Chapter 7 Power node management 217 When a Power compute node is managed by an FSM or HMC SOL must be disabled at the CMM to allow these platform managers to access the first virtual console session for a compute node SOL can be disabled for each individual node or globally for the entire chassis Disabling SOL for chassis To disable SOL globally for the entire chassis complete the following steps as shown in Figure 7 23 1 Click the Chassis Management gt Compute Nodes menu bar option as shown in Figure 7 23 Click the Settings tab Click the Serial Over LAN tab Clear the Serial Over LAN check box Click OK SYS The change takes effect when the window closes Service and Support Power and Restart Actions Settings Columns Device Mame Health Status Power Hay Machine Type Settings x These settings apply to all compute node bays fincluding the empty bays Serial Over LAN Oa Figure 7 23 Disable SOL for all compute nodes from the Chassis Management Module 218 IBM Flex System p270 Compute Node Planning and Implementation Guide Disabling SOL for an individual compute node To disable for an individual compute node complete the following step
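The same setting can also be changed from the CMM CLI. The following is a sketch only; the sol command options and the blade target number are assumptions that should be checked against the CMM firmware documentation for your level:

env -T system:blade[6]      # target the compute node in bay 6 (example bay)
sol -status disabled        # disable SOL for that node; use -status enabled to restore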
574. ustom Groups Manage Partition Data Manage System Profiles Virtual Resources Figure 8 46 HMC server work pane update with new VIOS LPAR IVM GUI method IVM can have only a single VIOS LPAR This LPAR is created when the VIOS is installed on a Power compute node and owns all the physical I O resources A fraction of the total CPU and memory also is assigned to the VIOS LPAR during the installation of the VIOS After the VIOS is up and available in the network the IVM GUI is available from a workstation browser and can be used to modify the VIOS LPAR initial configuration or created client LPARs The section Using the IVM GUI on page 402 shows how to make changes to the initial VIOS installation configuration 398 IBM Flex System p270 Compute Node Planning and Implementation Guide 8 5 3 Modifying the VIOS profile The FSM virtual server wizard requests only values that are used as the desired values for memory and CPU allocations and derives minimum and maximum values that are based on the input The IVM VIOS installation process takes fractional values of the total installed CPU and memory resources available These values might not reflect the actual requirements and need modification The HMC GUI provides for the direct entry of the minimum desired and maximum values for memory and CPU Using the FSM GUI To change a VIOS profile by using the FSM user interface complete the following steps 1 Select the newly created
575. ute Node 1 4 Flex System components IBM PureSystems consists of no compromise building blocks that are based on reliable IBM technology that support open standards and offer confident roadmaps The IBM Flex System is designed for multiple generations of technology which supports your workload today while being ready for the future demands of your business 1 4 1 IBM Flex System Enterprise Chassis The IBM Flex System Enterprise Chassis offers compute networking and storage capabilities that far exceed products that are currently available in the market With the ability to handle up to 14 compute nodes and intermixing POWER7 POWER7 and Intel x86 architectures the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10 U package Additionally the rear of the chassis accommodates four high speed networking switches Interconnecting compute networking and storage through a high performance and scalable mid plane the Enterprise Chassis can support interfaces with up to 40 Gb speeds Chapter 1 Introduction 5 The ground up design of the Enterprise Chassis reaches new levels of energy efficiency through innovations in power cooling and air flow Smarter controls and market leading designs allow the Enterprise Chassis to break free of one size fits all energy schemes The ability to support the demands of tomorrow s workloads is built in to a new I O architecture which provides choice and flexibility
576. values When fully used the maximum frequency varies depending on whether the user favors power savings or system performance DPS mode features the following possible settings gt Favor Power over Performance lf an administrator prefers energy savings and a system is fully used the system is designed to reduce the maximum frequency to approximately 95 of nominal values gt Favor Performance over Power If an administrator prefers performance over energy consumption the maximum frequency can be increased to up to approximately 110 of the nominal frequency to give extra performance Note The maximum frequency in DPS Favor Performance mode comes into effect when the system approaches full usage at the nominal clock speed To get a higher frequency independent of the usage of the system a processor option with a higher clock speed should be ordered The key is that the system must be at a high usage before the additional speed increase is delivered which generally is in a situation where there is already a high demand for processor resource or there is an increased response time because of a lack of processor resource System firmware continuously monitors the performance and usage of every processor core that belongs to the Compute Node Based on this usage and performance data the firmware dynamically adjusts the processor frequency and voltage which reacts within milliseconds to adjust workload performance and deliver power
577. ver Service Level Interface System Management Interface Tool symmetric multiprocessing System Management Services Simultaneous Multi Threading simple mail transfer protocol Abbreviations and acronyms 603 SOI SOL SPAR SPT SR SR IOV SRAM SRC SRM SS SSA SSD SSH SSIC STP TCB TCO TCP TCP IP TFTP TL TLB TPMD TR TSO TTY TX UDP UEFI UFP UI 604 silicon on insulator Serial over LAN Switch Partition System Planing Tool short range Single Root I O Virtualization static RAM System Resource Controller Storage Resource Management simple swap serial storage architecture solid state drive Secure Shell System Storage Interoperation Center Spanning Tree Protocol Transport Control Block total cost of ownership Transmission Control Protocol Transmission Control Protocol Internet Protocol Trivial File Transfer Protocol technology level translation lookaside buffer thermal and power management device Technology Refresh TCP Segmentation Offload teletypewriter transmit user datagram protocol Unified Extensible Firmware Interface Unified Fabric Port user interface UL UPS URL USB VAC VIO VIOS VLAG VLAN VLP VM VMC VNC VPD VPI VRRP VSP WPAR WW WWN WWPN XML Underwriters Laboratories uninterruptible power supply Uniform Resource Locator universal serial bus Volts alternating current Virtual I O Virtual I O Server Virtual
578. viceability itd Hardware Information Capacity On Demand Co E Configuration Updates E Create Partition ARK or Linux WIO Server System Plans Partition Availability Priority View Workload Management Groups Manage Custom Groups Manage Partition Data Figure 8 26 Highlighting the Manage Power Systems Resources plug in Creating the VIOS logical partition The lower part of the work pane area shows the available tasks for the selected managed system These tasks are the starting point for creating a VIOS LPAR on the selected managed system To create the LPAR complete the following steps 1 From the list that is shown under tasks expand Configuration Create Partition then click VIO Server to open the Create Partition window as shown in Figure 8 27 on page 376 Enter the Partition ID This example uses an ID of 1 Chapter 8 Virtualization 375 Enter the Partition name This example uses itsoVIOS6A If this VIOS is used for Live Partition Mobility select the Mover service partition option lf Trusted Virtual Platform Module vIPM is to be enabled select the Allow this partition to be vTPM capable option https 9 42 171 90 hme content taskid 110 amp refresh 253 Create Lpar Wizard Server 7954 24X SN107732B Create Partition ate Partition Partition Profile Processors c This wizard helps you create a new logical partition and a default profile for it You can use Wie eee the partition proper
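For reference, an equivalent VIOS partition and default profile can be created from the HMC command line. This is a minimal sketch that assumes the managed system and profile names shown in this example and abbreviates the attribute list:

mksyscfg -r lpar -m Server-7954-24X-SN107782B -i "name=itsoVIOS6A,profile_name=itsoVIOS6A_new,lpar_env=vioserver,min_mem=2048,desired_mem=8192,max_mem=8192,proc_mode=ded,min_procs=1,desired_procs=4,max_procs=4"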
579. vironment setup requires the creation of the two partitions which are set for a VIOS environment After the partition profiles are created with the appropriate environment setting and physical resources that are assigned to support independent disk and network I O the VIOS operating systems then can be installed When you are planning a dual VIOS environment on a computer node your hardware configuration requires two partitions which require a physical Ethernet connection and disk resources available The following examples describe several of the possible hardware configurations to support a dual VIOS environment These examples are not intended to be all inclusive With the p270 Compute Node a typical basic configuration for a VIOS is 16 GB of memory a single internal disk and two cores of CPU 150 IBM Flex System p270 Compute Node Planning and Implementation Guide To support a dual VIOS environment the following hardware is required as a minimum gt An I P capable adapter for each VIOS partition EN2024 4 port 1Gb Ethernet Adapter EN4054 4 port 10Gb Ethernet Adapter CN4058 8 port 10Gb Converged Adapter gt An FC capable adapter for each VIOS partition FC3172 2 port 8Gb FC Adapter FC5054 4 port 16Gb FC Adapter gt Storage to host VIOS lf only internal based storage is used 2x HDD or 2x SSD and the IBM Flex System Dual VIOS Adapter installed so one disk is assigned per VIOS If internal
580. w Setting Partition Memory This section defines the memory allocation for the LPAR in the Memory Settings window as show in Figure 8 31 on page 381 Complete the following steps to set the partition memory 1 Specify the minimum desired and maximum memory requirements processors for the partitions shown Chapter 8 Virtualization 379 380 The following minimum desired and maximum settings are similar to their processor counterparts Minimum memory Represents the absolute memory that is required to make the partition active If the amount of memory that is specified under minimum is not available on the managed server the partition cannot become active Desired memory Specifies the amount of memory beyond the minimum that can be allocated to the partition If the minimum is set at 2 GB and the desired is set at 8 GB the partition in question can become active with anywhere between 2 MB and 8 GB Maximum memory Represents the absolute maximum amount of memory for this partition This value can be a value greater than or equal to the number that is specified in Desired memory In this example the number of dedicated processes can be varied 2 GB 8 GB dynamically without disruption Changing the minimum or maximum values of a running LPAR is an LPAR profile change and requires a stop and start of the LPAR 2 After you make your memory selections select Next to open the I O window as shown in Figure
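The same limits can later be adjusted in the partition profile from the HMC command line. The following sketch assumes the partition and profile names that are used in this chapter, with memory values in MB:

chsyscfg -r prof -m Server-7954-24X-SN107782B -i "name=itsoVIOS6A_new,lpar_name=itsoVIOS6A,min_mem=2048,desired_mem=8192,max_mem=8192"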
581. w workloads with the following simple VM management options gt Deploy application images across compute and storage resources gt Offer users self service for improved responsiveness gt Enable security through VM isolation project level user access controls gt Simplify deployment there is no need to know all the details of the infrastructure gt Protect your investment with support for existing virtualized environments gt Optimize performance on IBM systems with dynamic scaling expansive Capacity and continuous operation Improve efficiency with a private cloud that includes the following capabilities gt Delegate provisioning to authorized users to improve productivity gt Implement pay per use with built in workload metering gt Standardize deployment to improve compliance and reduce errors with policies and templates gt Simplify management of projects billing approvals and metering with an intuitive user interface gt Ease maintenance and problem diagnosis with integrated views of both physical and virtual resources For more information about IBM SmartCloud Entry on Flex System see this website http www ibm com systems flex smartcloud bto entry 52 IBM Flex System p270 Compute Node Planning and Implementation Guide Introduction to IBM Flex System IBM Flex System is a solution that consists of hardware software and expertise The IBM Flex System Enterprise Chassis the major hard
582. ware component is the next generation platform that provides new capabilities in many areas This chapter includes the following topics 3 1 IBM Flex System Enterprise Chassis on page 54 3 2 Compute nodes on page 56 3 3 I O modules on page 57 3 4 Systems Management on page 61 3 5 Power supplies on page 63 3 6 Cooling on page 69 YYYY YV Y Copyright IBM Corp 2013 All rights reserved 53 3 1 IBM Flex System Enterprise Chassis Figure 3 1 shows the front and rear views of the IBM Flex System Enterprise Chassis Figure 3 1 IBM Flex System Enterprise Chassis Front and rear The chassis provides 14 bays for standard width nodes four scalable I O switch modules and two Chassis Management Modules CMMs Current node configurations include standard width and double wide options The chassis supports other configurations such as double wide double high nodes such as the V7000 Storage Node Power and cooling can be scaled up as required in a modular fashion as more nodes are added Table 3 1 shows the specifications of the Enterprise Chassis Table 3 1 Enterprise Chassis specifications Machine type model System x ordering sales channel 8721 A1x or 8721 LRx Power Systems sales channel 7893 92X Form Formfactor 10 10 U rack mounted unit rack mounted unit Maximum number a 14 standard a bay or seven double wide two bays or three compute nodes that are doub
583. witches Configurations There are seven different configurations that are orderable within the PureFlex express offering These offerings cover various redundant and non redundant configurations with the different types of protocol and storage controllers Table 2 4 summarizes the PureFlex Express offerings Table 2 4 PureFlex Express Offerings Configuration 1a f2a ABA e Networking 10 GbE 10 GbE 10 GbE 1 GbE 1 GbE 10 GbE 10 GbE Ethernet Networking FCoE FCoE FCoE 16 Gb 16 Gb 16 Gb 16 Gb Fibre Channel Number of 1 2 2 4 4 4 4 Switches V7000 V7000 V7000 Storwize V7000 Storwize V7000 Storwize Storage node Storage Storage V7000 Storage V7000 Storage V7000 or Storwize Node Node Node Node V7000 1 Chassis with 2 Chassis management modules fans and power supple units PSUs None or 42 U or 25 U PDUs TF3 KVM Tray Optional Media DVD only DVD and Tape Enclosure optional V7000 Options Storage Options 24 HDD 22 HDD 2 SSD 20 HDD 4 SSD or Custom Storwize expansion limit to single rack in Express overflow storage rack in Enterprise nine units per controller Up to two Storwize V7000 controllers and up to nine IBM Flex System V7000 Storage Nodes V7000 VIOS AIX IBM i and Solutions Consultant Express on first Controller Content Nodes P260 p270 p460 x222 x240 x220 x440 P260 p270 p460 x222 x240 x220 x440 p460 x222 x240 x220 x440 Chapter 2 IBM PureFlex System 23 Cee Le CS CS CE a
584. witches 1 and 2 with switch 1 in chassis create a loop in a Layer 2 network see Topology 2 in Figure 5 1 on page 143 We must use Spanning Tree Protocol in that case as a loop prevention mechanism because a Layer 2 network cannot operate in a loop Assume that the link between enterprise switch 2 and chassis switch 1 is disabled by Spanning Tree Protocol to break a loop so traffic is going through the link between enterprise switch 1 and chassis switch 1 If there is a link failure Spanning Tree Protocol reconfigures the network and activates the previously disabled link The process of reconfiguration can take tenths of a second and the service is unavailable during this time Whenever possible plan to use trunking with VLAN tagging for interswitch connections which can help you achieve higher performance by increasing interswitch bandwidth You can also achieve higher availability by providing redundancy for links in the aggregation bundle STP modifications such as Port Fast Forwarding or Uplink Fast might help improve STP convergence time and the performance of the network infrastructure Additionally several instances of STP might run on the same switch simultaneously on a per VLAN basis that is each VLAN has its own copy of STP to load balance traffic across uplinks more efficiently For example assume that a switch has two uplinks in a redundant loop topology and several VLANs are implemented If single STP is used one o
585. with your POWER7 processor technology based servers or from other Linux distributors Chapter 5 Planning 133 Important For systems ordered with the Linux operating system IBM ships the most current version that is available from the distributor If you require another version than the one shipped by IBM you must obtain it by downloading it from the Linux distributors website Information concerning access to a distributor s website is on the product registration card that is delivered to you as part of your Linux operating system order For more information about the features and external devices that are supported by Linux see this website http www ibm com systems p os 1inux For more information about SUSE Linux Enterprise Server see this website http www novell com products server For more information about Red Hat Enterprise Linux Advanced Servers see this website http www redhat com rhel features Important Be sure to update your system with the latest Linux on Power service and productivity tools from the IBM website at http www14 software ibm com webapp set2 sas f lopdiags home html Full system partition planning In the full system partition installation you have several AIX version options as described in Operating system support on page 132 When you install AIX V6 1 TL8 and AIX V7 1 TL2 you can virtualize through WPARs as described in 10 2 Installing AIX on page 491 Older versions of
586. wo separate tasks or as one task The example that is presented here uses the single task approach Complete the following steps 1 From the Hosts view right click the wanted server to be updated as shown in Figure 7 64 on page 249 and select Release Management Acquire Updates to start the Acquire Updates wizard 248 IBM Flex System p270 Compute Node Planning and Implementation Guide anage Power Systems Resources Welcome Flex System Manager Version Power Systems Resources Comma E L Hosts H server 7954 24x sNio7z7e2 Performance Summary Search the table Search a Virtual Servers Select Hame Access State S Reference Code Prob La Operating Systems oo yp Related Resources fo Power Units Topology Perspectives Create Group Bh FShA Explorer Remove Add to Automation Hardware Information Invento rr Operations Power On Ot Import Updates by FTF Release Wanagement Security System Configuration System Status and Health Acquire Updates Senice and Suppor Show and Install Updates Figure 7 64 Acquiring firmware update for Power compute node Power Firmware Management Readiness Check Fr F FS YT F F F F F ral ii sa MT 2 Select the Import updates from the file system option and enter the complete path on the FSM to the update package then click OK as shown in Figure 7 65 Acquire Updates Select
587. work connectivity
Network connectivity in Power Systems compute nodes is provided by the I/O adapters that are installed in the nodes. The adapters are functionally similar to the CFFh cards that are used in BladeCenter servers. The Ethernet adapters that are currently supported by compute nodes are listed in Table 5-1. For more information about the supported expansion cards, see 4.9, "I/O adapters" on page 102.

Table 5-1 Supported Ethernet adapters
Feature code   Supported Ethernet adapters
Ethernet I/O adapters:
1762           IBM Flex System EN4054 4-port 10Gb Ethernet Adapter
1763           IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
Converged Ethernet I/O adapters:
EC24           IBM Flex System CN4058 8-port 10Gb Converged Adapter

5.2.1 Ethernet switch module connectivity
There are various I/O modules that can be used to provide network connectivity. These modules include Ethernet switch modules that provide integrated switching capabilities for the chassis, and pass-through modules that make internal compute node ports available external to the chassis. The use of the Ethernet switch modules might provide required or enhanced functions and simplified cabling. However, in some circumstances (for example, specific security policies or certain network requirements), it is not possible to use integrated switching capabilities, so pass-through modules are required. Make sure that the external interface ports of the switches that are selected are compati
The VIOS is automatically configured to own all of the I/O resources. The resources can be reconfigured as wanted after the VIOS is installed. The server can be configured to provide service to other logical partitions (LPARs) through its virtualization capabilities. However, all other LPARs can have a mix of physical and virtual adapters for disk access, network, and optical devices.

The IVM does not interact with the service processor of the system. A specific device that is named Virtual Management Channel (VMC) was developed on the VIOS to enable a direct hypervisor configuration without requiring more network connections. This device is activated by default when the VIOS is installed as the first partition.

The VMC enables IVM to provide the following basic logical partitioning functions:
> Creating and maintaining logical partitions in a managed system
> Displaying managed system resources and status
> Opening a virtual terminal for each partition
> Displaying virtual operator panel values for each partition
> Performing dynamic LPAR (DLPAR) operations
> Managing virtualization features
> Acting as a service focal point for the individual compute node

Because IVM runs on an LPAR, there are limited service-based functions, and the CMM interface must be used. For example, power on the server by physically pushing the server power-on button or by remotely accessing the CMM, because IVM does not run while the compute node is powered off.
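Many of these IVM functions are also exposed through the IVM command-line interface on the VIOS. As a minimal sketch (assuming the padmin restricted shell on the IVM partition; option support differs slightly between the HMC and IVM, so verify the flags against the command help on your level), the following commands list the defined partitions and their current memory assignments:

  $ lssyscfg -r lpar -F name,lpar_id,state
  $ lshwres -r mem --level lpar -F lpar_name,curr_mem

The output can be used to confirm partition names and IDs before DLPAR operations are performed.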
2. Enter the license -accept command, then enter the cfgassist command, as shown in Figure 8-51.

  $ license -accept
  The license has been accepted
  $ cfgassist

Figure 8-51 VIOS first-time login: license -accept and TCP/IP configuration

3. Start the process of configuring the IP address of the VIOS by selecting the VIOS TCP/IP Configuration option, as shown in Figure 8-52, and press Enter.

Figure 8-52 Selecting VIOS TCP/IP Configuration (the Config Assist for VIOS menu, which also includes Set Date and TimeZone, Change Passwords, Set System Security, Install and Update Software, Storage Management, Devices, Performance, Role Based Access Control (RBAC), Shared Storage Pools, and Electronic Service Agent)

4. Select the wanted Ethernet interface, which is typically en0, as shown in Figure 8-53, and then press Enter.

Figure 8-53 shows the Available Network Interfaces list (en0 through en6, Standard Ethernet Network Interface).
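As an alternative to stepping through the cfgassist menus, the same TCP/IP settings can typically be applied in one step from the VIOS restricted shell with the mktcpip command. The following is a sketch only: the host name, addresses, and interface are placeholder values for this example, and the available flags should be confirmed with help mktcpip on your VIOS level.

  $ mktcpip -hostname p270vios1 -inetaddr 9.27.20.195 -interface en0 \
    -netmask 255.255.254.0 -gateway 9.27.20.1 -start

The -start flag requests that TCP/IP be started immediately, so the remaining setup can be performed over the network rather than from the console.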
In the next window, enter a machine name and the type of network connectivity that you are using. The system populates the remaining fields and opens the window that is shown in Figure 9-8 on page 447.

Figure 9-8 Adding a machine to the NIM environment (the Define a Machine SMIT panel; in this example, the NIM machine name is 7954AIXtest, the machine type is standalone, the hardware platform type is chrp, the 64-bit network boot kernel is used, and the machine is attached to NIM network Network1 over Ethernet with a default gateway of 9.27.20.1)

4. In the window that is shown in Figure 9-8 on page 447, enter the remainder of the information that is required.
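For reference, the same standalone machine definition can typically be made from the NIM master command line rather than through SMIT. The following sketch mirrors the values that are shown in Figure 9-8 (machine 7954AIXtest on NIM network Network1, chrp platform, 64-bit network boot kernel); a hardware address of 0 is used when the adapter MAC address is not known, and the attribute list should be verified against your AIX level.

  # nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
        -a if1="Network1 7954AIXtest 0" -a cable_type1=bnc 7954AIXtest

After the object is defined, lsnim -l 7954AIXtest displays the resulting machine definition for verification.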
Figure 7-81 Getting started with ESA wizard summary window (contact name, telephone and pager numbers, street address, alternate contact, and system location details)

In the Summary page, you can review all of the information that was provided to establish the settings for ESA. If any changes are required, click Back to return to the appropriate window, or click Finish to accept the settings and complete the wizard.

Click Finish to return to the Service and Support Manager window. The status should show Ready for Service and Support Manager, as shown in Figure 7-82.

Figure 7-82 shows the Service and Support Manager summary: serviceable problems for the monitored systems, Electronic Services links, and recent activity (in this example, nine monitored systems with no open serviceable problems).
way to power on a compute node is to click Chassis Management from the main menu and then click Compute Nodes, as shown in Figure 7-135.

Figure 7-135 Starting CMM Compute Nodes management (the Chassis Management menu of the IBM Chassis Management Module, which includes Compute Nodes, Storage Nodes, I/O Modules, Fans and Cooling, Power Modules and Management, Component IP Configuration, Chassis Internal Network, Hardware Topology, and Reports)

2. On the Compute Nodes page, click the wanted node, then click the Power and Restart drop-down menu and click Power On, as shown in Figure 7-136.
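The same power-on operation is also available from the CMM command-line interface over SSH. The following is a rough sketch (it assumes the compute node is in bay 2; confirm the syntax with the power command help on your CMM firmware level):

  system> power -state -T blade[2]
  system> power -on -T blade[2]

Here, blade[2] refers to the node bay number that is shown on the Compute Nodes page.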
your PDU and UPS configurations, see the following publications:
> IBM Flex System Power Guide, PRS4401:
  http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
> IBM Flex System Interoperability Guide, REDP-FSIG-00:
  http://www.redbooks.ibm.com/fsig

The chassis power system is designed for efficiency by using data center power, and consists of three-phase 60 A delta 200 VAC (North America) or three-phase 32 A wye 380 - 415 VAC (international). The chassis can also be fed from single-phase 200 - 240 VAC supplies, if required.

Power cabling for 32 A at 380 - 415 V three-phase International

As shown in Figure 5-6, one three-phase 32 A wye PDU (WW) can provide power feeds for two chassis. In this case, an appropriate 3-phase power cable is selected for the Ultra-Dense Enterprise PDU, which then splits the phases and supplies one phase to each of the three PSUs within each chassis. One three-phase 32 A wye PDU can power two fully populated chassis within a rack. A second PDU can be added for power redundancy from an alternative power source if the chassis is configured N+N.

Figure 5-6 shows a typical configuration with a 32 A 3-phase wye supply at 380 - 415 VAC (often termed WW or International), configured N+N. The figure identifies the IEC320 16A C19/C20 3 m power cables (46M4002), the 1U 9-C19/3-C13 Switched and Monitored DPI PDU (40K9611), and the IBM DPI 32a Cord (IEC 30
your server to work optimally as a network infrastructure server, setting up the best configuration options for BIND DNS and Squid.

Figure 12-13 Workloads to be installed menu (the wizard prompts you to choose the workloads to be configured on the target system; after the system is installed, run the IBM Installation Toolkit for PowerLinux Simplified Setup Tool to optimally configure your workloads, and see the "Minimum resource requirements for workloads" topic in the IBM information center at publib.boulder.ibm.com)

18. In the Installation sources selection page (see Figure 12-14), choose CD/DVD-ROM (the virtual optical drive in the LPAR, in our example) and then click Next.

Figure 12-14 Installation sources selection (the page provides a Linux distribution source and an IBM Installation Toolkit media source; each can be set to CD/DVD-ROM or to a custom network URL that is entered in URL notation, for example an nfs:// path)
The IBM Flex System p270 Compute Node supports Power Capping and Power Saving options that can be enabled via the IBM Flex System CMM. Power Capping enables a maximum power limit to be set for the entire compute node. This can be used in situations where power capping is required to guarantee maximum power draw, and therefore can be used to free up power capability for other compute nodes in the Flex System chassis. Power Capping affects CPU and memory frequency.

Power Capping options can be found in the CMM GUI by clicking Chassis Management → Compute Nodes and then clicking the node to show the compute node properties. Next, select the Power tab. Figure 4-29 shows the Power Capping options for compute nodes.

Figure 4-29 Power Capping Options for Compute Nodes (the Enable power capping check box and the Maximum Power Limit, with a range of 400 - 1095 W and a guaranteed range of 775 - 1095 W in this example; setting the Maximum Power Limit lower than the Maximum Allocated Power of 1095 W frees up power for reallocation within the power domain, the value might need to be reconfigured when DPS mode is disabled so that it does not exceed the DPS-disabled range, and the compute node must be powered on before the values can be set)

The following P
system support on page 127
> 4.15, "Warranty and maintenance agreements" on page 128
> 4.16, "Software support and remote technical support" on page 128

4.1 Overview

This section introduces the IBM Flex System p270 Compute Node. The system is shown in Figure 4-1.

Figure 4-1 The IBM Flex System p270 Compute Node

POWER7-based compute node

The IBM Flex System p270 Compute Node (7954-24X) is a standard-wide Power Systems compute node with two POWER7 processor module sockets, 16 memory slots, two I/O adapter slots, and options for up to two internal drives for local storage and another SAS controller.

The IBM Flex System p270 Compute Node includes the following features:
> Two dual-chip modules (DCMs), each consisting of two POWER7 chips, to provide a total of 24 POWER7 processing cores
> 16 DDR3 memory DIMM slots
> Support for Very Low Profile (VLP) and Low Profile (LP) DIMMs
> Two P7IOC I/O hubs
> A RAID-capable SAS controller that supports up to two solid-state drives (SSDs) or hard disk drives (HDDs)
> An optional second SAS controller on the IBM Flex System Dual VIOS Adapter to support dual VIO servers on internal drives
> Two I/O adapter slots
> Flexible Service Processor (FSP)
> IBM light path diagnostics
> USB 2.0 port

Figure 4
zation, TCP/UDP/IP stateless offload
> RoHS 6 compliant

Figure 4-22 on page 109 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter.

Figure 4-25 IB6132 2-port QDR InfiniBand Adapter for IBM Flex System

For more information about this adapter, see the IBM Redbooks Product Guide that is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0890.html?Open

4.9.10 IBM Flex System FC3172 2-port 8Gb FC Adapter

The IBM Flex System FC3172 2-port 8Gb FC Adapter from QLogic enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel storage area network (SAN). This adapter is based on the proven QLogic 2532 8 Gb ASIC design and works with any of the 8 Gb or 16 Gb IBM Flex System Enterprise Chassis Fibre Channel switch modules. Table 4-16 lists the ordering part number and feature code.

Table 4-16 Ordering part number and feature code

Feature code   Description
1764           IBM Flex System FC3172 2-port 8Gb FC Adapter

The IBM Flex System FC3172 2-port 8Gb FC Adapter has the following features:
> Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet protocol (FCP-IP)
> Support for point-to-point fabric connection (F-port fabric login)
> Support for Fibre Channel service classes 2 and 3
> Configuration and boot support in UEFI