
HP IBRIX 9720/9730 Storage Administrator Guide


Contents

Space requirements .... 95
Updating the Statistics tool configuration .... 96
Changing the Statistics tool configuration .... 96
Fusion Manager failover and the Statistics tool configuration .... 96
Checking the status of Statistics tool processes .... 97
Controlling Statistics tool processes .... 97
Troubleshooting the Statistics tool .... 97
Uninstalling the Statistics tool .... 98
10 Maintaining the system .... 99
  Shutting down the system .... 99
    Shutting down the IBRIX software .... 99
    Powering off the system hardware .... 100
  Starting up the system .... 101
    Powering on the system hardware .... 101
    Powering on after a power failure .... 101
    Starting the IBRIX software .... 101
  Powering file serving nodes on or off .... 101
  Performing a rolling reboot .... 102
  Starting and stopping processes .... 102
  Tuning file serving nodes and 9000 clients .... 102
  Managing segments .... 106
  Migrating segments .... 107
  Evacuating segments and removing storage from the cluster .... 109
  Removing a node from the cluster .... 112
  Maintaining networks .... 112
    Cluster and user network interfaces .... 112
    Adding user network interfaces .... 112
    Setting network interface options in the configuration database .... 113
    Preferring network interfaces .... 114
    Unpreferring network interfaces .... 115
    Making network changes .... 115
IMPORTANT: It is important to keep regular backups of the cluster configuration. See "Backing up the Fusion Manager configuration" (page 61) for more information.

System components

IMPORTANT: All software included with the IBRIX 9720/9730 Storage is for the sole purpose of operating the system. Do not add, remove, or change any software unless instructed to do so by HP-authorized personnel.

For information about 9730 system components and cabling, see "IBRIX 9730 component and cabling diagrams" (page 206). For information about 9720 system components and cabling, see "The IBRIX 9720 component and cabling diagrams" (page 215). For a complete list of system components, see the HP IBRIX 9000 Storage QuickSpecs, which are available at http://www.hp.com/go/StoreAll.

HP IBRIX software features

HP IBRIX software is a scale-out, network-attached storage solution that includes a parallel file system for clusters, an integrated volume manager, high-availability features such as automatic failover of multiple components, and a centralized management interface. IBRIX software can scale to thousands of nodes. Based on a segmented file system architecture, IBRIX software integrates I/O and storage systems into a single clustered environment that can be shared across multiple applications and managed from a central Fusion Manager. IBRIX software is designed to operate with high-performance computing applications that…
Health checks: ibrix_haconfig, ibrix_health, ibrix_healthconfig
Raw storage management: ibrix_pv, ibrix_vg, ibrix_lv
Fusion Manager operations: ibrix_fm; Fusion Manager tuning: ibrix_fm_tune
File system checks: ibrix_fsck
Kernel profiling: ibrix_profile
Cluster configuration: ibrix_clusterconfig
Configuration database consistency: ibrix_dbck
Shell task management: ibrix_shell

The following operations can be performed only from the GUI:
- Scheduling recurring data validation scans
- Scheduling recurring software snapshots

Using the GUI

The GUI is a browser-based interface to the Fusion Manager. See the release notes for the supported browsers and other software required to view charts on the dashboard. You can open multiple GUI windows as necessary.

If you are using HTTP to access the GUI, open a web browser and navigate to the following location, specifying port 80:

    http://<management_console_IP>:80/fusion

If you are using HTTPS to access the GUI, navigate to the following location, specifying port 443:

    https://<management_console_IP>:443/fusion

In these URLs, <management_console_IP> is the IP address of the Fusion Manager user VIF.

The GUI prompts for your user name and password. The default administrative user is ibrix. Enter the password that was assigned to this user when the system was installed. You can change the password using the Linux passwd command. To allow other users to…
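For example, if the Fusion Manager user VIF were 192.168.100.10 (an illustrative address, not one from this guide), the GUI would be reached at either of the following URLs:

    http://192.168.100.10:80/fusion
    https://192.168.100.10:443/fusion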
Associating events and trapsinks .... 58
Defining views .... 59
Configuring groups .... 59
Deleting elements of the SNMP configuration .... 60
Listing SNMP configuration information .... 60
6 Configuring system backups .... 61
  Backing up the Fusion Manager configuration .... 61
  Using NDMP applications .... 61
    Configuring NDMP parameters on the cluster .... 62
    NDMP process management .... 62
      Viewing or canceling NDMP sessions .... 62
    Starting, stopping, or restarting an NDMP Server .... 63
    Viewing or rescanning tape and media changer devices .... 63
    NDMP events .... 64
7 Creating host groups for 9000 clients .... 65
  How host groups work .... 65
  Creating a host group .... 65
  Adding an 9000 client to a host group .... 66
  Adding a domain rule to a host group .... 66
  Viewing host groups .... 66
  Deleting host groups .... 66
  Other host group operations .... 67
8 Monitoring cluster operations .... 68
  Monitoring 9720/9730 hardware .... 68
  Monitoring hardware components .... 71
  Monitoring blade…
NOTE: Be sure to read all instructions before starting the upgrade procedure.

Standard online upgrade

The management console must be upgraded first. You can then upgrade file serving nodes and 9000 clients in any order.

Upgrading the management console

Complete the following steps on the Management Server machine or blade:
1. Disable automated failover on all file serving nodes:
   <ibrixhome>/bin/ibrix_server -m -U
2. Verify that automated failover is off:
   <ibrixhome>/bin/ibrix_server -l
   In the output, the HA column should display "off".
3. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
4. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
5. Change to the installer directory if necessary and run the upgrade:
   ./ibrixupgrade -f
6. Verify that the management console is operational:
   /etc/init.d/ibrix_fusionmanager status
   The status command should report that the correct services are running. The output is similar to this:
   Fusion Manager Daemon (pid 18748) running...
7. Check /usr/local/ibrix/log/fusionserver.log for errors.
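A minimal consolidated sketch of the sequence above, assuming the tarball was expanded in /root and <ibrixhome> is /usr/local/ibrix (both illustrative; the tarball filename is a placeholder):

    # steps 1-2: disable automated failover and confirm HA shows "off"
    /usr/local/ibrix/bin/ibrix_server -m -U
    /usr/local/ibrix/bin/ibrix_server -l

    # step 3: preserve the previous installer directory
    mv /root/ibrix /root/ibrix.old

    # steps 4-5: expand the new distribution and run the upgrade
    tar -xzf ibrix-distribution.tar.gz -C /root   # creates /root/ibrix
    cd /root/ibrix && ./ibrixupgrade -f

    # steps 6-7: verify the management console and check the log
    /etc/init.d/ibrix_fusionmanager status
    grep -i error /usr/local/ibrix/log/fusionserver.log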
This section describes how to change IP addresses, change the cluster interface, manage routing table entries, and delete a network interface.

Changing the IP address for a Linux 9000 client

After changing the IP address for a Linux 9000 client, you must update the IBRIX software configuration with the new information to ensure that the Fusion Manager can communicate with the client. Use the following procedure (a worked example appears at the end of this section):
1. Unmount the file system from the client.
2. Change the client's IP address.
3. Reboot the client or restart the network interface card.
4. Delete the old IP address from the configuration database:
   ibrix_client -d -h CLIENT
5. Re-register the client with the Fusion Manager:
   register_client -p console_IPAddress -c clusterIF -n ClientName
6. Remount the file system on the client.

Changing the cluster interface

If you restructure your networks, you might need to change the cluster interface. The following rules apply when selecting a new cluster interface:
- The Fusion Manager must be connected to all machines, including standby servers, that use the cluster network interface. Each file serving node and 9000 client must be connected to the Fusion Manager by the same cluster network interface.
- A Gigabit (or faster) Ethernet port must be used for the cluster interface.
- 9000 clients must have network connectivity to the file serving nodes that manage their data and to the standbys for those servers. This traffic can use the…
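As an illustration of the client IP-change procedure, the commands for a hypothetical client named client1 with cluster interface eth0, registering against a Fusion Manager at 192.168.100.10, might look like the following (all names and addresses are placeholders, and the mountpoint is an assumption for this example):

    # steps 1-3: unmount, change the address in the OS, restart networking
    umount /mnt/ibfs1
    # ...update the client's IP address in the OS network configuration...
    service network restart

    # steps 4-5: remove the stale registration, then re-register
    ibrix_client -d -h client1
    register_client -p 192.168.100.10 -c eth0 -n client1

    # step 6: remount the file system using your usual mount procedure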
Upgrading file serving nodes

After the management console has been upgraded, complete the following steps on each file serving node:
1. From the management console, manually fail over the file serving node:
   <ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME
   The node reboots automatically.
2. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
3. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
4. Change to the installer directory if necessary and execute the following command:
   ./ibrixupgrade -f
   The upgrade automatically stops services and restarts them when the process is complete.
5. When the upgrade is complete, verify that the IBRIX software services are running on the node:
   /etc/init.d/ibrix_server status
   The output is similar to the following. If the IAD service is not running on your system, contact HP Support.
   IBRIX Filesystem Drivers loaded
   ibrcud is running (pid 23325)
   IBRIX IAD Server (pid 23368) running...
6. Verify that the ibrix and ipfs services are running:
   lsmod | grep ibrix
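A sketch of the per-node sequence follows; the failover step runs from the management console, the remainder on the node itself. The hostname fsn1 and paths are illustrative:

    # from the management console: fail over the node (it reboots automatically)
    <ibrixhome>/bin/ibrix_server -f -p -h fsn1

    # on the node: preserve the old installer, expand the new one, upgrade
    mv /root/ibrix /root/ibrix.old
    tar -xzf ibrix-distribution.tar.gz -C /root
    cd /root/ibrix && ./ibrixupgrade -f

    # verify services and kernel modules after the upgrade
    /etc/init.d/ibrix_server status
    lsmod | grep ibrix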
- The HP P700m reports a POST error (visible using the TFT monitor/keyboard).
- A server crashes when the cciss driver loads (visible using the TFT monitor/keyboard). Sometimes this happens to all servers in the system.
- On X9720 systems, no controllers are seen when you run the exds_stdiag command.

The underlying causes of these problems differ; however, the recovery process is similar in all cases. Do not replace the HP P700m until you have worked through the process described here. In general terms, the solution is to reset the SAS switches and, if that fails, to reboot each X9700c controller until you locate a controller that is interfering with the SAS fabric. If your system is in production, follow the steps below to minimize downtime on the system:
1. Log in to the Onboard Administrator and run the show bay info all command. Compare entries for the affected blade and working blades. If the entries look different, reboot each Onboard Administrator, one at a time. Re-seat or replace the P700m in the affected server blade.
2. Run exds_stdiag. If exds_stdiag detects the same capacity blocks and X9700c controllers as the other server blades, the procedure is completed; otherwise, continue to the next step.
3. If all servers are affected, shut down all servers; if a subset of servers is affected, shut down the subset. Using the OA, log in to SAS switch 1 and reset it. Wait for it to reboot. Reset SAS switch 2. Wait…
ibrix_server -l
3. On the current active management console, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
4. On the current active management console, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
5. Change to the installer directory on the active management console if necessary. Run the following command:
   ./auto_ibrixupgrade
   The upgrade script performs all necessary upgrade steps on every server in the cluster and logs progress in the upgrade.log file. The log file is located in the installer directory.

Manual upgrades

Upgrade paths

There are two manual upgrade paths: a standard upgrade and an agile upgrade.
- A standard upgrade is used on clusters having a dedicated Management Server machine or blade running the management console software.
- An agile upgrade is used on clusters having an agile management console configuration, where the management console software is installed in an active/passive configuration on two cluster nodes.

To determine whether you have an agile management console configuration…
pdsh -a "/etc/init.d/ibrix_server status" | dshbak
10. Wait for the Fusion Manager to report that all file serving nodes are down:
    # ibrix_server -l
11. Shut down all nodes other than the node hosting the active Fusion Manager:
    pdsh -w HOSTNAME "shutdown -t now now"
    For example:
    # pdsh -w x850s3 shutdown -t now now
    # pdsh -w x850s2 shutdown -t now now
12. Shut down the node hosting the active agile Fusion Manager:
    shutdown -t now now
13. Use ping to verify that the nodes are down. For example:
    # ping x850s2
    PING x850s2.l3domain.l3lab.com (12.12.80.102) 56(84) bytes of data.
    From x850s1.l3domain.l3lab.com (12.12.82.101) icmp_seq=2 Destination Host Unreachable
    If you are unable to shut down a node cleanly, use the following command to power the node off using the iLO interface:
    ibrix_server -P off -h HOSTNAME
14. Shut down the Fusion Manager services and verify:
    # /etc/init.d/ibrix_fusionmanager stop
    # /etc/init.d/ibrix_fusionmanager status
15. Shut down the node hosting the active Fusion Manager:
    shutdown -t now now
    Broadcast message from root (pts/4) (Mon Mar 12 17:10:13 2012):
    The system is going down to maintenance mode NOW!
    When the command finishes, the server is powered off (standby).

Powering off the system hardware

After shutting down the IBRIX software, power off the system hardware as follows:
1. Power off the 9100c controllers.
2. Power off the 9200cx disk capacity block(s).
3. Power off the file serving nodes.
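A condensed sketch of the software shutdown, assuming x850s1 hosts the active Fusion Manager and x850s2/x850s3 are the remaining nodes (hostnames taken from the example above):

    pdsh -a "/etc/init.d/ibrix_server status" | dshbak   # check status on all nodes
    ibrix_server -l                                      # wait until all FSNs report down
    pdsh -w x850s3 "shutdown -t now now"                 # shut down the non-FM nodes
    pdsh -w x850s2 "shutdown -t now now"
    ping x850s2                                          # confirm each node is unreachable
    /etc/init.d/ibrix_fusionmanager stop                 # stop Fusion Manager services
    shutdown -t now now                                  # finally, power off the FM node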
1. If you are adding a new file serving node to the cluster, enable synchronization for the node. See "Enabling collection and synchronization" (page 92) for more information.
2. Add the file system to the Statistics tool. Run the following command on the node hosting the active Fusion Manager:
   /usr/local/ibrix/stats/bin/stmanage loadfm
   The new configuration is updated automatically on the other nodes in the cluster. You do not need to restart the collection process; collection continues automatically.

Changing the Statistics tool configuration

You can change the configuration only on the management node. To change the configuration, add a configuration parameter and its value to the /etc/ibrix/stats.conf file on the currently active node. Do not modify the /etc/ibrix/statstool.conf and /etc/ibrix/statstool.local.conf files directly.

You can set the following parameters to specify the number of reports that are retained:

Parameter            Report type retained     Default retention period
age_report_hourly    Hourly report            1 day
age_report_daily     Daily report             7 days
age_report_weekly    Weekly report            14 days
age_report_other     User-generated report    7 days

For example, for daily reports, the default of 7 days saves seven reports. To save only three daily reports, set the age_report_daily parameter to 3 days:

age_report_daily 3d

NOTE: You do not need to restart processes after changing the configuration. The updated configuration is collected automatica…
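As a sketch, a retention section of /etc/ibrix/stats.conf might look like the following, using the parameter syntax shown above (the specific values are illustrative, not recommendations):

    # keep two days of hourly reports and three daily reports
    age_report_hourly 2d
    age_report_daily 3d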
[Screenshot: General Tunings dialog box listing parameters with defaults and ranges, including gratarpinterval (30, range 1-33), hardwareStatusInterval (30, range 1-9999), hardwareStatusOn (false), healthOn (true), healthReport (10, range 1-9999), iobControlInterval (60, range 1-99), and logLevel (2, range 0-4)]

The Module Tunings dialog box adjusts various advanced parameters that affect server operations.

[Screenshot: Module Tunings dialog box listing advanced system module tunings with defaults in parentheses, including commit_watermark (50, range 0-100), create_sleep_time (10), lru_high_vm (20000), lru_low_vm (15000), disconnected_op_timeout (range 0-900), do_async_read (1, range 0-1), flushd_timeout (500, range 100-6000), and high_active_threads (range 1-80)]

On the Servers dialog box, select the servers to which the tunings should be applied.

[Screenshot: Modify Server(s) Wizard with General Tunings, Servers, IAD Tunings, and Module Tunings tabs; the Servers grid lists servers such as ib69s1 and ib69s2 to which tuning changes will be applied]

Tuning file serving nodes from the CLI

All Fusion Manager commands for tuning hosts include the -h HOSTLIST option, which supplies one or…
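For example, the following hedged sketch applies tuning options to two servers from the CLI, using the ibrix_host_tune syntax that appears in the recovery procedure later in this guide (the host names and option values here are illustrative):

    ibrix_host_tune -S -h fsn1,fsn2 -o "rpc_max_timeout=64,san_timeout=120"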
4. Double-click the first server name. Log in as normal.

NOTE: By default, the first port is connected with a dongle to the front of blade 1 (that is, server 1). If server 1 is down, move the dongle to another blade.

Using the serial link on the Onboard Administrator

If you are connected to a terminal server, you can log in through the serial link on the Onboard Administrator.

Booting the system and individual server blades

Before booting the system, ensure that all of the system components other than the server blades (the capacity blocks or performance modules, and so on) are turned on. By default, server blades boot whenever power is applied to the system performance chassis (c-Class Blade enclosure).

If all server blades are powered off, you can boot the system as follows:
1. Press the power button on server blade 1.
2. Log in as root to server 1.
3. Power on the remaining server blades:
   ibrix_server -P on -h hostname
   NOTE: Alternatively, press the power button on all of the remaining servers. There is no need to wait for the first server blade to boot.

Management interfaces

Cluster operations are managed through the IBRIX Fusion Manager, which provides both a GUI and a CLI. Most operations can be performed from either the GUI or the CLI. The following operations can be performed only from the CLI:
- SNMP configuration: ibrix_snmpagent, ibrix_snmpgroup, ibrix_snmptrap, ibrix_snmpuser, ibrix_snmpview
Storage: booting 16; components 12; features 12; installation 14; logging in 15; shut down hardware 100; software 12; startup 101
storage, remove from cluster 109
Subscriber's Choice, HP 177
symbols on equipment 234
system recovery 166
system startup after power failure 101

T
technical support, HP 176
  service locator website 177
troubleshooting 143
  escalating issues 146

U
upgrade60.sh utility 185
upgrades: 6.0 file systems 186; firmware 136; IBRIX software 5.5 release 193; IBRIX software 122, 179; IBRIX software 5.6 release 189; Linux 9000 clients 131, 184; pre-6.0 file systems 185, 186; Windows 9000 clients 132, 184
user network interface: add 112; configuration rules 115; defined 112; identify for 9000 clients 113; modify 113; prefer 114; unprefer 115

V
Virtual Connect domain, configure 162
virtual interfaces 35: bonded, create 36; client access 37; configure standby servers 36; guidelines 35

W
warning, rack stability 177
warnings: loading rack 234; weight 234
websites: HP 177; HP Subscriber's Choice for Business 177
weight warning 234
Windows 9000 clients, upgrade 132, 184
"File system unmount issues" (page 134).
On 9720 systems, delete the existing vendor storage:
ibrix_vs -d -n EXDS
The vendor storage will be registered automatically after the upgrade.

Performing the upgrade

This upgrade method is supported only for upgrades from IBRIX software 5.6.x to the 6.1 release. Complete the following steps:
1. Obtain the latest HP IBRIX 6.1 ISO image from the IBRIX 9000 software dropbox. Contact HP Support to register for the release and obtain access to the dropbox.
2. Mount the ISO image and copy the entire directory structure to the /root/ibrix directory on the disk running the OS.
3. Change directory to /root/ibrix on the disk running the OS, and then run chmod -R 777 on the entire directory structure.
4. Run the following upgrade script:
   ./auto_ibrixupgrade
   The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes. The Fusion Manager is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the Fusion Manager is installed in passive mode on that server.
5. Upgrade Linux 9000 clients. See "Upgrading Linux 9000 clients" (page 131).
6. If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.
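A condensed sketch of steps 2 through 4, assuming the ISO is mounted at /mnt/ibrix_iso (the ISO filename and mount point are illustrative):

    mkdir -p /mnt/ibrix_iso
    mount -o loop HP_IBRIX_6.1.iso /mnt/ibrix_iso
    cp -r /mnt/ibrix_iso/* /root/ibrix/
    cd /root/ibrix
    chmod -R 777 .
    ./auto_ibrixupgrade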
Lockup code  Description
Fn12         TLB entry has an invalid size
Fn13         No virtual address space available
Fn14         Table is out of entries
Fn20         Unknown DCR register
Fn21         Stack pointer is NULL
Fn22         Failed to create a thread
Fn24         Call to an OS service failed
Fn25         String to be printed is too long
Fn26         Bad status seen during OpProc state change
Fn27         Inject Faults command was received
Fn28         Valid RIS was not received from the other controller
Fn29         Fatal error in the CLI code
Fn30         DMA transfer failed
Fn31         DMA request allocation failed
Fn32         DMA CDB is invalid
Fn40         A caller specified a non-existent PCI core
Fn41         Number of PCI devices exceeds maximum
Fn51         Failed to clear NVRAM set-defaults flag
Fn52         A fatal exception occurred
Fn53         Firmware image failed to load
Fn54         Firmware failed to initialize memory
Fn60         SAS: Failure when reposting host credit
Fn61         SAS: An unexpected IOC status was returned
Fn62         SAS: A DevHandle value was reused

Lockup code  Description
Fn67         SAS: JBOD hotplug not supported
Fn68         SAS: target mode resources not allocated
Fn69         SAS: too many initiators
Fn70         Invalid firmware cloned
Hn00         DMA operation failed
Hn01         XOR diagnostics failed
Hn02         Problem with the DMA hardware
Hn10         Remote device space exceeded maximum
Hn11         Exceeded total PCI address space
Hn12         Incorrec…
For example, assume the location for a drive in the list is Port: 52, Box: 1, Bay: 7. To find the drive, go to Bay 7. The port number specifies the switch number and switch port: for Port 52, the drive is connected to port 2 on switch 5. For location Port: 72, Box: 1, Bay: 6, the drive is connected to port 2 on switch 7, in Bay 6.

- Fan: The Fan panel provides the following information about the fans in a sub-enclosure:
  - Status
  - UUID
  - Properties
- SEP: The SEP panel provides the following information about the storage enclosure processors in the sub-enclosure:
  - Status
  - Serial Number
  - Model
  - Firmware Version
- Temperature Sensor: The Temperature Sensor panel provides the following information about the temperature sensors in the sub-enclosure:
  - Status
  - Properties

Monitoring pools for a storage cluster

The Management Console lists a Pool node for each pool in the storage cluster. Select one of the Pool nodes to display information about that pool.

[Screenshot: Management Console tree showing Pool nodes under the Storage Cluster, with sub-components such as Storage Controller, Battery, and IO Cache Module]

When you select the Poo…
If you are connected to iLO and using the virtual console, you will lose the iLO connection when the platform scripts are executed. After a short period of time, you can again connect to the iLO and bring up the virtual console. When the configuration is complete, a message reporting the location of the log files appears:
- The logs are available at /usr/local/ibrix/autocfg/logs.
- The IBRIX 9730 configuration logs are available at /var/log/hp/platform/install/X9730_install.log.

Completing the restore

1. Ensure that you have root access to the node. The restore process sets the root password to hpinvent, the factory default.
2. Verify information about the node you restored:
   ibrix_server -f -p -M -N -h SERVERNAME
3. Review vendor storage information. Run the following command from the node hosting the active Fusion Manager:
   ibrix_vs -i
   The command reports status, UUIDs, firmware versions, and other information for servers, storage components, drive enclosures and components, volumes, and drives. It also shows the LUN mapping.
4. Run the following command on the node hosting the active Fusion Manager to tune the server blade for optimal performance:
   ibrix_host_tune -S -h <hostname_of_new_server_blade> -o "rpc_max_timeout=64,san_timeout=120"
5. On all surviving nodes, remove the ssh key for the hostname that you just recovered from the file /root/.ssh/known_hosts. The key will exist only on the nodes that previously accessed th…
Skip the Software Entitlement ID field; it is not currently used.

[Screenshot: Phone Home settings form with fields for Central Management Server IP, Read Community String, Write Community String, System Name, System Location, System Contact, Software Entitlement ID, and a required Choose Country value]

The time required to enable Phone Home depends on the number of devices in the cluster, with larger clusters requiring more time. To configure Phone Home settings from the CLI, use the following command:

ibrix_phonehome -c -i <IP Address of the Central Management Server> -z <Software Entitlement Id> -r <Read Community> -w <Write Community> -t <System Contact> -n <System Name> -o <System Location>

For example:

ibrix_phonehome -c -i 99.2.4.75 -P US -r public -w private -t Admin -n SYS01.US -o Colorado

Next, configure Insight Remote Support for the version of HP SIM you are using:
- HP SIM 7.1 and IRS 5.7: See "Configuring Insight Remote Support for HP SIM 7.1 and IRS 5.7" (page 28).
- HP SIM 6.3 and IRS 5.6: See "Configuring Insight Remote Support for HP SIM 6.3 and IRS 5.6" (page 30).

Configuring Insight Remote Support for HP SIM 7.1 and IRS 5.7

To configure Insight Remote Support, complete these steps:
1. Configure Entitlements for the servers and chassis in your system.
2. Discover devices on HP SIM.

Configuring Entitlements for servers and chassis

Expand Phone Home in the lower Navigat…
-m TRAPSINK

For example, to associate all Alert events and two Info events with a trapsink at IP address 192.168.2.32, enter:

ibrix_event -c -y SNMP -e ALERT,server.registered,filesystem.created -m 192.168.2.32

Use the ibrix_event -d command to dissociate events and trapsinks:

ibrix_event -d -y SNMP -e ALERT,INFO,EVENTLIST -m TRAPSINK

Defining views

A MIB view is a collection of paired OID subtrees and associated bitmasks that identify which sub-identifiers are significant to the view's definition. Using the bitmasks, individual OID subtrees can be included in or excluded from the view. An instance of a managed object belongs to a view if:
- The OID of the instance has at least as many sub-identifiers as the OID subtree in the view.
- Each sub-identifier in the instance and the subtree match when the bitmask of the corresponding sub-identifier is nonzero.

The Fusion Manager automatically creates the excludeAll view that blocks access to all OIDs. This view cannot be deleted; it is the default read and write view if one is not specified for a group with the ibrix_snmpgroup command. The catch-all OID and mask are:

OID = .1
Mask = 1

Consider these examples, where instance .1.3.6.1.2.1.1 matches, instance .1.3.6.1.4.1 matches, and instance .1.2.6.1.2.1 does not match:

OID = .1.3.6.1.4.1.18997, Mask = 1.1.1.1.1.1.1
OID = .1.3.6.1.2.1, Mask = 1.1.0.1.0.1

To add a pairing of an OID subtree value and a mask value to a new o…
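As a worked illustration of the matching rules, compare instances against the second example pairing position by position, skipping positions where the mask bit is 0:

    OID subtree:  .1 .3 .6 .1 .2 .1
    Mask:          1  1  0  1  0  1

    Instance .1.3.6.1.2.1.1:  1=1, 3=3, (skipped), 1=1, (skipped), 1=1   -> matches
    Instance .1.2.6.1.2.1:    1=1, but 2 differs from 3 where mask is 1  -> does not match

The first instance also satisfies the length rule (seven sub-identifiers against a six sub-identifier subtree), so it belongs to the view.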
…household waste. Instead, you should protect human health and the environment by handing the equipment over to a designated collection facility that handles the recycling of electrical and electronic equipment. For more information, contact the company that collects and hauls your household waste.

Danish recycling notice

Disposal of used equipment by users in private households in the European Union: This symbol means that the product must not be disposed of with other household waste. Instead, you must protect human health and the environment by handing in your used equipment at a designated collection point for the recycling of used electrical and electronic equipment. Contact your nearest sanitation department for further information.

Dutch recycling notice

Collection of discarded equipment from private households in the European Union: This symbol means that the product may not be disposed of with other household waste. Protect health and the environment by handing in discarded equipment at a collection point designated for the recycling of discarded electrical and electronic equipment. For more information, contact your municipal sanitation service.

Estonian recycling notice

Disposal of equipment to be discarded by users in private households in the European Union: This symbol indicates that the device must not be disposed of with household waste. To protect human health and the environment, the equipment to be discarded must be…
…not be liable for technical or editorial errors or omissions contained herein.

Acknowledgments

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. UNIX is a registered trademark of The Open Group.

Warranty

WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website: http://www.hp.com/go/storagewarranty

Revision History

Edition  Date            Software Version  Description
1        December 2009   5.3.1             Initial release of the IBRIX 9720 Storage
2        April 2010      5.4               Added network management and Support ticket
3        August 2010     5.4.1             Added Fusion Manager backup, migration to an agile Fusion Manager configuration, software upgrade procedures, and system recovery procedures
4        August 2010     5.4.1             Revised upgrade procedure
5        December 2010   5.5               Added information about NDMP backups and configuring virtual interfaces, and updated cluster procedures
6        March 2011      5.5               Updated segment evacuation information
7        April 2011      5.6               Revised upgrade procedure
8        September 2011  6.0               Added or updated information about the agile Fusion Manager, Statistics tool, Ibrix Collect, event notification, capacity block installation, NTP servers, and upgrades
9        June 2012       6.1               Added or updated information about 9730 systems, hardware monitoring, segment evacuation, HP Insight Remote Support, software upgrades, events, and the Statistics tool
10       December 2012   6.2               Added or updated informa…
…the settings to all servers. This step resyncs the local user database.
2. Run the following command:
   ibrix_httpconfig -R -h HOSTNAME
3. Verify that HTTP services have been restored. Use the GUI or CLI to identify a share served by the restored node, and then browse to the share. All Vhosts and HTTP shares should now be restored on the node.

Restore FTP services

Complete the following steps:
1. Take the appropriate actions:
   - If Active Directory authentication is used, join the restored node to the AD domain manually.
   - If Local user authentication is used, create a temporary local user on the GUI and apply the settings to all servers. This step resynchronizes the local user database.
2. Run the following command:
   ibrix_ftpconfig -R -h HOSTNAME
3. Verify that FTP services have been restored. Use the GUI or CLI to identify a share served by the restored node, and then browse to the share. All Vhosts and FTP shares should now be restored on the node.

Troubleshooting

iLO remote console does not respond to keystrokes

You need to use a local terminal when performing a recovery because networking has not yet been set up. Occasionally, when using the iLO integrated remote console, the console will not respond to keystrokes. To correct this situation, remove and reseat the blade. The iLO remote console will then respond properly. Alternatively, you can use a local KVM to perform the recovery.

17 Support and oth…
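A brief sketch of the restore commands for both protocols on a recovered node named fsn1 (the hostname is illustrative):

    ibrix_httpconfig -R -h fsn1   # restore HTTP Vhosts and shares
    ibrix_ftpconfig -R -h fsn1    # restore FTP Vhosts and shares

After each command, browse to a share served by the node to confirm the Vhosts and shares are back.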
…then the fault has been corrected, the system has returned to normal, and you can proceed to step 11.
- If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.

8. One of the I/O modules may have failed even though its amber LED is not on. Replace the I/O modules one by one, as follows:
   a. Remove the left I/O module (top or bottom, as identified in step 4) and replace it with a new module as follows:
      a. Detach the SAS cable connecting the module to the X9700c controller.
      b. Ensure that the disk drawer is fully pushed in and locked.
      c. Remove the I/O module.
      d. Replace it with a new I/O module (it will not engage with the disk drawer unless the drawer is fully pushed in).
      e. Re-attach the SAS cable. Ensure it is attached to the IN port (the bottom port).
   b. Re-seat the appropriate X9700c controller, as described in the section "Re-seating an X9700c controller" (page 157).
   c. Wait for the controller to boot.
      - If the seven-segment display shows "on", then the fault has been corrected, the system has returned to normal, and you can proceed to step 11.
      - If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
   d. If the fault does not clear, remove the left I/O module and reinsert the original I/O module.
   e. Re-seat the appropriate X9700c controller, as described in the section "Re-seating an X9700c controller" (page 157).
   f. Wait…
…tunings, network interface preferences, and allocation policies that have been set on their new host group, either restart IBRIX software services on the clients or execute the following commands locally:
- ibrix_lwmount -a, to force the client to pick up mounts or allocation policies
- ibrix_lwhost -a, to force the client to pick up host tunings

To delete a host group using the CLI:

ibrix_hostgroup -d -g GROUPNAME

Other host group operations

Additional host group operations are described in the following locations:
- Creating or deleting a mountpoint, and mounting or unmounting a file system: see "Creating and mounting file systems" in the HP IBRIX 9000 Storage File System User Guide
- Changing host tuning parameters: see "Tuning file serving nodes and 9000 clients" (page 102)
- Preferring a network interface: see "Preferring network interfaces" (page 114)
- Setting allocation policy: see "Using file allocation" in the HP IBRIX 9000 Storage File System User Guide

8 Monitoring cluster operations

Monitoring 9720/9730 hardware

The GUI displays status, firmware versions, and device information for the servers, chassis, and system storage included in 9720 and 9730 systems. The Management Console displays a top-level status of the chassis, server, and storage hardware components. You can also drill down to view the status of individual chassis, server, and storage sub-components.

Monitoring…
…10 technology comprises the Flex-10 NICs and the Flex-10 Virtual Connect modules in interconnect bays 1 and 2 of the performance chassis. Each Flex-10 NIC is configured to represent four physical interfaces (NIC devices), also called FlexNICs, with a total bandwidth of 10Gbps. FlexNICs are configured as follows on an IBRIX 9720 Storage:

Device  Built-in NIC  Port        Speed  Purpose
eth0    1             1 physical  1Gbps  Management network
eth1    2             1 physical  9Gbps  Site network
eth2    1             2 virtual   9Gbps  Site network
eth3    2             2 virtual   1Gbps  Management network
eth4    1             3 virtual          not used
eth5    2             3 virtual          not used
eth6    1             4 virtual          not used
eth7    2             4 virtual          not used

The IBRIX 9720 Storage automatically reserves eth0 and eth3 and creates a bonded device, bond0. This is the management network. Although eth0 and eth3 are physically connected to the Flex-10 Virtual Connect (VC) modules, the VC domain is configured so that this network is not seen by the site network.

With this configuration, eth1 and eth2 are available for connecting each server blade to the site network. To connect to the site network, you must connect one or more of the allowed ports as uplinks to your site network. These are the ports marked in green in "Virtual Connect Flex-10 Ethernet module cabling: Base cabinet" (page 225). If you connect several ports to the same switch in the site network, all ports must use the same media t…
[Screenshot: Ibrix Collect panel listing data collections with their collection times, states (Collected or Downloaded), initiators, and sizes]

The data is stored locally on each node in a compressed archive file, <nodename>_<filename>_<timestamp>.tgz, under /local/ibrixcollect.

3. Enter the name of the zip file that contains the collected data. The default location for this zip file on the active Fusion Manager node is /local/ibrixcollect/archive.

[Screenshot: dialog box prompting for the name of the file that will contain the collected logs; by default, the file is saved in /local/ibrixcollect/archive on the active Fusion Manager]

4. Click Okay.

To collect logs and command results using the CLI, use the following command:

ibrix_collect -c -n NAME

NOTE: Only one manual collection of data is allowed at a time.

NOTE: When a node restores from a system crash, the vmcore under the /var/crash/<timestamp> directory is processed. Once processed, the directory is renamed /var/crash/<timestamp>_PROCESSED.

HP Support may request that…
On the active Fusion Manager node, disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
Run the following command to verify that automated failover is off; in the output, the HA column should display "off":
<ibrixhome>/bin/ibrix_server -l
Unmount file systems on Linux 9000 clients:
ibrix_umount -f MOUNTPOINT
Stop the SMB, NFS, and NDMP services on all nodes. Run the following commands on the node hosting the active Fusion Manager:
ibrix_server -s -t cifs -c stop
ibrix_server -s -t nfs -c stop
ibrix_server -s -t ndmp -c stop
If you are using SMB, verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill -9 to stop any likewise services that are still running.
If you are using NFS, verify that all NFS processes are stopped:
ps -ef | grep nfs
If necessary, use the following command to stop NFS services:
/etc/init.d/nfs stop
Use kill -9 to stop any NFS processes that are still running.
If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
lsof </mountpoint>
Use kill -9 to stop any processes that still have open file handles on the file systems.
Unmount each file system manually (a consolidated sketch of this quiesce sequence follows below):
ibrix_umount -f FSNAME
Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See…
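The quiesce sequence can be summarized as the following sketch, run from the node hosting the active Fusion Manager (the mountpoint and file system name are illustrative):

    <ibrixhome>/bin/ibrix_server -m -U      # disable automated failover
    <ibrixhome>/bin/ibrix_server -l         # confirm the HA column shows "off"
    ibrix_umount -f /mnt/ibfs1              # unmount Linux 9000 clients first
    ibrix_server -s -t cifs -c stop         # stop SMB
    ibrix_server -s -t nfs -c stop          # stop NFS
    ibrix_server -s -t ndmp -c stop         # stop NDMP
    ps -ef | grep likewise                  # verify SMB (likewise) services are down
    ps -ef | grep nfs                       # verify NFS processes are stopped
    lsof /mnt/ibfs1                         # find any remaining open file handles
    ibrix_umount -f ibfs1                   # unmount each file system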
.181  00:00:00:00:11  ib50-82/bond0:2  No
ib50-81  bond0:2  User                              00:00:00:00:11
ib50-82  bond0    Cluster  Up, LinkUp  172.16.0.82  00:00:00:00:12  172.16.0.254  No
ib50-82  bond0:1  User     Up, LinkUp  172.16.0.182 00:00:00:00:12  ib50-81/bond0:2  No
ib50-82  bond0:2  User                              00:00:00:00:12
ib50-81 [Active, Nonedit]  bond0:0  Cluster  Up, LinkUp (ActiveFM)  172.16.0.281

3. To assign the IFNAME default route for the parent cluster bond and the user VIFs assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt:
   ibrix_nic -r -n IFNAME -h HOSTNAME -A -R ROUTE_IP
4. Configure backup monitoring, as described in "Configuring backup servers" (page 36).

Creating a bonded VIF

NOTE: The examples in this chapter use the unified network and create a bonded VIF on bond0. If your cluster uses a different network layout, create the bonded VIF on a user network bond, such as bond1.

Use the following procedure to create a bonded VIF (bond0:1 in this example); a consolidated sketch follows the procedure:
1. If high availability (automated failover) is configured on the servers, disable it. Run the following command on the Fusion Manager:
   ibrix_server -m -U
2. Identify the bond0:1 VIF:
   ibrix_nic -a -n bond0:1 -h node1,node2,node3,node4
3. Assign an IP address to the bond0:1 VIFs on each node. In the command, -I specifies the IP address, -M specifies the netmask, and -B specifies the broadcast address:
   ibrix_nic -c -n bond0:1 …
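Putting the preceding steps together, a minimal sketch for creating and addressing the bond0:1 VIF on four nodes (node names, addresses, netmask, and broadcast address are illustrative):

    ibrix_server -m -U                                   # disable automated failover first
    ibrix_nic -a -n bond0:1 -h node1,node2,node3,node4   # create the VIF on each node
    ibrix_nic -c -n bond0:1 -h node1 -I 10.10.0.101 -M 255.255.0.0 -B 10.10.255.255
    ibrix_nic -c -n bond0:1 -h node2 -I 10.10.0.102 -M 255.255.0.0 -B 10.10.255.255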
2. If an issue occurs with bond0:2 on a server, the server (including its segment ownership and FM services) will fail over to the backup server, and that server will then handle SMB requests going through bond0:2. You can also fail over just the NIC to its standby NIC on the backup server.

HBA monitoring

This method protects server access to storage through an HBA. Most servers ship with an HBA that has two controllers, providing redundancy by design. Setting up IBRIX HBA monitoring is not commonly used for these servers. However, if a server has only a single HBA, you might want to monitor the HBA; then, if the server cannot see its storage because the single HBA goes offline or faults, the server and its segments will fail over.

You can set up automatic server failover and perform a manual failover if needed. If a server fails over, you must fail back the server manually.

When automatic HA is enabled, the Fusion Manager listens for heartbeat messages that the servers broadcast at one-minute intervals. The Fusion Manager initiates a server failover when it fails to receive five consecutive heartbeats. Failover conditions are detected more quickly when NIC HA is also enabled: server failover is initiated when the Fusion Manager receives a heartbeat message indicating that a monitored NIC might be down and the Fusion Manager cannot reach that NIC. When HBA monitoring is enabled, the Fusion Manager fails over the server when a heartbeat message indi…
…6.3 and IRSA 5.6
- HP SIM 7.1 and IRSA 5.7

IMPORTANT: Insight Remote Support Standard (IRSS) is not supported with IBRIX software 6.1 and later.

For product descriptions and information about downloading the software, see the HP Insight Remote Support Software web page:
http://www.hp.com/go/insightremotesupport
For information about HP SIM:
http://www.hp.com/products/systeminsightmanager
For IRSA documentation:
http://www.hp.com/go/insightremoteadvanced/docs

IMPORTANT: You must compile and manually register the IBRIX MIB file by using HP Systems Insight Manager:
1. Download ibrixMib.txt from /usr/local/ibrix/doc.
2. Rename the file to ibrixMib.mib.
3. In HP Systems Insight Manager, complete the following steps:
   a. Unregister the existing MIB by entering the following command:
      <BASE>\mibs> mxmib -d ibrixMib.mib
   b. Copy the ibrixMib.mib file to the <BASE>\mibs directory, and then enter the following commands:
      <BASE>\mibs> mcompile ibrixMib.mib
      <BASE>\mibs> mxmib -a ibrixMib.cfg

For more information about the MIB, see the "Compiling and customizing MIBs" chapter in the HP Systems Insight Manager User Guide, which is available at http://www.hp.com/go/insightmanagement/sim. Click Support & Documents, and then click Manuals. Navigate to the user guide.

Limitations

Note the following:
- For IBRIX systems, the HP Insight Remote Support implementation is limited to…
…9000 clients, the servers hosting the VIF should be configured in backup pairs. However, 9000 clients do not support backup NICs. Instead, 9000 clients should connect to the parent bond of the user VIF or to a different VIF.

Ensure that your parent bonds (for example, bond0) have a defined route:
1. Check for the default Linux OS route (gateway) for each parent interface bond that was defined during the HP IBRIX 9000 installation by entering the following command at the command prompt:
   route
   The output from the command is the following:

   Kernel IP routing table
   Destination   Gateway      Genmask         Flags  Metric  Ref  Use  Iface
   15.226.48.0   *            255.255.255.0   U      0       0    0    bond1
   172.16.0.0    *            255.255.248.0   U      0       0    0    bond0
   169.254.0.0   *            255.255.0.0     U      0       0    0    bond1
   default       15.226.48.1  0.0.0.0         UG     0       0    0    bond1

   The default destination is the default gateway route for Linux. The default destination, which was defined during the HP IBRIX 9000 installation, had the operating system default gateway defined, but not for IBRIX.
2. Display the network interfaces controlled by IBRIX by entering the following command at the command prompt:
   # ibrix_nic -l
   Notice if the ROUTE column is unpopulated for IFNAME:

   [root@ib50-80 ~]# ibrix_nic -l
   HOST     IFNAME   TYPE     STATE       IP_ADDRESS   MAC_ADDRESS     BACKUP_HOST  BACKUP_IF  ROUTE  VLAN_TAG  LINKMON
   ib50-81  bond0    Cluster  Up, LinkUp  172.16.0.81  00:00:00:00:11  172.16.0.254            No
   ib50-81  bond0:1  User     Up, LinkUp  172.16.0.…
[Screenshot: Tape Devices panel for NDMP backup, listing tape drives on host lmvm2 such as "TapeDrive HP Ultrium 3-SCSI" entries mapped to devices including /dev/nst1, /dev/sg2, /dev/nst2, /dev/sg3, and /dev/nst3, with Active Sessions, Session History, Tape Devices, and License views]

If you add a tape or media changer device to the SAN, click Rescan Device to update the list. If you remove a device and want to delete it from the list, reboot all of the servers to which the device is attached.

To view tape and media changer devices from the CLI, use the following command:

ibrix_tape -l

To rescan for devices, use the following command:

ibrix_tape -r

NDMP events

An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the GUI and can be viewed with the event command.

INFO events: Identify when major NDMP operations start and finish, and also report progress. For example:

7012: Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2011
7013: Total Bytes = 38274665923, Average throughput = 236600391 bytes/sec

WARN events: Indicate an issue with NDMP access, the environment, or NDMP operations. Be sure to review these events and take any necessary corrective actions. Following are some examples:

0000: Unauthorized NDMP Clie…
C IBRIX 9730 spare parts list .... 212
  HP X9730 Performance Chassis (QZ729A) .... 212
  HP IBRIX X9730 140 TB ML Storage 2xBL Performance Module (QZ730A) .... 212
  HP X9730 210 TB ML Storage 2xBL Performance Module (QZ731A) .... 213
  HP X9730 140 TB 6G ML Storage 2xBL Performance Module (QZ732A) .... 213
  HP X9730 210 TB 6G ML Storage 2xBL Performance Module (QZ733A) .... 214
D The IBRIX 9720 component and cabling diagrams .... 215
  Base and expansion cabinets .... 215
    Front view of a base cabinet .... 215
    Back view of a base cabinet with one capacity block .... 216
    Front view of a full base cabinet .... 217
    Back view of a full base cabinet .... 218
    Front view of an expansion cabinet .... 219
    Back view of an expansion cabinet with four capacity blocks .... 220
  Performance blocks (c-Class Blade enclosure) .... 220
    Front view of a c-Class Blade enclosure .... 220
    Rear view of a c-Class Blade enclosure .... 221
  …
[Screenshot: Entitlement form with fields for customer-entered serial number, customer-entered product number, system country code, entitlement type, entitlement ID, obligation ID, custom delivery ID, system site information, and primary and secondary customer and service contacts]

Verifying device entitlements

To verify the entitlement information in HP SIM, complete the following steps:
1. Go to Remote Support Configuration and Services and select the Entitlement tab.
2. Check the devices discovered.
   NOTE: If the system discovered on HP SIM does not appear on the Entitlement tab, click Synchronize RSE.
3. Select Entitle Checked from the Action List.
4. Click Run Action.
5. When the entitlement check is complete, click Refresh.
   NOTE: If the system discovered on HP SIM does not appear on the Entitlement tab, click Synchronize RSE.

The devices you entitled should be displayed as green in the ENT column on the Remote Support System List dialog box.

[Screenshot: Remote Support System List showing entitled devices with their product numbers, serial numbers, and system names]

If a device is red, verify that the customer-entered serial number and part number are correct, a…
…Fusion Manager, by issuing the following commands on the active FM server:
a. Enter the following command to set all instances of the Fusion Manager to nofmfailover mode:
   ibrix_fm -m nofmfailover -A
b. To restart the Fusion Manager, enter the following command:
   service ibrix_fusionmanager restart
c. Enter the following command to set all instances of the Fusion Manager to passive mode:
   ibrix_fm -m passive -A

Upgrading Linux 9000 clients

Be sure to upgrade the cluster nodes before upgrading Linux 9000 clients. Complete the following steps on each client:
1. Download the latest HP 9000 client 6.2 package.
2. Expand the tar file.
3. Run the upgrade script:
   ./ibrixupgrade -f
   The upgrade software automatically stops the necessary services and restarts them when the upgrade is complete.
4. Execute the following command to verify the client is running IBRIX software:
   /etc/init.d/ibrix_client status
   IBRIX Filesystem Drivers loaded
   IBRIX IAD Server (pid 3208) running...
   The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support.

Installing a minor kernel update on Linux clients

The 9000 client software is upgraded automatically when you install a compatible Linux minor kernel update. If you are planning to install a minor kernel update, first run the following command to verify that the update is compatible with the 9000 client software: /usr/local…
If you are running IBRIX 5.6, upgrade to 6.1 before upgrading to 6.2.

If you are upgrading from  Upgrade to         Where to find additional information
IBRIX version 5.4          IBRIX version 5.5  "Upgrading the IBRIX software to the 5.5 release" (page 193)
IBRIX version 5.5          IBRIX version 5.6  "Upgrading the IBRIX software to the 5.6 release" (page 189)
IBRIX version 5.6          IBRIX version 6.1  "Upgrading the IBRIX software to the 6.1 release" (page 179)

Upgrading the IBRIX software to the 6.1 release

This section describes how to upgrade to the latest IBRIX software release. The Fusion Manager and all file serving nodes must be upgraded to the new release at the same time. Note the following:
- Upgrades to the IBRIX software 6.1 release are supported for systems currently running IBRIX software 5.6.x and 6.x.
  NOTE: If your system is currently running IBRIX software 5.4.x, first upgrade to 5.5.x, then upgrade to 5.6.x, and then upgrade to 6.1. See "Upgrading the IBRIX software to the 5.5 release" (page 193). If your system is currently running IBRIX software 5.5.x, upgrade to 5.6.x and then upgrade to 6.1. See "Upgrading the IBRIX software to the 5.6 release" (page 189).
- Verify that the root partition contains adequate free space for the upgrade. Approximately 4GB is required.
- Be sure to enable password-less access among the cluster nodes before starting the upgrade.
- Do not change the active/passive Fusion Manager configuration durin…
On the CLI, run the following command:
ibrix_phonehome -d

Troubleshooting Insight Remote Support

Devices are not discovered on HP SIM
Verify that cluster networks and devices can access the CMS. Devices will not be discovered properly if they cannot access the CMS.

The maximum number of SNMP trap hosts has already been configured
If this error is reported, the maximum number of trapsink IP addresses have already been configured. For OA devices, the maximum number of trapsink IP addresses is 8. Manually remove a trapsink IP address from the device, and then rerun the Phone Home configuration to allow Phone Home to add the CMS IP address as a trapsink IP address.

A cluster node was not configured in Phone Home
If a cluster node was down during the Phone Home configuration, the log file will include the following message:
SEVERE: Sent event server.status.down: Server <server name> down
When the node is up, rescan Phone Home to add the node to the configuration. See "Updating the Phone Home configuration" (page 33).

Fusion Manager IP is discovered as Unknown
Verify that the read community string entered in HP SIM matches the Phone Home read community string. Also, run snmpwalk on the VIF IP and verify the information:
snmpwalk -v 1 -c <read community string> <FM VIF IP> 1.3.6.1.4.1.18997

Critical failures occur when discovering the 9720 OA
The 3Gb SAS switc…
[Screenshot: NICs panel for server ib69s1 listing Name, IP, Type, State, Route, Standby Server, and Standby Interface columns; bond0:2 is shown as an inactive standby User NIC, bond0:1 at 10.30.69.151 is a User NIC that is Up/LinkUp with standby server ib69s2 and standby interface bond0:1, bond0 at 10.30.69.1 is the Cluster NIC (Up/LinkUp), and the CIFS VIF bond0:0 at 10.30.69.131 is Up/LinkUp and active]

The NICs panel for ib69s2, the backup server, shows that bond0:2 is an inactive standby NIC and bond0:1 is an active NIC.

[Screenshot: NICs panel for server ib69s2 with the same columns; bond0 at 10.30.69.2 is the Cluster NIC (Up/LinkUp), bond0:1 at 10.30.69.161 is a User NIC that is Up/LinkUp with standby server ib69s1 and standby interface bond0:2, and bond0:2 is an inactive standby User NIC]

Changing the HA configuration

To change the configuration of a NIC, select the server on the Servers panel and then select NICs from the lower Navigator. Click Modify on the NICs panel. The General tab on the Modify NIC Properties dialog box allows you to change the IP address and other NIC properties. The NIC HA tab allows you to enable or disable HA monitoring and failover on the NIC, and to change or remove the standby NIC. You can also enable link state monitoring if it is supported on your cluster; see "Support for link state monitoring".

To view the power source for a server, select the server on the Servers panel and then select Power from the lower…
HP SIM uses the SNMP protocol to discover and identify IBRIX systems automatically. On HP SIM, open Options > Discovery > New, and then select "Discover a group of systems". On the Edit Discovery dialog box, enter the discovery name and the IP addresses of the devices to be monitored. For more information, see the HP SIM 6.3 documentation.

NOTE: Each device in the cluster should be discovered separately.

[Screenshot: New Discovery dialog box for discovering a group of systems or a single system, with the name x9320-node, a schedule to automatically execute discovery every 1 day at 11:30 AM, and a ping inclusion range of IP addresses, system host names, and/or hosts such as 10.2.4.76]

Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the device will be discovered as Unknown.

[Screenshot: Credentials dialog box, SNMP tab, with the read community string entered and the option to try other credentials that may apply]

The following example shows discovered devices on HP SIM 6.3. File serving nodes are discovered as "ProLiant server".

[Screenshot: HP SIM summary showing discovered systems (0 critical, 2 normal, 0 unknown of 7 total), including ProLiant DL380 G6 servers]
Description                                Spare part number  Customer self repair
SPS-BD POWER UID W/CABLE (X9700c)          399054-001         Optional
SPS-BD RISER X9700c                        399056-001         Optional
SPS-PWR ON/OFF BOARD W/CABLE               399055-001         Optional
SPS-BD 7-SEGMENT DISPLAY (X9700c)          399057-001         Optional
SPS-POWER SUPPLY X9700c                    405914-001         Mandatory
SPS-CA EXT MINI SAS 2M                     408767-001         Mandatory
SPS-CA EXT MINI SAS 4M                     408768-001         Mandatory
SPS-FAN SYSTEM X9700                       413996-001         Mandatory
SPS-RACKMOUNT KIT                          432461-001         Optional
SPS-BATTERY MODULE X9700                   436941-001         Mandatory
SPS-PWR SUPPLY X9700cx                     441830-001         Mandatory
SPS-BD BACKPLANE II X9700c                 454574-001         Optional
SPS-HW PLASTICS KIT                        441835-001         Mandatory
SPS-POWER SUPPLY 1200W                     449423-001         Optional
SPS-POWER BLOCK W/POWER B/P (X9700cx)      455974-001         Optional
SPS-HDD B/P W/CABLES (X9700cx)             455976-001         No
SPS-DRAWER ASSY X9700
SPS-BD LED PANEL W/CABLE (X9700cx)         455979-001         Optional
SPS-BD CONTROLLER 9100C (X9700c)           489833-001         Optional
SPS-DRV HD 1TB 7.2K DP SAS 3.5             461289-001         Mandatory
SPS-POWER UID BEZEL ASSEMBLY               466264-001         No
SPS-BD 2-PORT W/1.5 EXPAND (X9700cx)       498472-001         Mandatory

X9700 164TB Capacity Block (X9700c and X9700cx), AW598B

Description                                Spare part number  Customer self repair
SPS-DRV HD 2 TB 7.2K DP SAS 3.5            508010-001         Mandatory
HP M6412C DISK ENCLOSURE                   530834-001         No
SPS-CHASSIS X9700c                         530929-001         Optional
ACCESS PANEL                               531224-001         Mandatory

F Warnings and precautions

Electrostatic discharge…
…Statistics tool processes" (page 97) for information about starting and stopping the processes.
- If the Fusion Manager daemon is not running during the installation, Statstool is installed as passive. When the Fusion Manager acquires an active or passive state, the Statstool management console automatically changes according to the state of the Fusion Manager.

Enabling collection and synchronization

To enable collection and synchronization, configure synchronization between nodes. Run the following command on the active Fusion Manager node, specifying the node names of all file serving nodes:

/usr/local/ibrix/stats/bin/stmanage setrsync <node1_name> ... <nodeN_name>

For example:

# stmanage setrsync ibr-3-31-1 ibr-3-31-2 ibr-3-31-3

NOTE: Do not run the command on individual nodes. All nodes must be specified in the same command, and can be specified in any order. Be sure to use node names, not IP addresses.

To test the rsync mechanism, see "Testing access" (page 97).

Using the Statistics tool

Upgrading the Statistics tool from IBRIX software 6.0

The statistics history is retained when you upgrade to version 6.1 or later. The Statstool software is upgraded when the IBRIX software is upgraded, using the ibrix_upgrade and auto_ibrixupgrade scripts. Note the following:
- If statistics processes were running before the upgrade started, those processes will automatically restart after the upgrade completes successfully. If processe…
X9700c (array controller with 12 disk drives) .... 223
  Front view of an X9700c .... 223
  Rear view of an X9700c .... 223
X9700cx (dense JBOD with 70 disk drives) .... 223
  Front view of an X9700cx .... 224
  Rear view of an X9700cx .... 224
Cabling diagrams .... 224
  Capacity block cabling: Base and expansion cabinets .... 224
  Virtual Connect Flex-10 Ethernet module cabling: Base cabinet .... 225
  SAS switch cabling: Base cabinet .... 226
  SAS switch cabling: Expansion cabinet .... 226
E The IBRIX 9720 spare parts list .... 228
  The IBRIX 9720 Storage Base (AW548B) .... 228
  X9700 Expansion Rack .... 228
  X9700 Server Chassis .... 229
  X9700 Blade Server .... 229
  X9700 82TB Capacity Block (X9700c and X9700cx, AW551A) .... 230
  X9700 164TB Capacity Block (X9700c and X9700cx, AW598B) .... 231
F Warnings and precautions .... 233
  Electrostatic discharge information .... 233
    Preventing electrostatic discharge .... 233
    Grounding methods .... 233
  Equipment symbols .... 234
  Weight warning .... 234
  Rack warnings and precautions .... 234…
Up | 10.10.18.4 | Thu Oct 25 13:59:40 MDT 2012
Report Result: PASSED
Type: Server | State: Up, Loaded | Up time: 1699630.0 | Last Update: Thu Oct 25 13:59:40 MDT 2012 | Network: 10.10.18.4 | Thread: 64 | Protocol: true
CPU Information
Cpu (System, User, Util, Nice): 0, 0, 0, 0 | Load (1/3/15 min): 0.09, 0.05, 0.01 | Network (Bps): 1295 | Disk (Bps): 1024
Memory Information
Mem Total (KB): 8045992 | Mem Free (KB): 4190584 | Buffers (KB): 243312 | Cached (KB): 2858364 | Swap Total (KB): 14352376 | Swap Free (KB): 14352376
Version/OS Information
Fs Version: 6.2.338 (internal rev 130683 in SVN) | IAD Version: 6.2.338 | OS: GNU/Linux | OS Version: Red Hat Enterprise Linux Server release 5.5 (Tikanga) | Kernel Version: 2.6.18-194.el5 | Architecture: x86_64 | Processor: x86_64
Remote Hosts
Host | Type | Network | Protocol | Connection State
bv18-03 | Server | 10.10.18.3 | true | S_SET, S_READY, S_SENDHB
bv18-04 | Server | 10.10.18.4 | true | S_NEW
Check Results
Check: bv18-04 can ping remote segment server hosts
Check Description: Remote server bv18-03 pingable | Result: PASSED
Check: Iad's monitored NICs are pingable
Check Description: User nic bv18-04/bond1:2 pingable from host bv18-03 | Result: PASSED
Check: Physical volumes are readable
Check Description: Physical volume 0wndzX-STuL-wSIi-wc7w-12hv-JZ2g-Lj2JTf readable (/dev/mpath/mpath2) | Result: PASSED
Physical volume aoA402-Ilek-G9B2-HHyR-H5Y8-eexU-P6knhd readable
Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an HP authorized reseller install the part.
NOTE: For more information on static electricity, or for assistance with product installation, contact your HP authorized reseller.
Grounding methods
There are several methods for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm (plus or minus 10 percent) resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
• Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on conductive floors or dissipating floor mats.
• Use conductive field service tools.
• Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an HP authorized reseller install the part.
NOTE: For more information on static electricity, or for assistance with product installation, contact your HP authorized reseller.
Equipment symbols
If the following symbols are located on equipment, hazardous conditions could exist.
WARNING! Any enclosed
a bonded VIF and backup nodes for a unified network topology using the 10.10.x.y subnet. VLAN tagging is configured for hosts ib142-129 and ib142-131 on the .51 subnet.
1. Add the .51 interface with the VLAN tag:
ibrix_nic -a -n bond0.51 -h ib142-129
ibrix_nic -a -n bond0.51 -h ib142-131
2. Assign an IP address to the bond0.51 VIFs on each node:
ibrix_nic -c -n bond0.51 -h ib142-129 -I 15.226.51.101 -M 255.255.255.0
ibrix_nic -c -n bond0.51 -h ib142-131 -I 15.226.51.102 -M 255.255.255.0
3. Add the .51:2 VIF on top of the interface:
ibrix_nic -a -n bond0.51:2 -h ib142-131
ibrix_nic -a -n bond0.51:2 -h ib142-129
4. Configure backup nodes:
ibrix_nic -b -H ib142-129/bond0.51,ib142-131/bond0.51
ibrix_nic -b -H ib142-131/bond0.51,ib142-129/bond0.51
5. Create the user FM VIF:
ibrix_fm -c 15.226.51.125 -d bond0.51:1 -n 255.255.255.0 -v user
For more information about VLAN tagging, see the HP IBRIX Storage Network Best Practices Guide.
Support for link state monitoring
Do not configure link state monitoring for user network interfaces or VIFs that will be used for SMB or NFS. Link state monitoring is supported only for use with iSCSI storage network interfaces, such as those provided with 9300 Gateway systems.
4 Configuring failover
This chapter describes how to configure failover for agile management consoles, file serving nodes, ne
again.
Storage connection: For servers with HBA-protected Fibre Channel access, failure of the HBA triggers failover of the node to a designated standby server.
2 Getting started
This chapter describes how to log in to the system, boot the system and individual server blades, change passwords, and back up the Fusion Manager configuration. It also describes the IBRIX software management interfaces.
IMPORTANT: Follow these guidelines when using your system:
• Do not modify any parameters of the operating system or kernel, or update any part of the IBRIX 9720/9730 Storage, unless instructed to do so by HP; otherwise, the system could fail to operate properly.
• File serving nodes are tuned for file serving operations. With the exception of supported backup programs, do not run other applications directly on the nodes.
Setting up the IBRIX 9720/9730 Storage
An HP service specialist sets up the system at your site, including the following tasks:
Installation steps
• Before starting the installation, ensure that the product components are in the location where they will be installed. Remove the product from the shipping cartons, confirm the contents of each carton against the list of included items, check for any physical damage to the exterior of the product, and connect the product to the power and network provided by you.
• Review your server, network, and storage environment relevant to the HP E
are further broken down into categories describing the failover status of the node and the status of monitored NICs and HBAs.
State | Description
Up | Operational.
Up, Alert | Server has encountered a condition that has been logged. An event will appear in the Status tab of the GUI, and an email notification may be sent.
Up, InFailover | Server is powered on and visible to the Fusion Manager, and the Fusion Manager is failing over the server's segments to a standby server.
Up, FailedOver | Server is powered on and visible to the Fusion Manager, and failover is complete.
Down, InFailover | Server is powered down or inaccessible to the Fusion Manager, and the Fusion Manager is failing over the server's segments to a standby server.
Down, FailedOver | Server is powered down or inaccessible to the Fusion Manager, and failover is complete.
Down | Server is powered down or inaccessible to the Fusion Manager, and no standby server is providing access to the server's segments.
The STATE field also reports the status of monitored NICs and HBAs. If you have multiple HBAs and NICs and some of them are down, the state is reported as HBAsDown or NicsDown.
Monitoring cluster events
IBRIX software events are assigned to one of the following categories, based on the level of severity:
• Alerts: A disruptive event that can result in loss of access to file system data. For example, a
are the left and right pull-out drawers, respectively. The following diagram shows the numbering in an array:
1. Box 1 (X9700c)
2. Box 2 (X9700cx left drawer, as viewed from the front)
3. Box 3 (X9700cx right drawer, as viewed from the front)
An array normally has two controllers. Each controller has a battery-backed cache, and each controller has its own firmware. Normally, all servers should have two redundant paths to all arrays.
X9700c (array controller with 12 disk drives)
Front view of an X9700c
1. Bay 1  2. Bay 2  3. Bay 3  4. Bay 4  5. Power LED  6. System fault LED  7. UID LED  8. Bay 12
Rear view of an X9700c
1. Battery 1  2. Battery 2  3. SAS expander port 1  4. UID  5. Power LED  6. System fault LED  7. On/Off power button  8. Power supply 2  9. Fan 2  10. X9700c controller 2  11. SAS expander port 2  12. SAS port 1  13. X9700c controller 1  14. Fan 1  15. Power supply 1
X9700cx (dense JBOD with 70 disk drives)
NOTE: This component is also known as the HP 600 Modular Disk System. For an explanation of the LEDs and buttons on this component, see the HP 600 Modular Disk System User Guide at http://www.hp.com/support/manuals. Under Storage, click Disk Storage Systems; then, under Disk Enclosures, click HP 600 Modular Disk System.
Front view of an X9700cx
1. Drawer 1  2. Drawer 2
Rear view
hand it over. For more information, contact your household waste disposal service.
Greek recycling notice
Disposal of waste equipment by users in private households in the European Union: This symbol means that you must not dispose of the product with your household waste. Instead, protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For more information, contact your household waste disposal service.
Hungarian recycling notice
Disposal of waste materials in private households in the European Union: This symbol indicates that the equipment must not be disposed of together with household waste. Instead, by delivering the discarded equipment to a collection point designated for electrical and electronic waste, you protect human health and the environment. For more information, contact your local waste management company.
Italian recycling notice
Disposal of used equipment by users in private households in the European Union: This symbol warns against disposing of the pr
carry traffic between file serving nodes and clients. HP recommends that you configure one or more user network interfaces for this purpose.
To provide high availability for a user network, you should configure a bonded virtual interface (VIF) for the network and then set up failover for the VIF. This method prevents interruptions to client traffic. If necessary, the file serving node hosting the VIF can fail over to its backup server, and clients can continue to access the file system through the backup server.
IBRIX systems also support the use of VLAN tagging on the cluster and user networks. See "Configuring VLAN tagging" (page 38) for an example.
Network and VIF guidelines
To provide high availability, the user interfaces used for client access should be configured as bonded virtual interfaces (VIFs). Note the following:
• Nodes needing to communicate for file system coverage or for failover must be on the same network interface. Also, nodes set up as a failover pair must be connected to the same network interface.
• Use a Gigabit Ethernet port (or faster) for user networks.
• NFS, SMB, FTP, and HTTP clients can use the same user VIF. The servers providing the VIF should be configured in backup pairs, and the NICs on those servers should also be configured for failover. See "Configuring High Availability on the cluster" in the administrator guide for information about performing this configuration from the GUI.
• Linux and Windows
entire IBRIX software file systems or portions of a file system. You can use any supported NDMP backup application to perform the backup and recovery operations. In NDMP terminology, the backup application is referred to as a Data Management Application, or DMA. The DMA is run on a management station separate from the cluster and communicates with the cluster's file serving nodes over a configurable socket port.
The NDMP backup feature supports the following:
• NDMP protocol versions 3 and 4
• Two-way NDMP operations
• Three-way NDMP operations between two network storage systems
Each file serving node functions as an NDMP Server and runs the NDMP Server daemon (ndmpd) process. When you start a backup or restore operation on the DMA, you can specify the node and tape device to be used for the operation.
Following are considerations for configuring and using the NDMP feature:
• When configuring your system for NDMP operations, attach your tape devices to a SAN, and then verify that the file serving nodes to be used for backup/restore operations can see the appropriate devices.
• When performing backup operations, take snapshots of your file systems and then back up the snapshots.
• When directory tree quotas are enabled, an NDMP restore to the original location fails if the hard quota limit is exceeded. The NDMP restore operation first creates a temporary file and then restores a file to the temporary file. After this succeeds, the r
events. The following command associates all types of events to admin@hp.com:
ibrix_event -c -m admin@hp.com
The next command associates all Alert events and two Info events to admin@hp.com:
ibrix_event -c -e ALERT,server.registered,filesystem.space.full -m admin@hp.com
Configuring email notification settings
To configure email notification settings, specify the SMTP server and header information, and turn the notification process on or off:
ibrix_event -m on|off -s SMTP -f from -r reply-to -t subject
The server must be able to receive and send email, and must recognize the From and Reply-to addresses. Be sure to specify valid email addresses, especially for the SMTP server. If an address is not valid, the SMTP server will reject the email.
The following command configures email settings to use the mail.hp.com SMTP server and turns on notifications:
ibrix_event -m on -s mail.hp.com -f FM@hp.com -r MIS@hp.com -t Cluster1 Notification
NOTE: The state of the email notification process has no effect on the display of cluster events in the GUI.
Dissociating events and email addresses
To remove the association between events and email addresses, use the following command:
ibrix_event -d [-e ALERT|WARN|INFO|EVENTLIST] -m EMAILLIST
For example, to dissociate event notifications for admin@hp.com:
ibrix_event -d -m admin@hp.com
To turn off all Alert notifications for admin@hp.com:
ibrix_event -d -e ALERT -m admin@hp.com
To turn off t
fields:
Field | Description
Host | Server on which the HBA is installed.
Node WWN | This HBA's WWNN.
Port WWN | This HBA's WWPN.
Port State | Operational state of the port.
Backup Port WWN | WWPN of the standby port for this port (standby-paired HBAs only).
Monitoring | Whether HBA monitoring is enabled for this port.
Checking the High Availability configuration
Use the ibrix_haconfig command to determine whether High Availability features have been configured for specific file serving nodes. The command checks for the following features and provides either a summary or a detailed report of the results:
• Programmable power source
• Standby server or standby segments
• Cluster and user network interface monitors
• Standby network interface for each user network interface
• HBA port monitoring
• Status of automated failover (on or off)
For each High Availability feature, the summary report returns status for each tested file serving node and, optionally, for their standbys:
• Passed: The feature has been configured.
• Warning: The feature has not been configured, but the significance of the finding is not clear. For example, the absence of discovered HBAs can indicate either that the HBA monitoring feature was not configured or that HBAs are not physically present on the tested servers.
• Failed: The feature has not been configured.
The detailed report includes an overall result status for all tested file serving nodes
file serving nodes owning segments in the file system, with at least one thread running on each node. For a system with multiple controllers, the utility will run a thread for each controller if possible.
• Files up to 3.8 TB in size can be upgraded. To enable snapshots on larger files, they must be migrated after the upgrade is complete (see "Migrating large files" (page 185)).
• In general, the upgrade takes approximately three hours per TB of data. The configuration of the system can affect this number.
Running the utility
Typically, the utility is run as follows to upgrade a file system:
upgrade60.sh <file system>
For example, the following command performs a full upgrade on file system fs1:
upgrade60.sh fs1
Progress and status reports
The utility writes log files to the directory /usr/local/ibrix/log/upgrade60 on each node containing segments from the file system being upgraded. Each node contains the log files for its segments. Log files are named <host>_<segment>_<date>_upgrade.log. For example, the following log file is for segment ilv2 on host ib4-2:
ib4-2_ilv2_2012-03-27_11:01_upgrade.log
Restarting the utility
If the upgrade is stopped or the system shuts down, you can restart the utility and it will continue the operation. To stop an upgrade, press Ctrl-C on the command line or send an interrupt signal to the process. There should be no adverse effects to the file system; however, certain blocks that we
for the controller to boot.
• If the seven-segment display shows "on", the fault has been corrected, the system has returned to normal, and you can proceed to step 11.
• If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
8. If the fault does not clear, remove the right I/O module and replace it with the new I/O module. Reseat the appropriate X9700c controller as described below in the section "Re-seating an X9700c controller" (page 157).
9. If the seven-segment display now shows "on", run the exds_stdiag command and validate that both controllers are seen by exds_stdiag.
10. If the fault has not cleared at this stage, there could be a double fault, that is, a failure of two modules. Alternatively, one of the SAS cables could be faulty. Contact HP Support to help identify the fault or faults. Run the exds_escalate command to generate an escalation report for use by HP Support, as follows:
# exds_escalate
11. At this stage, an X9700cx I/O module has been replaced. Change the firmware of the I/O modules to the version included in the 9720 Storage:
a. Identify the serial number of the array using the exds_stdiag command.
b. Run the X9700cx I/O module firmware update command:
# /opt/hp/mxso/firmware/exds9100cx.scexe -s
The command will pause to gather the system configuration, which can take several minutes on a large system. It then displays
gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field.
Server Setup > Server Networking Configuration: Hostname, IP Address, Netmask, Default Gateway (Optional), VLAN Tag ID (Optional)
The Configuration Summary lists your configuration. Select Ok to continue the installation.
6. This step applies only to 9730 systems. If you are restoring a blade on an IBRIX 9720 system, go to step 8.
The 9730 blade being restored needs OA/VC information from the chassis. It can obtain this information directly from blade 1, or you can enter the OA/VC credentials manually. The wizard now checks and verifies the following:
• OA and VC firmware
• VC configuration
• hpspAdmin user accounts on the iLOs
• Chassis configuration
• The firmware on the SAS switches (and notifies you if an update is needed)
• The SAS configuration
• The storage firmware (and notifies you if an update is needed)
• Storage configuration
• Networking on the blade
On the Join a Cluster Step 2 dialog box, enter the requested information.
NOTE: On the dialog box, Register IP is the Fusion Manager (management console) IP, not the IP you are registering for this blade.
Join a Cluster Step 2: DNS IP(s), DNS Domain, NTP server, and Register IP: Register IP, Primary DNS Server, Secondary DNS Server, Tertiary DNS Server
gathered. For example, if you start the tool at 9:40 a.m. and ask for a report from 9:00 a.m. to 9:30 a.m., the report cannot be generated because data was not gathered for that period.
• Reports are generated on an hourly basis. It may take up to an hour before a report is generated and made available for viewing.
NOTE: If the system is currently generating reports and you request a new report at the same time, the GUI issues an error. Wait a few moments, and then request the report again.
Deleting reports
To delete a report, log into each node and remove the report from the /local/statstool/histstats/reports directory.
Maintaining the Statistics tool
Space requirements
The Statistics tool requires about 4 MB per hour for a two-node cluster. To manage space, take the following steps:
• Maintain sufficient space (4 GB to 8 GB) for data collection in the /usr/local/statstool/histstats directory.
• Monitor the space in the /local/statstool/histstats/reports directory.
For the default values, see "Changing the Statistics tool configuration" (page 96).
Updating the Statistics tool configuration
When you first configure the Statistics tool, the configuration includes information for all file systems configured on the cluster. If you add a new node or a new file system, or make other additions to the cluster, you must update the Statistics tool configuration. Complete the following steps
authorized technicians may repair the device.
French laser notice
WARNING! This device may be equipped with a laser classified as a Class 1 Laser Product, in conformity with U.S. FDA regulations and with IEC 60825-1. This product does not emit hazardous radiation. The use of controls, adjustments, or procedures other than those specified here or in the laser product's installation manual may expose the user to hazardous radiation. To reduce the risk of exposure to hazardous radiation:
• Do not try to open the enclosure housing the laser device; it contains no user-serviceable parts.
• Do not perform any control, adjustment, or procedure other than those described in this chapter.
• Only HP Authorized Service technicians are qualified to repair the laser device.
German laser notice
CAUTION: This device may contain a laser certified as a Class 1 Laser Product according to U.S. FDA regulations and IEC 60825-1. No harmful laser radiation is emitted. The instructions in this document must be followed. Adjustments or procedures beyond those given in this document or in the laser device's installation guide may
generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected.
• Consult the dealer or an experienced radio or television technician for help.
Modification
The FCC requires the user to be notified that any changes or modifications made to this device that are not expressly approved by Hewlett-Packard Company may void the user's authority to operate the equipment.
Cables
When provided, connections to this device must be made with shielded cables with metallic RFI/EMI connector hoods in order to maintain compliance with FCC Rules and Regulations.
Canadian notice (Avis Canadien)
Class A equipment
This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations. Cet
hardware events.
• The 9720 CX storage device is not supported for HP Insight Remote Support.
Configuring the IBRIX cluster for Insight Remote Support
To enable 9720/9730 systems for remote support, you need to configure the Virtual SAS Manager, Virtual Connect Manager, and Phone Home settings. All nodes in the cluster should be up when you perform this step.
NOTE: Configuring Phone Home removes any previous IBRIX SNMP configuration details and populates the SNMP configuration with Phone Home configuration details. When Phone Home is enabled, you cannot use ibrix_snmpagent to edit or change the SNMP agent configuration. However, you can use ibrix_snmptrap to add trapsink IPs, and you can use ibrix_event to associate events to the trapsink IPs.
Registering Onboard Administrator
The Onboard Administrator is registered automatically.
Configuring the Virtual SAS Manager
On 9730 systems, the SNMP service is disabled by default on the SAS switches. To enable the SNMP service manually and provide the trapsink IP on all SAS switches, complete these steps:
1. Open the Virtual SAS Manager from the OA. Select <OA IP> > Interconnect Bays > SAS Switch > Management Console.
2. On the Virtual SAS Manager, open the Maintain tab, click SAS Blade Switch, and select SNMP Settings. On the dialog box, enable the SNMP service and supply the information needed for alerts.
[Virtual SAS Manager showing the SNMP Settings dialog box]
is powered on .......... 160
IBRIX RPC call to host failed .......... 160
Degraded server blade .......... 160
LUN status is failed .......... 160
Apparent failure of .......... 161
X9700c enclosure front panel fault ID LED is amber .......... 162
Replacement disk drive LED is not illuminated .......... 162
X9700cx GSI LED .......... 162
X9700cx drive LEDs are amber after firmware is flashed .......... 162
Configuring the Virtual Connect domain .......... 162
Synchronizing information on file serving nodes and the configuration database .......... 163
Troubleshooting an Express Query Manual Intervention Failure .......... 163
16 Recovering the 9720/9730 Storage .......... 166
Obtaining the latest IBRIX software release .......... 166
Preparing for the recovery .......... 166
Recovering a 9720/9730 file serving node .......... 167
Completing the restore .......... 173
Troubleshooting .......... 175
iLO remote console does not respond to keystrokes .......... 175
17 Support and other resources .......... 176
Contacting HP .......... 176
Related information .......... 176
HP websites .......... 177
Rack stability .......... 177
Product .......... 177
Con
is disabled. This section provides the prerequisites and steps for enabling crash capture.
NOTE: Enabling crash capture adds a delay (up to 240 seconds) to the failover, to allow the crash kernel to load. The failover process ensures that the crash kernel is loaded before continuing.
When crash capture is enabled, the system takes the following actions when a node fails:
1. The Fusion Manager triggers a core dump on the failed node when failover starts, changing the state of the node to Up, InFailover.
2. The failed node boots into the crash kernel. The state of the node changes to Dumping, InFailover.
3. The failed node continues with the failover, changing state to Dumping, FailedOver.
4. After the core dump is created, the failed node reboots, and its state changes to Up, FailedOver.
Configuring failover
IMPORTANT: Complete the steps in "Prerequisites for setting up the crash capture" (page 53) before setting up crash capture.
Prerequisites for setting up the crash capture
The following parameters must be configured in the ROM-Based Setup Utility (RBSU) before a crash can be captured automatically on a file serving node in a failed condition:
1. Start RBSU: reboot the server, and then press the F9 key.
2. Highlight the System Options option in the main menu, and then press the Enter key.
3. Highlight the Virtual Serial Port option, and then press the Enter key.
4. Select the COM1 port, and then press the Enter key
management console (Fusion Manager) for your cluster.
NOTE: If a management console is not located, the following screen appears. Select Enter FM IP and go to step 5.
3. The Verify Hostname dialog box displays a hostname generated by the management console. Enter the correct hostname for this server.
4. The Verify Configuration dialog box shows the configuration for this node. Because you changed the hostname in the previous step, the IP address is incorrect on the summary. Select Reject, and the following screen appears. Select Enter FM IP.
Join the Cluster Failed: You may have exhausted the selected FM's template, in which case this can be corrected and the same FM retried. Select FM Again | Enter FM IP
5. On the System Date and Time dialog box, enter the system date (day, month, year) and time (24-hour format). Tab to the Time Zone field and press Enter to display a list of time zones. Select your time zone from the list.
Server Setup > System Date and Time: System Date, System Time, Time Zone
On the Server Networking Configuration dialog box, configure this server for bond0, the cluster network. Note the following:
• The hostname can include alphanumeric characters and the hyphen (-) special character. Do not use an underscore (_) in the hostname.
• The IP address is the address of the server on bond0.
• The default
new OS fails, power cycle the node. Try rebooting. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the GRUB boot menu.
• After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.
• If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information. To retry the copy of configuration, use the command appropriate for your server:
For a file serving node:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -s -o
For an agile node (a file serving node hosting the agile management console):
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
• If the install of the new image succeeds but the configuration restore fails and you need to revert the server to the previous install, execute boot_info -r and then reboot the machine. This step causes the server to boot from the old version (the alternate partition).
• If the public network interface is down and inaccessible for any node, power cycle that node.
Manual upgrade
Check the following:
• If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.
• If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed
of the IBRIX node, as returned by the hostname command. If these two lines do not exist, or they do not contain all of the information, open the /etc/hosts file with a text editor such as vi and modify the file so it contains the two lines matching the format provided in this step. For example, if the hostname command returns ss01, then the lines should appear as follows:
127.0.0.1 ss01 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
After the upgrade, the Fusion Manager on each server in the IBRIX cluster must be restarted manually.
1. Restart all passive Fusion Managers:
a. Determine whether the Fusion Manager is in passive mode by entering the following command:
ibrix_fm -i
b. If the command returns passive (regardless of whether failover is disabled or not), enter the following command to restart the Fusion Manager:
service ibrix_fusionmanager restart
c. Repeat steps a and b for each Fusion Manager.
2. Restart the active Fusion Manager by issuing the following commands on the active FM server:
a. Enter the following command to set all instances of Fusion Manager to nofmfailover mode:
ibrix_fm -m nofmfailover -A
b. To restart the Fusion Manager, enter the following command:
service ibrix_fusionmanager restart
c. Enter the following command to set all instances of Fusion Manager to passive mode:
ibrix_fm -m passive -A
Automated offline upgrades for
operational states, see "Monitoring the status of file serving nodes" (page 86). Both automated and manual failovers trigger an event that is reported on the GUI.
Automated failover can be configured with the HA Wizard or from the command line (a brief command-line sketch follows this section).
Configuring automated failover with the HA Wizard
The HA Wizard configures a backup server pair and, optionally, standby NICs on each server in the pair. It also configures a power source, such as an iLO, on each server. The Fusion Manager uses the power source to power down the server during a failover.
On the GUI, select Servers from the Navigator.
[Servers panel listing servers ib69s1 and ib69s2 with their status, state, CPU, network (MB/s) and disk (MB/s) activity, backup server, and HA setting]
Click High Availability to start the wizard. Typically, backup servers are configured and server HA is enabled when your system is installed, and the Server HA Pair dialog box shows the backup pair configuration for the server selected on the Servers panel. If necessary, you can configure the backup pair for the server. The wizard identifies the servers in the cluster that see the same storage as the selected server. Choose the appropriate server from the list. The wizard also attempts to locate th
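The command-line equivalent is sketched below. It reuses the ibrix_server options that appear in the upgrade procedures elsewhere in this guide; treat it as a sketch rather than the authoritative syntax:
ibrix_server -m        (enable automated failover on all file serving nodes)
ibrix_server -m -U     (disable automated failover again)
ibrix_server -l        (verify the setting; the HA column should display on or off)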
option toggles agent trap transmission on and off. The default is on. For example, to create a v2 trapsink with a new community name, enter:
ibrix_snmptrap -c -h lab13-116 -v 2 -m private
For a v3 trapsink, additional options define security settings. USERNAME is a v3 user defined on the trapsink host and is required. The security level associated with the trap message depends on which passwords are specified: the authentication password, both the authentication and privacy passwords, or no passwords. The CONTEXT_NAME is required if the trap receiver has defined subsets of managed objects. The format is:
ibrix_snmptrap -c -h HOSTNAME -v 3 [-p PORT] -n USERNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD] [-x CONTEXT_NAME] [-s {on|off}]
The following command creates a v3 trapsink with a named user and specifies the passwords to be applied to the default algorithms. If specified, passwords must contain at least eight characters:
ibrix_snmptrap -c -h lab13-114 -v 3 -n trapsender -k auth-passwd -z priv-passwd
Associating events and trapsinks
Associating events with trapsinks is similar to associating events with email recipients, except that you specify the host name or IP address of the trapsink instead of an email address. Use the ibrix_event command to associate SNMP events with trapsinks. The format is:
ibrix_event -c -y SNMP [-e ALERT|INFO|EVENTLIST]
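For example, to send all Alert events to the v2 trapsink created above (a sketch: lab13-116 is the sample trapsink host from this section, and the trailing list of trapsink hosts follows the same -m convention used for email recipients):
ibrix_event -c -y SNMP -e ALERT -m lab13-116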
passive or nofmfailover mode. A Fusion Manager in nofmfailover mode can be moved only to passive mode.
With the exception of the local node running the active Fusion Manager, the -A option moves all instances of the Fusion Manager to the specified mode. The -h option moves the Fusion Manager instances in <FMLIST> to the specified mode.
Agile Fusion Manager and failover
Using an agile Fusion Manager configuration provides high availability for Fusion Manager services. If the active Fusion Manager fails, the cluster virtual interface will go down. When the passive Fusion Manager detects that the cluster virtual interface is down, it becomes the active console. This Fusion Manager rebuilds the cluster virtual interface, starts Fusion Manager services locally, transitions into active mode, and takes over Fusion Manager operation.
Failover of the active Fusion Manager affects the following features:
• User networks: The virtual interface used by clients will also fail over. Users may notice a brief reconnect while the newly active Fusion Manager takes over management of the virtual interface.
• GUI: You must reconnect to the Fusion Manager VIF after the failover.
Failing over the Fusion Manager manually
To fail over the active Fusion Manager manually, place the console into nofmfailover mode. Enter the following command on the node hosting the console:
ibrix_fm -m nofmfailover
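As a quick check, you can then confirm where the active role landed by running the status command on each Fusion Manager node (the host name in the sample output is illustrative):
ibrix_fm -i
FusionServer: x109s3 (active, quorum is running)
A node whose Fusion Manager is still in passive or nofmfailover mode reports that mode instead.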
server.
1. On the node hosting the active management console, place the management console into maintenance mode:
<ibrixhome>/bin/ibrix_fm -m nofmfailover
This step fails back the active management console role to the node currently hosting the passive agile management console (the node that originally was active).
2. Wait approximately 90 seconds for the failover to complete, and then run the following command on the node that was the target for the failover:
<ibrixhome>/bin/ibrix_fm -i
The command should report that the agile management console is now Active on this node.
3. On the node with the active agile management console, move the <installer>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
4. On the node with the active agile management console, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
5. Change to the installer directory, if necessary, and run the upgrade:
./ibrixupgrade -f
The installer upgrades both the m
servers. To view information about the servers and chassis included in your system:
1. Select Servers from the Navigator tree. The Servers panel lists the servers included in each chassis.
2. Select the server you want to obtain more information about. Information about the servers in the chassis is displayed in the right pane.
[Servers panel showing, for each server: status, server name, bay, chassis, type, state, CPU, network (MB/s) and disk (MB/s) activity, backup server, and HA setting]
To view summary information for the selected server, select the Summary node in the lower Navigator tree.
[Summary panel listing the selected server's name, state, group, standby, auto failover setting, module ID, uptime, last update, admin IP, file system and IAD versions, protocol, OS, kernel version, architecture, processor, CPU and load statistics, memory, buffer, cache, and swap totals, network and disk MB/s, and the number of admin and server threads]
services are currently running.
• One or more tasks are running.
• No tasks are running.
Statistics
Historical performance graphs for the following items:
• Network I/O (MB/s)
• Disk I/O (MB/s)
• CPU usage
• Memory usage
On each graph, the X axis represents time and the Y axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y axis, and to show or hide resource usage distribution for CPU and memory.
Recent Events
The most recent cluster events. Use the Recent Events menu to select the type of events to display.
You can also access certain menu items directly from the Cluster Overview. Mouse over the Capacity, Filesystems, or Segment Server indicators to see the available options.
Navigator
The Navigator appears on the left side of the window and displays the cluster hierarchy. You can use the Navigator to drill down in the cluster configuration to add, view, or change cluster objects such as file systems or storage, and to initiate or view tasks such as snapshots or replication. When you select an object, a details page shows a summary for that object. The lower Navigator allows you to view details for the selected object or to initiate a task. In the following example, we selected Filesystems in the upper Navigator and Mountpoints in the lower Navigator to see details about the mounts for file system ifs1.
serving nodes manage the individual segments of the file system. Each segment is assigned to a file serving node, and each node can own several segments. Segment ownership can be migrated from one node to another while the file system is actively in use. The Fusion Manager must be running for this migration to occur.
The following diagram shows a front view of a performance block (c-Class Blade enclosure) with half-height device bays numbered 1 through 16.
Front view of a c-Class Blade enclosure
[Diagram of the c-Class Blade enclosure with the following callouts]
1. Interconnect bay 1 (Virtual Connect Flex-10 10Gb Ethernet Module)
2. Interconnect bay 2 (Virtual Connect Flex-10 10Gb Ethernet Module)
3. Interconnect bay 3 (SAS switch)
4. Interconnect bay 4 (SAS switch)
5. Interconnect bay 5 (reserved for future use)
6. Interconnect bay 6 (reserved for future use)
7. Interconnect bay 7 (reserved for future use)
8. Interconnect bay 8 (reserved for future use)
9. Onboard Administrator 1
10. Onboard Administrator 2
Networks
The server blades in the IBRIX 9720 Storage have two built-in Flex-10 10Gb NICs. The Flex-10
surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. The enclosed area contains no operator-serviceable parts.
WARNING! To reduce the risk of injury from electrical shock hazards, do not open this enclosure.
Any RJ-45 receptacle marked with these symbols indicates a network interface connection.
WARNING! To reduce the risk of electrical shock, fire, or damage to the equipment, do not plug telephone or telecommunications connectors into this receptacle.
Any surface or area of the equipment marked with these symbols indicates the presence of a hot surface or hot component. Contact with this surface could result in injury.
WARNING! Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.
WARNING! Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.
Weight warning
WARNING! The device can be very heavy. To reduce the risk of personal injury or damage to equipment:
• Remove all hot-pluggable power supplies and modules to reduce the overall weight of the device before lifting.
• Observe local health and safety requirements and guidelines for manual material handling.
• Get help to lift and stabilize the device during installation or removal, especially when the device is not fastened to the rails. When a device we
recycling notice .......... 245
Romanian recycling notice .......... 246
Slovak recycling notice .......... 246
Spanish recycling notice .......... 246
Swedish recycling notice .......... 246
Battery replacement notices .......... 247
Dutch battery notice .......... 247
French battery notice .......... 247
German battery notice .......... 248
Italian battery notice .......... 248
Japanese battery notice .......... 249
Spanish battery notice .......... 249
Glossary .......... 250
Index .......... 252
1 Product description
The HP 9720 and 9730 Storage are scalable network attached storage (NAS) products. The system combines HP IBRIX software with HP server and storage hardware to create a cluster of file serving nodes.
System features
The 9720 and 9730 Storage provide the following features:
• Segmented, scalable file system under a single namespace
• SMB (Server Message Block), FTP, and HTTP support for accessing file system data
• Centralized CLI and GUI for cluster management
• Policy management
• Continuous remote replication
• Dual redundant paths to all storage components
• ... of throughput
the GUI has not been rebranded to SMB yet; CIFS is just a different name for SMB.
• The Power panel displays the following information: Host, Address, and Slot ID.
• The Events panel displays the following information: Level, Time, and Event.
• The Hardware panel displays the following information: the name of the hardware component, and the information gathered in regards to that hardware component. See "Monitoring hardware components" (page 71) for detailed information about the Hardware panel.
Monitoring hardware components
The front of the chassis includes server bays, and the rear of the chassis includes components such as fans, power supplies, Onboard Administrator modules, and interconnect modules (VC modules and SAS switches). The following Onboard Administrator view shows a chassis enclosure on an IBRIX 9730 system.
[Onboard Administrator front view of the chassis]
To monitor these components from the GUI:
1. Click Servers in the upper Navigator tree.
2. Click Hardware in the lower Navigator tree for information about the chassis that contains the server selected on the Servers panel, as shown in the following image.
[Hardware panel showing the chassis name (ch_09USE133EA0N), chassis type (9730), and serial number (09USE133EA0N) for the selected server]
the ONBACKUP field, indicating that the primary server now owns the segments even though it does not. In this situation, you will be unable to complete the failback after you fix the storage subsystem problem. Perform the following manual recovery procedure:
1. Restore the failed storage subsystem.
2. Reboot the primary server, which will allow the arrested failback to complete.
9000 client I/O errors following segment migration
Following successful segment migration to a different file serving node, the Fusion Manager sends all 9000 clients an updated map reflecting the changes, which enables the clients to continue I/O operations. If, however, the network connection between a client and the Fusion Manager is not active, the client cannot receive the updated map, resulting in client I/O errors. To fix the problem, restore the network connection between the clients and the Fusion Manager.
Windows 9000 clients
Logged in, but getting a "Permission Denied" message
The 9000 client cannot access the Active Directory server because the domain name was not specified. Reconfigure the Active Directory settings, specifying the domain name (see the HP IBRIX 9000 Storage Installation Guide for more information).
Verify button in the Active Directory Settings tab does not work
This issue has the same cause as the above issue.
Mounted drive does not appear in Windows Explorer
To make a drive appear in Explorer after mounting it, log off and then
the alternate partition:
/usr/local/ibrix/setup/boot_info -r
• If the public network interface is down and inaccessible for any node, power cycle that node.
NOTE: Each node stores its ibrixupgrade.log file in /tmp.
Manual upgrade
Check the following:
• If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.
• If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information. To retry the copy of configuration, use the following command:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
Offline upgrade fails because iLO firmware is out of date
If the iLO firmware is out of date on a node, the auto_ibrixupgrade script will fail. The /usr/local/ibrix/setup/logs/auto_ibrixupgrade.log file reports the failure and describes how to update the firmware.
After updating the firmware, run the following command on the node to complete the IBRIX software upgrade:
/local/ibrix/ibrixupgrade -f
Node is not registered with the cluster network
Nodes hosting the agile Fusion Manager must be registered with the cluster network. If the ibrix_fm command reports that the IP address for a node is on the user network, you will need to reassign the IP address to the cluster network. For example, the following commands report that node ib51-101, which
the expansion rack.
C IBRIX 9730 spare parts list
The following tables list spare parts (both customer replaceable and non-customer replaceable) for the IBRIX 9730 Storage components. The spare parts information is current as of the publication date of this document. For the latest spare parts information, go to http://partsurfer.hp.com.
HP IBRIX 9730 Performance Chassis (QZ729A)
Description | Spare part number
SPS-PWR MOD SINGLE PHASE | 413494-001
SPS-MODULE LCD | 415839-001
SPS-CA KIT MISC | 416002-001
SPS-CA SUV | 416003-001
SPS-HARDWARE KIT | 432463-001
SPS-PLASTICS HARDWARE KIT | 441835-001
SPS-SFP 1Gb VC RJ-45 | 453578-001
SPS-P/S 2450W 12V HTPLG | 500242-001
SPS-BD LCD PASS THRU | 519348-001
SPS-UPS R/T3KVA 2U HV INTL G2 | 638842-001
SPS-MODULE ENET VC FLEX-10 | 688896-001
HP IBRIX 9730 140 TB ML Storage 2xBL Performance Module (QZ730A)
Description | Spare part number
SPS-CA EXT MINI SAS 2M | 408767-001
SPS-FAN SYSTEM | 413996-001
SPS-PLASTICS HARDWARE | 414063-001
SPS-PWR SUPPLY | 441830-001
SPS-DRV HD 146G SAS 2.5 SP 10K | 453138-001
SPS-BD MEM MOD 256MB 40B | 462974-001
SPS-PROC WSM 2.4 80W E5620 | 594887-001
SPS-DIMM 4GB 10600R 512MX4 | 595424-001
SPS-BD PCA HP FBWC 1G CL5 | 598414-001
SPS-BD SYSTEM I/O G7 | 605659-001
SPS-BD SMART ARRAY IDP1 8/8 MEZZ | 615360-001
SPS-PLASTICS HARDWARE MISC | 619821-001
SPS-COVER TOP | 619822-001
SPS-BACKPLANE HDD
the file system and its segments. The -v option produces detailed information about configuration checks that received a Passed result.
For example, to view a detailed report for file serving node xs01.hp.com:
ibrix_haconfig -i -h xs01.hp.com
Overall HA Configuration Checker Results: FAILED
Overall Host Results:
Host: xs01.hp.com | HA Configuration: FAILED | Power Sources: PASSED | Backup Servers: PASSED | Auto Failover: PASSED | Nics Monitored: FAILED | Standby Nics: PASSED | HBAs Monitored: FAILED
Server xs01.hp.com FAILED Report:
Check Description | Result | Result Information
Power source(s) configured | PASSED |
Backup server or backups for segments configured | PASSED |
Automatic server failover configured | PASSED |
Cluster & User Nics monitored:
Cluster nic xs01.hp.com/eth1 monitored | FAILED | Not monitored
User nics configured with a standby nic | PASSED |
HBA ports monitored:
Hba port 21.01.00.e0.8b.2a.0d.6d monitored | FAILED | Not monitored
Hba port 21.00.00.e0.8b.0a.0d.6d monitored | FAILED | Not monitored
Capturing a core dump from a failed node
The crash capture feature collects a core dump from a failed node when the Fusion Manager initiates failover of the node. You can use the core dump to analyze the root cause of the node failure. When enabled, crash capture is supported for both automated and manual failover. Failback is not affected by this feature. By default, crash capture
the remaining charge and the charge status. In the following image, the Battery panel shows the information about a battery that has 100 percent of its charge remaining.
[Battery panel: Status: OK | Type: battery | UUID: 5001438010B9F0A0_Battery1 | Properties: estimatedChargeRemaining: 100; chargeStatus: FULLY_CHARGED]
Monitoring the IO Cache Modules for a storage controller
The IO Cache Module panel displays the following information about the IO cache module for a storage controller:
• Status
• UUID
• Properties: Provides information about the read, write, and cache size properties.
In the following image, the IO Cache Module panel shows an IO cache module with read/write properties enabled.
[IO Cache Module panel: Status: OK | Type: iOCacheModule | UUID: 500143801099F0A9_IOCacheMod | Properties: readCacheEnable: true; writeCacheEnable: true; cacheSize: 1 GB]
Monitoring storage switches in a storage cluster
The Storage Switch panel provides the following information about the storage switches:
• Status
• Name
• UUID
• Serial Number
• Model
• Firmware Version
• Location
To view information about a storage enclosed processor (SEP) for a storage switch, expand the Storage Switch node and select the SEP node. The following information is provided for the SEP:
• Status
• UUID
• Model
• Firmware Version
In the following image, the S
the utility file system from mounting. The syntax is:
# exds_stdiag [--raw=<filename>]
The --raw=<filename> option saves the raw data gathered by the tool into the specified file, in a format suitable for offline analysis (for example, by HP Support personnel).
Following is a typical example of the output:
[root@kudos1 ~]# exds_stdiag
ExDS storage diagnostic rev 7336
Storage visible to kudos1 Wed 14 Oct 2009 14:15:33 +0000
node 7930RFCC BL460c.G6 fw I24.20090620 cpus 2 arch Intel
hba 5001438004DEF5D0 P410i in 7930RFCC fw 2.00 boxes 1 disks 2 luns 1 batteries 0 cache ...
hba PAPWVOF9SXAOOS P700m in 7930RFCC fw 5.74 boxes 0 disks 0 luns 0 batteries 0 cache ...
switch HP 3G SAS BL SWH in ... fw 2.72
switch HP 3G SAS BL SWH in ... fw 2.72
switch HP 3G SAS BL SWH in 4B fw 2.72
switch HP 3G SAS BL SWH in ... fw 2.72
ctlr P89A40A9SV600X ExDS9100cc in 01/USP7030EKR slot 1 fw 0126.2008120502 boxes 3 disks 80 luns 10 batteries 2 OK cache OK
box 1 ExDS9100c sn USP7030EKR fw 1.56 temp OK fans OK,OK,OK,OK power OK,OK
box 2 ExDS9100cx sn CN881502JE fw 1.28 temp OK fans OK,OK power OK,OK,OK,OK
box 3 ExDS9100cx sn CN881502JE fw 1.28 temp OK fans OK,OK power OK,OK,OK,OK
ctlr P89A40A9SUSOLC ExDS9100cc in 01/USP7030EKR slot 2 fw 0126.2008120502 boxes 3 disks 80 luns 10 batteries 2 OK cache OK
box 1 ExDS9100c sn USP7030EKR fw 1.56 temp OK fans OK,OK,OK,OK power OK,OK
box 2
to become active:
[root@x109s3 ibrix]# ibrix_fm -i
FusionServer: x109s3 (active, quorum is running)
Command succeeded
Install the file serving node software on the node:
./ibrixinit -ts -C <cluster_device> -i <cluster_VIP> -F
Verify that the new file serving node has joined the cluster:
ibrix_server -l
Look for the new file serving node in the output.
Rediscover storage on the file serving node:
ibrix_pv -a
Set up the file serving node to match the other nodes in the cluster. For example, configure any user NICs, user and cluster NIC monitors, NIC failover pairs, power sources, backup servers, preferred NICs for IBRIX clients, and so on.
12 Upgrading the IBRIX software to the 6.2 release
This chapter describes how to upgrade to the 6.2 IBRIX software release.
IMPORTANT: Print the following table and check off each step as you complete it.
Table 4 Prerequisites checklist for all upgrades
Step 1 | Verify that the entire cluster is currently running IBRIX 6.0 or later by entering the following command:
ibrix_version -l
IMPORTANT: All the IBRIX nodes must be at the same release.
• If you are running a version of IBRIX earlier than 6.0, upgrade the product as described in "Cascading Upgrades" (page 179).
• If you are running 6.0 or later, proceed with the upgrade steps in t
upgrade.
This upgrade procedure is appropriate for major upgrades. The management console must be upgraded first. You can then upgrade file serving nodes in any order.
Preparing for the upgrade
1. From the management console, disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
2. From the management console, verify that automated failover is off. In the output, the HA column should display off:
<ibrixhome>/bin/ibrix_server -l
3. Stop the NFS and SMB services on all file serving nodes to prevent NFS and SMB clients from timing out:
<ibrixhome>/bin/ibrix_server -s -t cifs -c stop
<ibrixhome>/bin/ibrix_server -s -t nfs -c stop
Verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill -9 to kill any likewise services that are still running.
4. From the management console, unmount all IBRIX file systems:
<ibrixhome>/bin/ibrix_umount -f <fsname>
Upgrading the management console
Complete the following steps on the management console:
1. Force a backup of the configuration:
<ibrixhome>/bin/ibrix_fm -B
The output is stored at /usr/local/ibrix/tmp/fmbackup.zip. Be sure to save this file in a location outside of the cluster.
2. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during
used in Windows environments for shared folders.
CLI: Command-line interface. An interface comprised of various commands which are used to control operating system responses.
CSR: Customer self repair.
DAS: Direct attach storage. A dedicated storage device that connects directly to one or more servers.
DNS: Domain name system.
FTP: File Transfer Protocol.
GSI: Global service indicator.
HA: High availability.
HBA: Host bus adapter.
HCA: Host channel adapter.
HDD: Hard disk drive.
IAD: HP 9000 Software Administrative Daemon.
iLO: Integrated Lights-Out.
IML: Initial microcode load.
IOPS: I/Os per second.
IPMI: Intelligent Platform Management Interface.
JBOD: Just a bunch of disks.
KVM: Keyboard, video, and mouse.
LUN: Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN.
MTU: Maximum Transmission Unit.
NAS: Network attached storage.
NFS: Network file system. The protocol used in most UNIX environments to share folders or mounts.
NIC: Network interface card. A device that handles communication between a device and other devices on a network.
NTP: Network Time Protocol. A protocol that enables the storage system's time and date to be obtained from a network-attached server, keeping multiple hosts and storage devices synchronized.
OA: Onboard Administrator.
OFED: OpenFabrics Enterprise Distribution.
values.
[Single Summary Graph for the whole cluster: Home > Cluster Activity Report (index: cluster). In this report: Disk (KB/s), Disk Wait (ms/req), Net (KB/s); Read/Write Summary Graph; Stacked Summary Graph; Servers Single Summary Graph]
Generating reports
To generate a new report, click Request New Report on the IBRIX Management Console Historical Reports GUI.
[X9000 Management Console Historical Reports GUI: Reports, Table View, Simple View, Tools, Request New Report. The Report Generation form offers a Whole Cluster Report with hourly Report Granularity, and instructs: specify the start and end times for the report to be generated, in the format YYYY-MM-DD HH:MM, where the letters stand for year, month, day, hour, and minute; hours and minutes may be left off. Sample values: Start Date/Time 2012-10-22 10:00, End Date/Time 2012-10-22 11:00. After clicking Submit, it may take a few moments to assemble the new reports. To generate up-to-the-minute reports, you must have the collector allow value set in your stats.conf configuration file.]
To generate a report, enter the necessary specifications and click Submit. The completed report appears in the list of reports on the statistics home page.
When generating reports, be aware of the following:
• A report can be generated only from statistics that have been
...you send this information to assist in resolving the system crash.
NOTE: HP recommends that you maintain your crash dumps in the /var/crash directory. Ibrix Collect processes the core dumps present in the /var/crash directory (linked to the local platform crash only). HP also recommends that you monitor this directory and remove unnecessary processed crashes.

Deleting the archive file
You can delete a specific data collection, or all collections simultaneously, in the GUI and the CLI.
To delete a specific data collection using the GUI, select the collection to be deleted and click Delete. The .zip file and the .tgz file stored locally will be deleted from each node. To delete all of the collections, click Delete All.
To delete a specific data collection using the CLI, use the following command:
ibrix_collect -d -n NAME
To specify more than one collection to be deleted at a time from the CLI, provide the names separated by a semicolon. To delete all data collections manually from the CLI, use the following command:
ibrix_collect -F

Downloading the archive file
When data is collected, a compressed archive file is created and stored as a zipped archive file (.zip) under the /local/ibrixcollect/archive directory. To download the collected data to your desktop, select the collection and click Download from the Fusion Manager.
NOTE: Only one collection can be downloaded at a time.
NOTE: The average size of the archive
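For example, deleting two collections at once and then clearing everything from the CLI might look like the following; the collection names here are hypothetical.

   ibrix_collect -d -n "node1_20120501;node1_20120502"   # delete two named collections at once
   ibrix_collect -F                                      # delete all collections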
SPS BD PCA FBWC 1G CL5: 598414-001
SPS BD SYSTEM I/O G7: 605659-001

HP IBRIX 9730 210 TB ML Storage 2xBL Performance Module (QZ731A), continued
Description: Spare part number
SPS BD SMART ARRAY CTRL IDP1 8/8 MEZZ: 615360-001
SPS PLASTICS/HARDWARE MISC: 619821-001
SPS COVER TOP: 619822-001
SPS BACKPLANE HDD SAS: 619823-001
SPS CAGE HDD W/BEZEL: 619824-001
SPS ENCLOS BLADE 3000C NO DRIVE: 621742-001
SPS HEATSINK VC: 624787-001
SPS DRV HD 3TB 6G SAS 7.2K 3.5 DP MDL SC: 653959-001

HP X9730 210TB 6G ML Storage 2xBL Performance Module (QZ733A)
Description: Spare part number
SPS CA EXT MINI SAS 2M: 408767-001
SPS FAN SYSTEM: 413996-001
SPS PLASTICS HARDWARE: 414063-001
SPS PWR SUPPLY: 441830-001
SPS DRV HD 146G SAS 2.5 SP 10K: 453138-001
SPS BD MEM MOD 256MB 40B: 462974-001
SPS PROC WSM 2.4 80W E5620: 594887-001
SPS DIMM 4GB PC3-10600R 512MX4: 595424-001
SPS BD PCA HP FBWC 1G CL5: 598414-001
SPS BD SYSTEM G7: 605659-001
SPS BD SMART ARRAY IDP1 8/8 MEZZ: 615360-001
SPS PLASTICS/HARDWARE MISC: 619821-001
SPS COVER TOP: 619822-001
SPS BACKPLANE HDD SAS: 619823-001
SPS CAGE HDD W/BEZEL: 619824-001
SPS ENCLOS BLADE 3000 NO DRIVE: 621742-001
SPS HEATSINK VC: 624787-001
SPS DRV HD 3TB 6G SAS 7.2K 3.5 DP MDL SC: 653959-001

D. The IBRIX 9720 component and cabling diagrams
Base and expansion cabinets
[Screenshot: GUI Filesystems view, showing file system status, capacity, and mountpoints, with the Navigator providing access to NFS Exports, CIFS Shares, HTTP Shares, FTP Shares, and Remote Replication Exports]

NOTE: When you perform an operation on the GUI, a spinning finger is displayed until the operation is complete. However, if you use Windows Remote Desktop to access the GUI, the spinning finger is not displayed.

Customizing the GUI
For most tables in the GUI, you can specify the columns that you want to display and the sort order of each column. When this feature is available, mousing over a column causes the label to change color and a pointer to appear. Click the pointer to see the available options. In the following example, you can sort the contents of the Mountpoint column in ascending or descending order, and you can select the columns that you want to appear in the display.

[Screenshot: Mountpoints table with the column menu open, showing Sort Ascending and Sort Descending options]
Command succeeded
The original cluster address is now configured to the newly created cluster VIF device, bond0:1.
If you created the interface bond1:0 in step 3, now set up the user network, specifying the user VIF IP address and VIF device used in step 3.
NOTE: This step does not apply to SMB/NFS clients. If you are not using IBRIX clients, you can skip this step.
Set up the user network VIF:
ibrix_fm -c <user_VIF_IP> -d <user_VIF_device> -n <user_VIF_netmask> -v user
For example:
[root@x109s1 ~]# ibrix_fm -c 10.30.83.1 -d bond1:0 -n 255.255.0.0 -v user
Command succeeded
Register the agile Fusion Manager (also known as the agile FM) to the cluster:
ibrix_fm -R <FM_hostname> -I <local_cluster_ipaddr>
NOTE: Verify that the local agile Fusion Manager name is in the /etc/ibrix/fminstance.xml file. Run the following command:
grep -i current /etc/ibrix/fminstance.xml
<property name="currentFmName" value="ib50-86"></property>
From the agile Fusion Manager, verify that the definition was set up correctly:
grep -i vif /etc/ibrix/fusion.xml
The output should be similar to the following:
<property name="fusionManagerVifCheckInterval" value="60"></property>
<property name="vifDevice" value="bond0:0"></property>
<property name="vifNetMask" value="255.255.254.0"></property>
NOTE: If the output is empty, restart the fusionmanager services as in step 9.
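To re-check the registration described above in one pass, the two verification greps can be run together. This is a minimal sketch using the configuration files named in the procedure; the restart line assumes the init script shown elsewhere in this guide.

   grep -i current /etc/ibrix/fminstance.xml   # expect the currentFmName property
   grep -i vif /etc/ibrix/fusion.xml           # expect vifDevice and vifNetMask properties
   # If either grep returns nothing, restart the Fusion Manager and check again:
   # /etc/init.d/ibrix_fusionmanager restart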
[Screenshot: server Summary panel, showing the operating system (GNU/Linux 2.6.18-194.el5 x86_64), IBRIX version, and CPU, memory, and network statistics]

Select the server component that you want to view from the lower Navigator panel, such as NICs.

[Screenshot: Navigator with Summary, HBAs, NICs, Mountpoints, NFS, CIFS, Power, Events, and Hardware nodes beneath the server, and Blade Enclosure and Server nodes beneath Hardware]

The following are the top-level options provided for the server.
NOTE: Information about the Hardware node can be found in "Monitoring hardware components" (page 71).
- HBAs. The HBAs panel displays the following information: Node WWN, Port WWN, Backup, Monitoring, State.
- NICs. The NICs panel shows all NICs on the server, including offline NICs. The NICs panel displays the following information: State, Route, Standby Server, Standby Interface.
- Mountpoints. The Mountpoints panel displays the following information: Mountpoint, Filesystem, Access.
- NFS. The NFS panel displays the following information: Host, Path, Options.
- CIFS. The CIFS panel displays the following information:
To delete a network interface, use the following command:
ibrix_nic -d -n IFNAME -h HOSTLIST
The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com:
ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com

Viewing network interface information
Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes. Include the -h option to list interfaces on specific hosts:
ibrix_nic -l -h HOSTLIST
The following table describes the fields in the output:
BACKUP HOST: File serving node for the standby network interface
BACKUP IF: Standby network interface
HOST: File serving node
IFNAME: Network interface on this file serving node
IP_ADDRESS: IP address of this NIC
LINKMON: Whether monitoring is on for this NIC
MAC ADDR: MAC address of this NIC
ROUTE: IP address in the routing table used by this NIC
STATE: Network interface state
TYPE: Network type (cluster or user)
When ibrix_nic is used with the -i option, it reports detailed information about the interfaces. Use the -h option to limit the output to specific hosts. Use the -n option to view information for a specific interface:
ibrix_nic -i -h HOSTLIST -n NAME

11. Migrating to an agile Fusion Manager configuration
The agile Fusion Manager configuration provides one active Fusion Manager and one passive Fusion Manager.
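As an illustration of the commands above, the following sequence lists a node's interfaces, inspects one, and then deletes it; the host and interface names are examples only.

   ibrix_nic -l -h s1.hp.com              # list all interfaces on one node
   ibrix_nic -i -h s1.hp.com -n eth3      # detailed report for a single NIC
   ibrix_nic -d -n eth3 -h s1.hp.com      # delete the interface once it is unused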
host groups, 65
  add 9000 client, 66
  add domain rule, 66
  create host group tree, 66
  delete, 66
  view, 66
hostgroups, prefer a user network interface, 115
HP technical support, 176
HP Insight Remote Support, 24
  configure, 25
  Phone Home, 26
  troubleshooting, 33
hpacucli command, 150
hpasmcli(4) command, 90
hpspAdmin user account, 22
Ibrix Collect, 143
  configure, 145
  troubleshooting, 146
IBRIX software
  shut down, 99
  start, 101
  upgrade, 122, 179
IBRIX software 5.5, upgrade, 193
IBRIX software 5.6, upgrade, 189
ibrix_reten_adm -u command, 186
IML, clear or view, 90; hpasmcli(4) command, 90
Integrated Management Log (IML), clear or view, 90; hpasmcli(4) command, 90
IP address, change for 9000 client, 115
L
labels, symbols on equipment, 234
laser compliance notices, 240
link state monitoring, 38
Linux 9000 clients, upgrade, 131, 184
loading rack, warning, 234
localization, 15
log files, 90
  collect for HP Support, 143
logging in, 15
LUN layout, 9720, 153
M
management console, migrate to agile configuration, 118
manpages, 22
monitoring
  blade enclosures, 72
  chassis and components, 71
  cluster events, 87
  cluster health, 88
  file serving nodes, 86
  node statistics, 90
  servers, 68, 75
  storage and components, 78
N
NDMP backups, 61
  cancel sessions, 63
  configure NDMP parameters, 62
  rescan for new devices, 63
  start or stop NDMP Server, 63
  view events, 64
  view sessions, 62
  view tape and media changer devices, 63
...9 to kill any likewise services that are still running. Unmount all IBRIX file systems:
<ibrixhome>/bin/ibrix_umount -f <fsname>

Upgrading the file serving nodes hosting the management console
Complete the following steps:
1. On the node hosting the active management console, force a backup of the management console configuration:
   <ibrixhome>/bin/ibrix_fm -B
   The output is stored at /usr/local/ibrix/tmp/fmbackup.zip. Be sure to save this file in a location outside of the cluster.
2. On the node hosting the passive management console, place the management console into maintenance mode:
   <ibrixhome>/bin/ibrix_fm -m nofmfailover
3. On the active management console node, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
4. On the active management console node, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
5. Change to the installer directory if necessary and run the upgrade:
   ./ibrixupgrade -f
   The installer upgrades both the management console software and the file serving node software on this node.
6. Verify the status of the management console:
   /etc/init.d/ibrix_fusionmanager status
...9 to stop any NFS processes that are still running. If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
lsof </mountpoint>
Use kill -9 to stop any processes that still have open file handles on the file systems. Unmount each file system manually:
ibrix_umount -f FSNAME
Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See "File system unmount issues" (page 134).

Performing the upgrade manually
This upgrade method is supported only for upgrades from IBRIX software 6.x to the 6.2 release. Complete the following steps:
1. To obtain the latest HP IBRIX 6.2.1 pkg-full ISO image, register to download the software on the HP StoreAll Download Drivers and Software web page.
2. Mount the ISO image on each node and copy the entire directory structure to the /local/ibrix directory on the disk running the OS. The following is an example of the mount command:
   mount -o loop /local/pkg/ibrix-pkgfull-FS-6.2.374-1.5-6.2.374-x86_64.iso /mnt
3. Change directory to /local/ibrix on the disk running the OS, and then run chmod -R 777 * on the entire directory structure.
4. Run the following upgrade script:
   ./ibrixupgrade -f
   The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes.
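Putting the manual steps together for a single node, a minimal sketch might look like the following; the ISO file name follows the example above and may differ for your release.

   # Hedged sketch of the per-node manual upgrade; run on each node in turn.
   mount -o loop /local/pkg/ibrix-pkgfull-FS-6.2.374-1.5-6.2.374-x86_64.iso /mnt
   cp -a /mnt/* /local/ibrix/       # copy the installer tree to the local disk
   cd /local/ibrix
   chmod -R 777 *                   # per the documented step
   ./ibrixupgrade -f                # stops services, upgrades, and restarts them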
Devices are discovered as follows:
Device: Fusion Manager IP
  Discovered as: System Type: HP 9000; Product Model: HP 9000 Solution
Device: File serving nodes
  Discovered as: System Type: Storage Device; System Subtype: 9000 Storage, HP ProLiant; Product Model: HP 9720 NetStor FSN (ProLiant BL460c G6) or HP 9730 NetStor FSN (ProLiant BL460c G7)

The following example shows discovered devices on HP SIM 7.1:

[Screenshot: HP SIM device list, showing file serving nodes (Storage Device), a management processor (Integrated Lights-Out), an MSA storage device, and the Fusion Manager (HP X9000 Solution), with a health status summary]

File serving nodes and the OA IP are associated with the Fusion Manager IP address. In HP SIM, select the Fusion Manager and open the Systems tab. Then select Associations to view the devices. You can view all IBRIX devices under Systems by Type > Storage System > Scalable Storage Solutions > All X9000 Systems.

Configuring Insight Remote Support for HP SIM 6.3 and IRS 5.6
Discovering devices in HP SIM
HP Systems Insight Manager
Complete the following steps on each client:
1. Remove the old Windows 9000 client software using the Add or Remove Programs utility in the Control Panel.
2. Copy the Windows 9000 client MSI file for the upgrade to the machine.
3. Launch the Windows Installer and follow the instructions to complete the upgrade.
4. Register the Windows 9000 client again with the cluster and check the option to Start Service after Registration.
5. Check Administrative Tools > Services to verify that the 9000 client service is started.
6. Launch the Windows 9000 client. On the Active Directory Settings tab, click Update to retrieve the current Active Directory settings.
7. Mount file systems using the IBRIX Windows client GUI.
NOTE: If you are using Remote Desktop to perform an upgrade, you must log out and log back in to see the drive mounted.

Upgrading pre-6.0 file systems for software snapshots
To support software snapshots, the inode format was changed in the IBRIX 6.0 release. The upgrade60.sh utility upgrades a file system created on a pre-6.0 release, enabling software snapshots to be taken on the file system. The utility can also determine the needed conversions without actually performing the upgrade. When using the utility, you should be aware of the following:
- The file system must be unmounted.
- Segments marked as BAD are not upgraded.
- The upgrade takes place in parallel across all segments.
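A minimal usage sketch for the utility described above. The exact flags are not shown in this excerpt, so the dry-run option used here is an assumption; check the utility's help output on your system before running it.

   ibrix_umount -f myfs         # the file system must be unmounted first
   # ./upgrade60.sh -i myfs     # hypothetical "inquire" flag: report needed conversions only
   # ./upgrade60.sh myfs        # perform the inode-format upgrade on the file system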
[Screenshot: LUNs panel, listing each LUN's ID, RAID group UUID, logical volume name, physical volume name, and physical volume UUID]

Monitoring the status of file serving nodes
The dashboard on the GUI displays information about the operational status of file serving nodes, including CPU, I/O, and network performance information. To view this information from the CLI, use the ibrix_server -l command, as shown in the following sample output:
ibrix_server -l
SERVER_NAME  STATE        CPU(%)  NET_IO(MB/s)  DISK_IO(MB/s)  BACKUP  HA
node1        Up,HBAsDown  0       0.00          0.00                   off
node2        Up,HBAsDown  0       0.00          0.00                   off
File serving nodes can be in one of three operational states: Normal, Alert, or Error.
Japanese VCCI marking
[VCCI compliance marking, in Japanese]
Japanese power cord statement
[Japanese text] Please use the attached power cord. The attached power cord is not allowed to be used with other products.
Korean notices
Class A equipment
[Korean text: this equipment is registered for business use; sellers and users should be aware of this.]
Class B equipment
[Korean text: this equipment is registered for household use and may be used in any area.]
Taiwanese notices
BSMI Class A notice
[BSMI Class A warning statement, in Chinese]
Taiwan battery recycle statement
[Battery recycling statement, in Chinese]
Turkish recycling notice
Türkiye Cumhuriyeti: EEE Yönetmeligine Uygundur
Vietnamese Information Technology and Communications compliance marking
[ICT compliance marking]
Laser compliance notices
English laser notice
This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation.
WARNING! Use of controls or adjustments, or performance of procedures other than those specified herein or in the laser product's installation guide, may result in hazardous radiation exposure. To reduce the risk of exposure to hazardous radiation:
- Do not try to open the module enclosure.
...SEP panel displays the SEP processors for a storage switch.

[Screenshot: SEP Status panel, showing Status, Type, UUID, Model (HP 6G SAS BL SWH), and Firmware Version (3.24) for each SEP]

Managing LUNs in a storage cluster
The LUNs panel provides information about the LUNs in a storage cluster. The following information is provided in the LUNs panel:
- Volume Name
- LUN ID
- RAID Group UUID
- Logical Volume Name
- Physical Volume Name
- Physical Volume UUID
In the following image, the LUNs panel displays the LUNs for a storage cluster.

[Screenshot: LUNs panel, listing LUN 1 through LUN 16 with their LUN IDs, RAID group UUIDs, and logical and physical volume names and UUIDs]
ExDS9100cx sn: CN881502JE fw: 1.28 temp: OK fans: OK OK power: OK OK OK OK
box 3: ExDS9100cx sn: CN881502JE fw: 1.28 temp: OK fans: OK OK power: OK OK OK OK
Analysis:
disk problems on USP7030EKR: box 3, drive 10, 15 missing or failed
ctlr firmware problems on USP7030EKR: 0126 (2008120502) min 0130 (2009092901) on ctlr P89A40A9SV600

exds_netdiag utility
The exds_netdiag utility performs tests on and retrieves data from the networking components in an IBRIX 9720 Storage system. It performs the following functions:
- Reports failed Ethernet interconnects (failed as reported by the HP Blade Chassis Onboard Administrator)
- Reports missing, failed, or degraded site uplinks
- Reports missing or failed NICs in server blades
Sample output:
Starting Networking Diagnostics
Gathering Data
Parsing Data
Analysing Data
Error: eth0 MAC address is incorrect; eth0 and eth1 may be swapped
Error: eth1 MAC address is incorrect; eth0 and eth1 may be swapped
eth0 Hardware OK; Device is slave to a Bonded device
eth1 Hardware OK; Device is slave to a Bonded device
eth2 Hardware OK; Device is slave to a Bonded device
eth3 Hardware OK; Device is slave to a Bonded device
Warning: eth4 not UP and RUNNING
Warning: eth4, only this server seen on physical network; possible hardware problem
Warning: eth5 not UP and RUNNING
Warning: eth5, only this server seen on physical network; possible hardware problem
bond0 Hardware OK; other systems seen on physical
[Screenshot: Windows 9000 client GUI, with Status, Registration, Mount, Umount, Tune Host, and Active Directory Settings tabs]

Preferring a network interface for a hostgroup
You can prefer an interface for multiple 9000 clients at one time by specifying a hostgroup. To prefer a user network interface for all 9000 clients, specify the clients hostgroup. After preferring a network interface for a hostgroup, you can locally override the preference on individual 9000 clients with the command ibrix_lwhost.
To prefer a network interface for a hostgroup, use the following command:
ibrix_hostgroup -n -g HOSTGROUP -A DESTHOST/IFNAME
The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all 9000 clients (the clients hostgroup) to file serving node s2.hp.com:
ibrix_hostgroup -n -g clients -A s2.hp.com/eth3

Unpreferring network interfaces
To return file serving nodes or 9000 clients to the cluster interface, unprefer their preferred network interface. The first command unprefers a network interface for a file serving node; the second command unprefers a network interface for a client:
ibrix_server -n -h SRCHOST -D DESTHOST
ibrix_client -n -h SRCHOST -D DESTHOST
To unprefer a network interface for a hostgroup, use the following command:
ibrix_client -n -g HOSTGROUP -A DESTHOST

Making network changes
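Putting the prefer and unprefer operations together, a short example might look like this; the host names are illustrative.

   ibrix_hostgroup -n -g clients -A s2.hp.com/eth3   # prefer eth3 for the clients hostgroup
   ibrix_client -n -h cl1.hp.com -D s2.hp.com        # later, unprefer it for one client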
HP IBRIX 9720/9730 Storage Administrator Guide
HP Part Number: AW549-96051
Published: December 2012, Edition 10

Abstract
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting. It does not document IBRIX file system features or standard Linux administrative tools and commands. For information about configuring and using IBRIX file system features, see the HP IBRIX 9000 Storage File System User Guide. This guide is intended for system administrators and technicians who are experienced with installing and administering networks and with performing Linux operating and administrative tasks. For the latest IBRIX guides, browse to http://www.hp.com/support/IBRIXManuals.

© Copyright 2009, 2012 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
[Japanese battery notice: Japanese text warning that the device may contain a battery; do not attempt to recharge removed batteries, avoid contact with water and temperatures above 60°C (140°F), do not disassemble, crush, or puncture batteries, and replace them only with the spare designated by HP. Dispose of batteries through public collection systems or return them to HP, an authorized HP distributor, or their agents.]

Spanish battery notice (translated)
Battery declaration
WARNING! This device may contain a battery. Do not attempt to recharge batteries if they are removed. Avoid contact between batteries and water, and do not expose them to temperatures above 60°C (140°F). Do not misuse, disassemble, crush, or puncture batteries. Do not short-circuit the external contacts, and do not dispose of batteries in fire or water. Replace batteries only with the spare designated by HP. Batteries, battery packs, and accumulators must not be disposed of together with general household waste. To send them to the appropriate recycling container, use the public collection systems or return them to HP, an authorized HP distributor, or their agents. For more information about battery replacement or proper disposal, consult your authorized distributor or service provider.

Glossary
ACE: Access control entry.
ACL: Access control list.
ADS: Active Directory Service.
ALB: Advanced load balancing.
BMC: Baseboard Management Configuration.
CIFS: Common Internet File System. The protocol
...MODULE OA DDR2: 503826-001 (Mandatory)
SPS BD MID-PLANE ASSY: 519345-001 (No)
SPS SLEEVE ONBRD ADM: 519346-001 (Mandatory)
SPS LCD MODULE WIDESCREEN ASSY: 519349-001 (No)

X9700 Blade Server (AW550A)
Description: Spare part number (Customer self repair)
SPS BD SA DDR2 BBWC 512MB: 451792-001 (Optional)
SPS BD BATTERY CHARGER MOD 4/V700HT: 462976-001 (Mandatory)
SPS BD MEM MOD 256MB 40B: 462974-001 (Mandatory)
SPS BD RAID CNTRL SAS: 484823-001 (Optional)
SPS DRV HD 300 SAS 10K 2.5 DP HP: 493083-001 (Mandatory)
SPS DIMM 4GB PC3-8500R 128MX8 ROHS: 501535-001 (Mandatory)
SPS HEATSINK BD: 508955-001 (Optional)
SPS MISC CABLE KIT: 511789-001 (Mandatory)
SPS PLASTICS HARDWARE MISC: 531223-001 (Mandatory)
SPS BACKPLANE HDD SAS: 531225-001 (Mandatory)
SPS CAGE HDD W/BEZEL: 531228-001 (Mandatory)

X9700 82TB Capacity Block (X9700c and X9700cx, AQ551A)
Note the following:
- The X9700c midplane is used for communication between controllers.
- There are 2x backplanes in the X9700c.
Description: Spare part number (Customer self repair)
SPS RAIL KIT: 383663-001 (Mandatory)
SPS BD DIMM DDR2 MOD 512MB X9700c: 398645-001 (Mandatory)
SPS BD MIDPLANE X9700c: 399051-001 (Optional)
SPS FAN MODULE X9700c: 399052-001 (No)
SPS BD USB UID X9700c: 399053-001 (Optional)
SPS BD POWER UID W/CABLE X9700c: 399054-001 (Optional)
SPS BD RISER X9700c: 399056-001 (Optional)
...DNS Server, Domain Name, Primary NTP Server, Secondary NTP Server.
The Network Configuration dialog box lists the interfaces configured on the system. If the information is correct, select Continue and go to the next step.

[Screenshot: Network Configuration dialog box, listing devices such as bond0 (cluster) and eth interfaces, with Configure and Remove options]

If the information specified for a bond is incorrect, select the bond and then select Configure to customize the interface. On the Select Interface Type dialog box, select Bonded Interface. On the Edit Bonded Interface dialog box, enter the IP address and netmask, specify any bond options (for example, mode=1 miimon=100 updelay=100), and change the slave devices as necessary for your configuration.

[Screenshot: Edit Bonded Interface dialog box, showing the IP address, netmask, bond options, and slave devices (eth0 through eth5) for bond0]

9. The Configuration Summary dialog box lists the configuration you specified. Select Commit to apply the configuration.
10. Because the hostname you specified was previously registered with the management console, the following message appears. Select Yes to replace the existing server.
11. The wizard now registers a passive Management Console (Fusion Manager) on the blade, and then configures and starts it. The wizard then runs additional setup scripts.
NOTE:
The Power Source panel shows the power source configured on the server when HA was configured. You can add or remove power sources on the server, and can power the server on or off, or reset the server.

[Screenshot: Power Source panel, showing the host name, type (iLO), IP address, and slot ID for the server's power source]

Configuring automated failover manually
To configure automated failover manually, complete these steps:
1. Configure file serving nodes in backup pairs.
2. Identify power sources for the servers in the backup pair.
3. Configure NIC monitoring.
4. Enable automated failover.

1. Configure server backup pairs
File serving nodes are configured in backup pairs, where each server in a pair is the backup for the other. This step is typically done when the cluster is installed. The following restrictions apply:
- The same file system must be mounted on both servers in the pair, and the servers must see the same storage.
- In a SAN environment, a server and its backup must use the same storage infrastructure to access a segment's physical volumes (for example, a multiported RAID array).
For a cluster using the unified network configuration, assign backup nodes for the bond0:1 interface. For example, node1 is the backup for node2, and node2 is the backup for node1.
1. Add the VIF:
ibrix_nic -a -n bond0:2 -h node1,node2,node3,node4
SPS BD 7-SEGMENT DISPLAY X9700c: 399057-001 (Optional)
SPS POWER SUPPLY X9700c: 405914-001 (Mandatory)
SPS CA EXT MINI SAS 2M: 408767-001 (Mandatory)
SPS CA EXT MINI SAS 4M: 408768-001 (Mandatory)
SPS FAN SYSTEM X9700cx: 413996-001 (Mandatory)
SPS RACKMOUNT KIT: 432461-001 (Optional)
SPS BATTERY MODULE X9700: 436941-001 (Mandatory)
SPS PWR SUPPLY X9700cx: 441830-001 (Mandatory)
SPS BD BACKPLANE II X9700c: 454574-001 (Optional)
SPS POWER BLOCK W/POWER B/P BDS X9700cx: 455974-001 (Optional)
SPS HDD B/P W/CABLES & DRAWER ASSY X9700cx: 455976-001 (No)
SPS BD LED PANEL W/CABLE X9700c: 455979-001 (Optional)
[Part description garbled in source; X9700, 3.5]: 461289-001 (Mandatory)
SPS BD CONTROLLER 9100C X9700c: 489833-001 (Optional)
SPS BD 2-PORT W/1.5 EXPAND X9700cx: 498472-001 (Mandatory)
SPS DRV HD 1TB 7.2K 6G DP SAS: 508011-001 (Mandatory)
SPS CHASSIS X9700c: 530929-001 (Optional)

X9700 164TB Capacity Block (X9700c and X9700cx, AW598B)
Note the following:
- The X9700c midplane is used for communication between controllers.
- There are 2x backplanes in the X9700c.
Description: Spare part number (Customer self repair)
SPS PLASTICS KIT: 314455-001 (Mandatory)
SPS RAIL KIT: 383663-001 (Mandatory)
SPS BD DIMM DDR2 MOD 512MB X9700c: 398645-001 (Mandatory)
SPS BD MIDPLANE X9700c: 399051-001 (Optional)
SPS FAN MODULE X9700c: 399052-001 (No)
SPS BD USB UID X9700c: 399053-001 (Optional)
SPS BACKPLANE HDD SAS: 619823-001
SPS CAGE HDD W/BEZEL: 619824-001
SPS ENCLOS TAPE BLADE 3000C NO DRIVE: 621742-001
SPS HEATSINK VC: 624787-001
SPS DRV HD 2TB 7.2K EVA FATA M6412 FC: 637981-001

HP IBRIX 9730 210 TB ML Storage 2xBL Performance Module (QZ731A)
Description: Spare part number
SPS CA EXT MINI SAS 2M: 408767-001
SPS FAN SYSTEM: 413996-001
SPS PLASTICS HARDWARE: 414063-001
SPS PWR SUPPLY: 441830-001
SPS DRV HD 146G SAS 2.5 SP 10K: 453138-001
SPS BD MEM MOD 256MB 40B: 462974-001
SPS PROC WSM 2.4 80W E5620: 594887-001
SPS DIMM 4GB 10600R 512MX4: 595424-001
SPS BD PCA FBWC 1G CL5: 598414-001
SPS BD SYSTEM I/O G7: 605659-001
SPS BD SMART ARRAY CTRL IDP1 8/8 MEZZ: 615360-001
SPS PLASTICS/HARDWARE MISC: 619821-001
SPS COVER TOP: 619822-001
SPS BACKPLANE HDD SAS: 619823-001
SPS CAGE HDD W/BEZEL: 619824-001
SPS ENCLOS BLADE 3000C NO DRIVE: 621742-001
SPS HEATSINK VC: 624787-001
SPS DRV HD 6G SAS 7.2K 3.5 DP MDL SC: 653959-001

HP X9730 140 TB 6G ML Storage 2xBL Performance Module (QZ732A)
Description: Spare part number
SPS CA EXT MINI SAS 2M: 408767-001
SPS FAN SYSTEM: 413996-001
SPS PLASTICS HARDWARE: 414063-001
SPS PWR SUPPLY: 441830-001
SPS BD MEM MOD 256MB 40B: 462974-001
SPS DRV HD 2TB 6G SAS 7.2K 3.5 LFF: 508010-001
SPS PROC WSM 2.4 80W E5620: 594887-001
SPS DIMM 4GB PC3-10600R 512MX4: 595424-001
SPS STABILIZER 600MM 10KG2: 385973-001 (Mandatory)
SPS SHOCK PALLET 600MM 10KG2: 385976-001 (Mandatory)
CBL C13: 419595-001 / 419595-001N (Mandatory)
SPS RACK BUS BAR & WIRE TRAY: 457015-001 (Optional)
SPS STICK AXC 13 ATTACHED CBL: 460430-001 (Mandatory)
SPS STICK 4X FIXED C-13 OFFSET WW: 483915-001 (Optional)
HP J9021A SWITCH 2810-24G: J9021-69001 (Mandatory)

X9700 Expansion Rack (AQ552A)
Description: Spare part number (Customer self repair)
SPS BRACKETS PDU: 252641-001 (Optional)
SPS PANEL SIDE 10642 10KG2: 385971-001 (Mandatory)
SPS STABILIZER 600MM 10KG2: 385973-001 (Mandatory)
SPS STICK ATTACH'D CBL C13 0.1FT: 419595-001 (Mandatory)
SPS RACK BUS BAR & WIRE TRAY: 457015-001 (Optional)
SPS STICK 4X FIXED C-13 OFFSET WW: 483915-001 (Optional)

X9700 Server Chassis (AW549A)
Description: Spare part number (Customer self repair)
SPS PWR MOD SINGLE PHASE: 413494-001 (Mandatory)
SPS FAN SYSTEM: 413996-001 (Mandatory)
SPS BLANK BLADE: 414051-001 (Mandatory)
SPS BLANK INTERCONNECT: 414053-001 (Mandatory)
SPS CA SUV: 416003-001 (Mandatory)
SPS RACKMOUNT KIT: 432461-001 (Optional)
SPS BD MUSKET SAS SWITCH: 451789-001 (Optional)
SPS SFP 1GB VC RJ-45: 453578-001 (Optional)
SPS MODULE ENET BLC VC FLEX-10: 456095-001 (Optional)
SPS P/S 2450W 12V HTPLG: 500242-001 (Mandatory)
SPS
Statistics tool is uninstalled when the IBRIX software is uninstalled. To uninstall the Statistics tool manually, use one of the following commands:
- Uninstall the Statistics tool, including the Statistics tool and dependency rpms:
  ./ibrixinit -tt -u
- Uninstall the Statistics tool, retaining the Statistics tool and dependency rpms:
  ./ibrixinit -tt -U

10. Maintaining the system

Shutting down the system
To shut down the system completely, first shut down the IBRIX software, and then power off the hardware.

Shutting down the IBRIX software
Use the following procedure to shut down the IBRIX software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager.
1. Stop any active Remote Replication, data tiering, or rebalancer tasks. Run the following command to list active tasks and note their task IDs:
   ibrix_task -l
   Run the following command to stop each active task, specifying its task ID:
   ibrix_task -k -n TASKID
2. Disable High Availability on all cluster nodes:
   ibrix_server -m -U
3. Move all passive Fusion Manager instances into nofmfailover mode:
   ibrix_fm -m nofmfailover -A
4. Stop the SMB, NFS, and NDMP services on all nodes. Run the following commands:
   ibrix_server -s -t cifs -c stop
   ibrix_server -s -t nfs -c stop
   ibrix_server -s -t ndmp -c stop
   If you are using SMB, verify that all likewise services are down on all file serving nodes:
   ps -ef | grep likewise
...Support with Ibrix Collect" (page 143).

File systems
Set up the following features as needed:
- SMB (Server Message Block), FTP, or HTTP. Configure the methods you will use to access file system data.
- Quotas. Configure user, group, and directory tree quotas as needed.
- Remote replication. Use this feature to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster.
- Data retention and validation. Use this feature to manage WORM and retained files.
- Antivirus support. This feature is used with supported Antivirus software, allowing you to scan files on an IBRIX file system.
- IBRIX software snapshots. This feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Users can access the file system or directory as it appeared at the instant of the snapshot.
- File allocation. Use this feature to specify the manner in which segments are selected for storing new files and directories.
- Data tiering. Use this feature to move files to specific tiers based on file attributes.
For more information about these file system features, see the HP IBRIX 9000 Storage File System User Guide.

Localization support
Red Hat Enterprise Linux 5 uses the UTF-8 (8-bit Unicode Transformation Format) encoding for supported locales.
- There are no user-serviceable components inside.
- Do not operate controls, make adjustments, or perform procedures to the laser device other than those specified herein.
- Allow only HP Authorized Service technicians to repair the unit.
The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the United States.

Dutch laser notice
[Dutch translation of the English laser notice above: the device may contain a laser classified as a Class 1 Laser Product per U.S. FDA regulations and IEC 60825-1, and does not emit hazardous laser radiation. Do not open the module enclosure; there are no user-serviceable parts inside. Do not operate controls, make adjustments, or perform procedures other than those described in this guide, and allow only HP Authorized Service technicians to repair the unit.]
[Screenshot: active NDMP sessions, showing each session's state (UP or DATA_RESTORE), start time, and client IP address, with Session History and Tape Devices entries in the Navigator]

To see similar information for completed sessions, select NDMP Backup > Session History.
View active sessions from the CLI:
ibrix_ndmpsession -l
View completed sessions:
ibrix_ndmpsession -l -s [-t YYYY-MM-DD]
The -t option restricts the history to sessions occurring on or before the specified date.
Cancel sessions on a specific file serving node:
ibrix_ndmpsession -c SESSION1,SESSION2,SESSION3 -h HOST

Starting, stopping, or restarting an NDMP Server
When a file serving node is booted, the NDMP Server is started automatically. If necessary, you can use the following command to start, stop, or restart the NDMP Server on one or more file serving nodes:
ibrix_server -s -t ndmp -c {start|stop|restart} [-h SERVERNAMES]

Viewing or rescanning tape and media changer devices
To view the tape and media changer devices currently configured for backups, select Cluster Configuration from the Navigator, and then select NDMP Backup > Tape Devices.

[Screenshot: Tape and Media Changer Devices panel, listing the hostname, device type (MediaChanger or TapeDrive), device ID, and device node (such as /dev/sg12) for each device]
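For example, reviewing and cleaning up NDMP sessions from the CLI might look like the following; the session ID and host name are taken from the screenshot above and are placeholders for your own values.

   ibrix_ndmpsession -l                          # list active sessions
   ibrix_ndmpsession -l -s -t 2012-05-27         # session history up to a date
   ibrix_ndmpsession -c 13299 -h imvm3           # cancel one session on a node
   ibrix_server -s -t ndmp -c restart -h imvm3   # restart the NDMP Server there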
Use kill -9 to stop any likewise services that are still running.
5. If you are using NFS, verify that all NFS processes are stopped:
   ps -ef | grep nfs
   If processes are running, use the following commands on the affected nodes:
   pdsh -a service nfslock stop | dshbak
   pdsh -a service nfs stop | dshbak
   If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
   lsof </mountpoint>
   Use kill -9 to stop any processes that still have open file handles on the file systems.
6. List the file systems mounted on the cluster:
   ibrix_fs -l
   Unmount all file systems from 9000 clients:
   - On Linux 9000 clients, run the following command to unmount each file system:
     ibrix_lwumount -f fs_name
   - On Windows 9000 clients, stop all applications accessing the file systems, and then use the client GUI to unmount the file systems (for example, I: DRIVE). Next, go to Services and stop the fusion service.
7. Unmount all file systems on the cluster nodes:
   ibrix_umount -f fs_name
   To unmount file systems from the GUI, select Filesystems > unmount.
8. Verify that all file systems are unmounted:
   ibrix_fs -l
   If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown.
9. Shut down all IBRIX Server services and verify the operation:
   pdsh -a /etc/init.d/ibrix_server stop | dshbak
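The shutdown sequence above, gathered into one place as a minimal sketch; it assumes a file system named fs1, that pdsh is configured for all cluster nodes, and it omits the client-side unmounts.

   ibrix_task -l                        # list active tasks; stop each with ibrix_task -k -n TASKID
   ibrix_server -m -U                   # disable High Availability
   ibrix_fm -m nofmfailover -A          # move passive Fusion Managers to nofmfailover mode
   for svc in cifs nfs ndmp; do
       ibrix_server -s -t $svc -c stop  # stop the file-sharing services
   done
   ibrix_umount -f fs1                  # unmount each file system
   ibrix_fs -l                          # confirm nothing is still mounted
   pdsh -a /etc/init.d/ibrix_server stop | dshbak   # stop IBRIX services on all nodes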
VLAN tagging
VLAN capabilities provide hardware support for running multiple logical networks over the same physical networking hardware. To allow multiple packets for different VLANs to traverse the same physical interface, each packet must have a field added that contains the VLAN tag. The tag is a small integer number that identifies the VLAN to which the packet belongs. When an intermediate switch receives a tagged packet, it can make the appropriate forwarding decisions based on the value of the tag.
When set up properly, IBRIX systems support VLAN tags being transferred all of the way to the file serving node network interfaces. The ability of the file serving nodes to handle VLAN tags natively in this manner makes it possible for the nodes to support multiple VLAN connections simultaneously over a single bonded interface.
Linux networking tools such as ifconfig display a network interface with an associated VLAN tag using a device label with the form bond<bond_id>.<VLAN_id>. For example, if the first bond created by IBRIX has a VLAN tag of 30, it will be labeled bond0.30. It is also possible to add a VIF on top of an interface that has an associated VLAN tag. In this case, the device label of the interface takes the form bond<bond_id>.<VLAN_id>:<VIF_label>. For example, if a VIF with a label of 2 is added for the bond0.30 interface, the new interface device label will be bond0.30:2.
The following commands show configuring
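As a sketch of what this looks like at the Linux level, the following uses the standard 802.1q tooling available on Red Hat Enterprise Linux 5; this is generic Linux VLAN setup, not an IBRIX-specific command, and the bond name and tag follow the bond0.30 example above.

   # Load the 802.1q module and create a VLAN-tagged interface on the bond.
   modprobe 8021q
   vconfig add bond0 30     # creates bond0.30, which carries VLAN 30
   ifconfig bond0.30 up
   ifconfig                 # bond0.30 (and any VIF such as bond0.30:2) appears here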
...cabinets
A minimum IBRIX 9720 Storage base cabinet has from 3 to 16 performance blocks (that is, server blades) and from 1 to 4 capacity blocks. An expansion cabinet can support up to four more capacity blocks, bringing the system to eight capacity blocks. The servers are configured as file serving nodes, with one of the servers hosting the active Fusion Manager. The Fusion Manager is responsible for managing the file serving nodes. The file serving nodes are responsible for managing segments of a file system.

Front view of a base cabinet
1. X9700c 1
2. TFT monitor and keyboard
3. c-Class Blade enclosure
4. X9700cx 1

Back view of a base cabinet with one capacity block
1. Management switch 2
2. Management switch 1
3. X9700c 1
4. TFT monitor and keyboard
5. c-Class Blade enclosure
6. X9700cx 1

Front view of a full base cabinet
1. X9700c 4
2. X9700c 3
3. X9700c 2
4. X9700c 1
5. X9700cx 4
6. X9700cx 3
7. TFT monitor and keyboard
8. c-Class Blade Enclosure
9. X9700cx 2
10. X9700cx 1

Back view of a full base cabinet
1. Management switch 2
2. Management switch 1
3. X9700c 4
4. X9700c 3
5. X9700c 2
6. X9700c 1
7. X9700cx 4
8. X9700cx 3
9. TFT monitor and keyboard
10. c-Class Blade Enclosure
11. X9700cx 2
12. X9700cx 1

Front view of an expansion cabinet
The optional X9700 expansion cabinet can contain from one to four capacity blocks. The following diagram shows a front view of an expansion cabinet with four capacity blocks.
1. X9700c 8
2. X9700c 7
3. X9700c 6
4. X9700c 5
5. X9700cx 8
6. X9700cx 7
7. X9700cx 6
8. X9700cx 5

Back view of an expansion cabinet with four capacity blocks
1. X9700c 8
2. X9700c 7
3. X9700c 6
4. X9700c 5
5. X9700cx 8
6. X9700cx 7
7. X9700cx 6
8. X9700cx 5

Performance blocks (c-Class Blade enclosure)
A performance block is a special server blade for the 9720. Server blades are numbered according to their bay number in the blade enclosure. Server 1 is in bay 1 in the blade enclosure, and so on. Server blades must be contiguous; empty blade bays are not allowed between server blades. Only IBRIX 9720 Storage server blades can be inserted in a blade enclosure.
The server blades are configured as file serving nodes. One node hosts the active Fusion Manager, and the other nodes host passive Fusion Managers:
- The active Fusion Manager is responsible for managing the cluster configuration, including file serving nodes and IBRIX clients. The Fusion Manager is not involved in file system I/O operations.
- File serving nodes are responsible for managing segments of the file system.
...access the GUI, see "Adding user accounts for GUI access" (page 20).

[Screenshot: X9000 Management Console login window]

Upon login, the GUI dashboard opens, allowing you to monitor the entire cluster. See the online help for information about all GUI displays and operations. There are three parts to the dashboard: System Status, Cluster Overview, and the Navigator.

[Screenshot: GUI dashboard, showing capacity and performance statistics, event status, CPU and memory usage charts, server and storage health, and recent events, with the Navigator providing access to Filesystems, Snapshots, Servers, File Shares (NFS, CIFS, HTTP, FTP), Storage, Vendor Storage, Hostgroups, Events, and cluster services such as Data Tiering, Rebalancer, and Remote Replication]
...diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI, or automatically during a system crash. Ibrix Collect gathers the following information:
- Specific operating system and IBRIX command results and logs
- Crash digester results
- Summary of collected logs, including error, exception, and failure messages
- Collection of information from LHN and MSA storage connected to the cluster
NOTE: When the cluster is upgraded from an IBRIX software version earlier than 6.0, the support tickets collected using the ibrix_supportticket command will be deleted. Before performing the upgrade, download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.

Collecting logs
To collect logs and command results using the GUI:
1. Select Cluster Configuration, and then select Data Collection.
2. Click Collect.

[Screenshot: Data Collection panel, listing each collection's name, description, initiator, state, date, and size]
...Serial number, Model, Firmware version, Location, Properties.

Obtaining server details
The Management Console provides detailed information for each server in the chassis. To obtain summary information for a server, select the Server node under the Hardware node. The following overview information is provided for each server:
- Status
- Type
- Name
- UUID
- Serial number
- Model
- Firmware version
- Message
- Diagnostic Message (this column dynamically appears, depending on the situation)
Obtain detailed information for hardware components in the server by clicking the nodes under the Server node.

[Screenshot: Hardware tree in the Navigator, with CPU, iLO Module, Memory DIMM, NIC, Power Management Controller, Storage Cluster, Drive, Storage Controller, I/O Cache Module, Volume, Battery, and Temperature Sensor nodes under the Server node]

Table 2: Obtaining detailed information about a server
Panel name: Information provided
CPU: Status, Type, Name, UUID, Model, Location
iLO Module: Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Properties
Memory DIMM: Status, Type, Name, UUID, Location, Properties
NIC: Status, Type, Name, UUID, Properties
Power Management Controller: Status, Type, Name, UUID, Firmware Version
Storage Cluster: Status, Type, Name, UUID
...name.
3. Uninstall the IBRIX software from the node:
   ./ibrixinit -u
This command removes both the file serving node and Fusion Manager software. The node is no longer in the cluster.

Maintaining networks
Cluster and user network interfaces
IBRIX software supports the following logical network interfaces:
- Cluster network interface. This network interface carries Fusion Manager traffic, traffic between file serving nodes, and traffic between file serving nodes and clients. A cluster can have only one cluster interface. For backup purposes, each file serving node can have two cluster NICs.
- User network interface. This network interface carries traffic between file serving nodes and clients. Multiple user network interfaces are permitted.
The cluster network interface was created for you when your cluster was installed. A virtual interface is used for the cluster network interface. One or more user network interfaces may also have been created, depending on your site's requirements. You can add user network interfaces as necessary, as sketched in the example following this section.

Adding user network interfaces
Although the cluster network can carry traffic between file serving nodes and either NFS, SMB, HTTP, FTP, or 9000 clients, you may want to create user network interfaces to carry this traffic. If your cluster must accommodate a mix of NFS, SMB, FTP, or HTTP clients and 9000 clients, or if you need to segregate client traffic to different networks, you will need one or more user networks.
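A short sketch of adding a user network VIF with the ibrix_nic options used elsewhere in this guide; the interface and host names are examples only.

   ibrix_nic -a -n bond1:1 -h node1,node2   # create the user VIF on both nodes
   ibrix_nic -l                             # confirm the new interface is listed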
...management console software and the file serving node software on this node.
7. Verify the status of the management console:
   /etc/init.d/ibrix_fusionmanager status
   The status command confirms whether the correct services are running. Output will be similar to the following:
   Fusion Manager Daemon (pid 18748) running...
   Also run the following command, which should report that the console is Active:
   <ibrixhome>/bin/ibrix_fm -i
   Check /usr/local/ibrix/log/fusionserver.log for errors.
8. If the upgrade was successful, fail back the file serving node. Run the following command on the node with the active agile management console:
   <ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME
9. From the node on which you failed back the active management console in step 8, change the status of the management console from maintenance to passive:
   <ibrixhome>/bin/ibrix_fm -m passive
10. If the node with the passive management console is also a file serving node, manually fail over the node from the active management console:
    <ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME
    Wait a few minutes for the node to reboot, and then run the following command to verify that the failover was successful. The output should report Up, FailedOver:
    <ibrixhome>/bin/ibrix_server -l
11. On the node with the passive agile management console, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous installation, the installer is in /root/ibrix.
...Manager, as described in the HP IBRIX 9000 Storage Installation Guide.
- Mount. Mounts a file system. Select the Cluster Name from the list (the cluster name is the Fusion Manager name), enter the name of the file system to mount, select a drive, and then click Mount. (If you are using Remote Desktop to access the client and the drive letter does not appear, log out and log in again.)
- Umount. Unmounts a file system.
- Tune Host. Tunable parameters include the NIC to prefer (the client uses the cluster interface by default, unless a different network interface is preferred for it), the communications protocol (UDP or TCP), and the number of server threads to use.
- Active Directory Settings. Displays current Active Directory settings.
For more information, see the client GUI online help.

IBRIX software manpages
IBRIX software provides manpages for most of its commands. To view the manpages, set the MANPATH variable to include the path to the manpages, and then export it. The manpages are in the $IBRIXHOME/man directory. For example, if $IBRIXHOME is /usr/local/ibrix (the default), set the MANPATH variable as follows and then export the variable:
MANPATH=$MANPATH:/usr/local/ibrix/man

Changing passwords
IMPORTANT: The hpspAdmin user account is added during the IBRIX software installation and is used internally. Do not remove this account or change its password.
You can change the following passwords:
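For example, to make the manpage setting persistent for a login shell, you can append it to a profile file; this is a standard bash approach, and the choice of profile file is yours.

   # Append to ~/.bash_profile so new shells pick up the manpage path
   export MANPATH=$MANPATH:/usr/local/ibrix/man
   man ibrix_server        # then view a command's manpage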
...and set the SELINUX parameter to either permissive or disabled. SELinux will be stopped at the next boot.
For 9000 clients, the client might not be registered with the Fusion Manager. For information on registering clients, see the HP IBRIX 9000 Storage Installation Guide.

Failover
Cannot fail back from failover caused by storage subsystem failure
When a storage subsystem fails and automated failover is turned on, the Fusion Manager initiates its failover protocol. It updates the configuration database to record that segment ownership has transferred from primary servers to their standbys, and then attempts to migrate the segments to the standbys. However, the segments cannot migrate, because neither the primary servers nor the standbys can access the storage subsystem, and the failover is stopped. Perform the following manual recovery procedure:
1. Restore the failed storage subsystem (for example, replace failed Fibre Channel switches, or replace a LUN that was removed from the storage array).
2. Reboot the standby servers, which will allow the failover to complete.
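The SELinux change mentioned above is made in the standard Red Hat configuration file; a minimal sketch follows (the sed edit is one common way to do it).

   # Set SELinux to disabled (or permissive) so it stops at the next boot.
   sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
   grep ^SELINUX= /etc/selinux/config     # verify the change before rebooting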
...and the tar file.
3. Run the upgrade script:
   ./ibrixupgrade -f
   The upgrade software automatically stops the necessary services and restarts them when the upgrade is complete.
4. Execute the following command to verify the client is running IBRIX software:
   /etc/init.d/ibrix_client status
   IBRIX Filesystem Drivers loaded
   IBRIX IAD Server (pid 3208) running...
The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support.

Installing a minor kernel update on Linux clients
The 9000 client software is upgraded automatically when you install a compatible Linux minor kernel update. If you are planning to install a minor kernel update, first run the following command to verify that the update is compatible with the 9000 client software:
/usr/local/ibrix/bin/verify_client_update <kernel_update_version>
The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp:
# /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp
Kernel update 2.6.9-89.35.1.ELsmp is compatible
If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The 9000 client software is then automatically updated with the new kernel, and 9000 client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client.
NOTE: To use the verify_client_update command, the 9000 client software must be installed.

Upgrading Windows 9000 clients
...appareil numérique de la classe A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Class B equipment
This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations. Cet appareil numérique de la classe B respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.

European Union notice
This product complies with the following EU directives:
- Low Voltage Directive 2006/95/EC
- EMC Directive 2004/108/EC
Compliance with these directives implies conformity to applicable harmonized European standards (European Norms), which are listed on the EU Declaration of Conformity issued by Hewlett-Packard for this product or product family. This compliance is indicated by the following conformity marking placed on the product: [CE marking]. This marking is valid for non-Telecom products and EU harmonized Telecom products (e.g., Bluetooth). Certificates can be obtained from http://www.hp.com/go/certificates.
Hewlett-Packard GmbH, HQ-TRE, Herrenberger Strasse 140, 71034 Boeblingen, Germany

Japanese notices
Japanese VCCI-A notice
[VCCI Class A statement, in Japanese: in a domestic environment this product may cause radio interference, in which case the user may be required to take corrective actions.]
Japanese VCCI-B notice
[VCCI Class B statement, in Japanese]
...appears on the NIC HA Setup dialog box.

[Screenshot: NIC HA Setup dialog box. High availability for a physical or virtual NIC, typically servicing file share data, works by assigning a standby NIC on the backup server in a server pair. When server HA is enabled, a monitored NIC causes auto-failover if the NIC becomes unavailable. The dialog lists the active user NICs for each server (for example, bond0:1, 10.30.69.151 on ib69s1 with standby server ib69s2), with Add NIC, Remove, and NIC HA buttons.]

Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. On the NIC HA Config dialog box, check Enable NIC Monitoring.

[Screenshot: NIC HA Config dialog box, showing Server ib69s1, User NIC bond0:1 (10.30.69.151), Standby Server ib69s2, and a Standby NIC selection]

In the Standby NIC field, select New Standby to create the standby on backup server ib69s2. The standby you specify must be available and valid. To keep the organization simple, we specified bond0:1 as the Name; this matches the name assigned to the NIC on server ib69s1. When you click OK, the NIC HA configuration is complete.
...that received a Passed result, as well as details about the file system and segments.

Viewing a summary health report
To view a summary health report, use the ibrix_health -l command:
ibrix_health -l [-h HOSTLIST] [-f] [-b]
By default, the command reports on all hosts. To view specific hosts, include the -h HOSTLIST argument. To view results only for hosts that failed the check, include the -f argument. To include standby servers in the health check, include the -b argument.
The following is an example of the output from the ibrix_health -l command:
[root@bv18-03 ~]# ibrix_health -l
Overall Health Checker Results: PASSED
Host Summary Results
Host     Result  Type    State  Network     Last Update
bv18-03  PASSED  Server  Up     10.10.18.3  Thu Oct 25 14:23:12 MDT 2012
bv18-04  PASSED  Server  Up     10.10.18.4  Thu Oct 25 14:23:22 MDT 2012

Viewing a detailed health report
To view a detailed health report, use the ibrix_health -i command:
ibrix_health -i -h HOSTLIST [-f] [-s] [-v]
The -f option displays results only for hosts that failed the check. The -s option includes information about the file system and its segments. The -v option includes details about checks that received a Passed or Warning result.
The following example shows a detailed health report for file serving node bv18-04:
[root@bv18-04 ~]# ibrix_health -i -h bv18-04
Overall Health Checker Results: PASSED
Host     Result  Type    State  Network     Last Update
bv18-04  PASSED  Server
autocfg/bin/ibrixapp upgrade -f -s

Offline upgrade fails because iLO firmware is out of date
If the iLO firmware is out of date on a node, the auto_ibrixupgrade script will fail. The /usr/local/ibrix/setup/logs/auto_ibrixupgrade.log file reports the failure and describes how to update the firmware. After updating the firmware, run the following command on the node to complete the IBRIX software upgrade:
[root@ibrix ibrix]# ./ibrixupgrade -f

Node is not registered with the cluster network
Nodes hosting the agile Fusion Manager must be registered with the cluster network. If the ibrix_fm command reports that the IP address for a node is on the user network, you will need to reassign the IP address to the cluster network. For example, the following commands report that node ib51-101, which is hosting the active Fusion Manager, has an IP address on the user network (15.226.51.101) instead of the cluster network:

[root@ib51-101 ibrix]# ibrix_fm -i
FusionServer: ib51-101 (active, quorum is running)
[root@ib51-101 ibrix]# ibrix_fm -f
NAME      IP ADDRESS
ib51-101  15.226.51.101
ib51-102  10.10.51.102

If the node is hosting the active Fusion Manager, as in this example, stop the Fusion Manager on that node:
[root@ib51-101 ibrix]# /etc/init.d/ibrix_fusionmanager stop
Stopping Fusion Manager Daemon [ OK ]
[root@ib51-101 ibrix]#
On the node now hosting the active Fusion Manager (ib51-102 in the example), unregister node ib51-101:
[root
bond0:2 -h node1,node2,node3,node4
2. Set up a standby server for each VIF:
ibrix_nic -b -H node1/bond0:1,node2/bond0:2
ibrix_nic -b -H node2/bond0:1,node1/bond0:2
ibrix_nic -b -H node3/bond0:1,node4/bond0:2
ibrix_nic -b -H node4/bond0:1,node3/bond0:2

2. Identify power sources.
To implement automated failover, perform a forced manual failover, or remotely power a file serving node up or down, you must set up programmable power sources for the nodes and their backups. Using programmable power sources prevents a split-brain scenario between a failing file serving node and its backup, allowing the failing server to be centrally powered down by the Fusion Manager in the case of automated failover, and manually in the case of a forced manual failover.
IBRIX software works with iLO, IPMI, and integrated power sources. The following configuration steps are required when setting up integrated power sources:
• For automated failover, ensure that the Fusion Manager has LAN access to the power sources.
• Install the environment and any drivers and utilities, as specified by the vendor documentation. If you plan to protect access to the power sources, set up the UID and password to be used.
Use the following command to identify a power source:
ibrix_powersrc -a -t {ipmi|openipmi|openipmi2|ilo} -h HOSTNAME -I IPADDR -u USERNAME -p PASSWORD
For example, to identify an iLO power source at IP address 192.168.3.170 for nod
brix_nic -c -n bond0:1
ibrix_nic -c -n bond0:1 -h node1 -I 16.123.200.201 -M 255.255.255.0 -B 16.123.200.255
ibrix_nic -c -n bond0:1 -h node2 -I 16.123.200.202 -M 255.255.255.0 -B 16.123.200.255
ibrix_nic -c -n bond0:1 -h node3 -I 16.123.200.203 -M 255.255.255.0 -B 16.123.200.255
ibrix_nic -c -n bond0:1 -h node4 -I 16.123.200.204 -M 255.255.255.0 -B 16.123.200.255

Configuring backup servers
The servers in the cluster are configured in backup pairs. If this step was not done when your cluster was installed, assign backup servers for the bond0:1 interface. For example, node1 is the backup for node2, and node2 is the backup for node1.
1. Add the VIF:
ibrix_nic -a -n bond0:2 -h node1,node2,node3,node4
2. Set up a backup server for each VIF:
ibrix_nic -b -H node1/bond0:1,node2/bond0:2
ibrix_nic -b -H node2/bond0:1,node1/bond0:2
ibrix_nic -b -H node3/bond0:1,node4/bond0:2
ibrix_nic -b -H node4/bond0:1,node3/bond0:2

Configuring NIC failover
NIC monitoring should be configured on VIFs that will be used by NFS, SMB, FTP, or HTTP.
IMPORTANT: When configuring NIC monitoring, use the same backup pairs that you used when configuring standby servers.
For example:
ibrix_nic -m -h node1 -A node2/bond0:1
ibrix_nic -m -h node2 -A node1/bond0:1
ibrix_nic -m -h node3 -A node4/bond0:1
ibrix_nic -m -h node4 -A node3/bond0:1

Configuring automated failover
To enable automated failover for your file servin
brix/tmp/fmbackup.zip. Be sure to save this file in a location outside of the cluster.
On the active management console node, disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
Verify that automated failover is off:
<ibrixhome>/bin/ibrix_server -l
In the output, the HA column should display Off.
On the node hosting the active management console, place the management console into maintenance mode. This step fails over the active management console role to the node currently hosting the passive agile management console:
<ibrixhome>/bin/ibrix_fm -m nofmfailover
Wait approximately 60 seconds for the failover to complete, and then run the following command on the node that was the target for the failover:
<ibrixhome>/bin/ibrix_fm -i
The command should report that the agile management console is now Active on this node.
From the node on which you failed over the active management console in step 4, change the status of the management console from maintenance to passive:
<ibrixhome>/bin/ibrix_fm -m passive
On the node hosting the active management console, manually fail over the node now hosting the passive management console:
<ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME
Wait a few minutes for the node to reboot, and then run the following command to verify that the failover was successful. The output should report Up, FailedOver:
<ibrixhome>/bin/ibrix
c -i -h hostname
Check the output for a line such as the following:
Monitored By: titan16
To remove the monitor, use the following command:
ibrix_nic -m -h MONITORHOST -D DESTHOST/IFNAME
For example:
ibrix_nic -m -h titan16 -D titan15/eth2

Recovering a 9720 or 9730 file serving node
NOTE: If you are recovering a blade on an IBRIX 9730 system, the Quick Restore procedure goes through the steps needed to form a cluster. It requires that you validate the chassis components; however, you do not need to configure or modify the cluster configuration.
To recover a failed blade, follow these steps:
1. Log in to the server.
9730 systems: The welcome screen for the installation wizards appears, and the setup wizard then verifies the firmware on the system and notifies you if a firmware update is needed. The installation and configuration times noted throughout the wizard are for a new installation; replacing a node requires less time.
IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system.
9720 systems: The System Deployment Menu appears. Select Join an existing cluster.
2. The wizard scans the network for existing clusters. On the Join Cluster dialog box, select the
cates that a monitored HBA or pair of HBAs has failed.

What happens during a failover
The following actions occur when a server is failed over to its backup:
1. The Fusion Manager verifies that the backup server is powered on and accessible.
2. The Fusion Manager migrates ownership of the server's segments to the backup and notifies all servers and 9000 clients about the migration. This is a persistent change. If the server is hosting the active FM, it transitions to another server.
3. If NIC monitoring is configured, the Fusion Manager activates the standby NIC and transfers the IP address or VIF to it.
Clients that were mounted on the failed-over server may experience a short service interruption while server failover takes place. Depending on the protocol in use, clients can continue operations after the failover or may need to remount the file system using the same VIF. In either case, clients will be unaware that they are now accessing the file system on a different server.
To determine the progress of a failover, view the Status tab on the GUI or execute the ibrix_server -l command. While the Fusion Manager is migrating segment ownership, the operational status of the node is Up-InFailover or Down-InFailover, depending on whether the node was powered up or down when failover was initiated. When failover is complete, the operational status changes to Up-FailedOver or Down-FailedOver. For more information about
cation and privacy settings are optional. An authentication password is required if the group has a security level of either authNoPriv or authPriv. The privacy password is required if the group has a security level of authPriv. If unspecified, MD5 is used as the authentication algorithm and DES as the privacy algorithm, with no passwords assigned.
For example, to create user3, add that user to group2, and specify an authentication password with no encryption, enter:
ibrix_snmpuser -c -n user3 -g group2 -k auth_passwd -s authNoPriv

Deleting elements of the SNMP configuration
All SNMP commands use the same syntax for delete operations, using -d to indicate the object to delete. The following command deletes a list of hosts that were trapsinks:
ibrix_snmptrap -d -h lab15-12.domain.com,lab15-13.domain.com,lab15-14.domain.com
There are two restrictions on SNMP object deletions:
• A view cannot be deleted if it is referenced by a group.
• A group cannot be deleted if it is referenced by a user.

Listing SNMP configuration information
All SNMP commands employ the same syntax for list operations, using the -l flag. For example:
ibrix_snmpgroup -l
This command lists the defined group settings for all SNMP groups. Specifying an optional group name lists the defined settings for that group only.

6 Configuring system backups
Backing up the Fusion Manager configuration
The Fus
ce in an automated failover setup, the Fusion Manager will query the node the first time a network is failed over to the VIF. Otherwise, you must enter the VIF's IP address and netmask manually in the configuration database (see "Setting network interface options in the configuration database" (page 113)). The Fusion Manager does not require a MAC address for a VIF.
If you created a user network interface for 9000 client traffic, you will need to prefer the network for the 9000 clients that will use the network (see "Preferring network interfaces" (page 114)).

Setting network interface options in the configuration database
To make a VIF usable, execute the following command to specify the IP address and netmask for the VIF. You can also use this command to modify certain ifconfig options for a network:
ibrix_nic -c -n IFNAME -h HOSTNAME [-I IPADDR] [-M NETMASK] [-B BCASTADDR] [-T MTU]
For example, to set netmask 255.255.0.0 and broadcast address 10.0.0.4 for interface eth3 on file serving node s4.hp.com:
ibrix_nic -c -n eth3 -h s4.hp.com -M 255.255.0.0 -B 10.0.0.4

Preferring network interfaces
After creating a user network interface for file serving nodes or 9000 clients, you will need to prefer the interface for those nodes and clients. (It is not necessary to prefer a network interface for NFS or SMB clients, because they can select the correct user network interface at mount time.) A network interface prefer
ce on the local disk to save node-specific configuration information. After each node is upgraded, its configuration is automatically reapplied.
• Manual upgrades: Before each server upgrade, this process requires that you back up the node-specific configuration information from the server onto an external device. After the server is upgraded, you will need to copy and restore the node-specific configuration information manually.
The upgrade takes approximately 45 minutes for 9720 systems with a standard configuration.
NOTE: If you are upgrading from an IBRIX 5.x release, any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Download a copy of the archive files (.tgz) from the admin/platform/diag/supporttickets directory.

Automatic upgrades
All file serving nodes and management consoles must be up when the upgrade is performed. If a node or management console is not up, the upgrade script will fail. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
To upgrade all nodes in the cluster automatically, complete the following steps:
1. Check the dashboard on the management console GUI to verify that all nodes are up.
2. Obtain the latest release image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
3. Copy the release .iso file onto the current active management console.
4. Run th
ch.
On Drawer 2:
• SAS port 1 connector on the primary I/O module (Drawer 2) to port 2 on the Bay 7 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 2) to port 2 on the Bay 8 SAS switch

IBRIX 9730 CX 3 connections to the SAS switches
On Drawer 1:
• SAS port 1 connector on the primary I/O module (Drawer 1) to port 3 on the Bay 5 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 1) to port 3 on the Bay 6 SAS switch
On Drawer 2:
• SAS port 1 connector on the primary I/O module (Drawer 2) to port 3 on the Bay 7 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 2) to port 3 on the Bay 8 SAS switch

IBRIX 9730 CX 7 connections to the SAS switches in the expansion rack
On Drawer 1:
• SAS port 1 connector on the primary I/O module (Drawer 1) to port 7 on the Bay 5 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 1) to port 7 on the Bay 6 SAS switch
On Drawer 2:
• SAS port 1 connector on the primary I/O module (Drawer 2) to port 7 on the Bay 7 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 2) to port 7 on the Bay 8 SAS switch

IBRIX 9730 CX 7 connections to the SAS switches in
cluster network interface or a user network interface. To specify a new virtual cluster interface, use the following command:
ibrix_fm -c <VIF IP address> -d <VIF device> -n <VIF netmask> -v cluster [-I <local IP address or DNS hostname>]

Managing routing table entries
IBRIX software supports one route for each network interface in the system routing table. Entering a new route for an interface overwrites the existing routing table entry for that interface.

Adding a routing table entry
To add a routing table entry, use the following command:
ibrix_nic -r -n IFNAME -h HOSTNAME -A -R ROUTE
The following command adds a route for virtual interface eth2:232 on file serving node s2.hp.com, sending all traffic through gateway gw.hp.com:
ibrix_nic -r -n eth2:232 -h s2.hp.com -A -R gw.hp.com

Deleting a routing table entry
If you delete a routing table entry, it is not replaced with a default entry. A new replacement route must be added manually. To delete a route, use the following command:
ibrix_nic -r -n IFNAME -h HOSTNAME -D
The following command deletes all routing table entries for virtual interface eth0:1 on file serving node s2.hp.com:
ibrix_nic -r -n eth0:1 -h s2.hp.com -D

Deleting a network interface
Before deleting the interface used as the cluster interface on a file serving node, you must assign a new interface as the cluster interface. See "Changing the cluster interface" (page 11
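The delete form of ibrix_nic is not shown on this page. Assuming it follows the same -d delete convention used by the other commands in this guide, removing a user interface would look like the following sketch (verify against the CLI reference for your release before use):

# Assumed example: delete virtual interface eth2:232 from node s2.hp.com
ibrix_nic -d -n eth2:232 -h s2.hp.com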
ctiveFM  172.16.0.281  No

Specifying VIFs in the client configuration
When you configure your clients, you may need to specify the VIF that should be used for client access.
NFS/SMB: Specify the VIF IP address of the servers (for example, bond0:1) to establish the connection. You can also configure DNS round robin to ensure NFS or SMB client-to-server distribution. In both cases, the NFS/SMB clients will cache the initial IP they used to connect to the respective share, usually until the next reboot.
FTP: When you add an FTP share (on the Add FTP Shares dialog box or with the ibrix_ftpshare command), specify the VIF as the IP address that clients should use to access the share.
HTTP: When you create a virtual host (on the Create Vhost dialog box or with the ibrix_httpvhost command), specify the VIF as the IP address that clients should use to access shares associated with the Vhost.
9000 clients: Use the following command to prefer the appropriate user network. Execute the command once for each destination host that the client should contact using the specified interface:
ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME
For example:
ibrix_client -n -h client12.mycompany.com -A ib50-81.mycompany.com/bond1
NOTE: Because the backup NIC cannot be used as a preferred network interface for 9000 clients, add one or more user network interfaces to ensure that HA and client communication work together.
Configuring
d Administrator.

Accessing the OA through the network
The OA has a CLI that can be accessed using ssh. The address of the OA is automatically placed in /etc/hosts; the name is <systemname>-mp. For example, to connect to the OA on a system called glory, use the following command:
ssh exds@glory-mp

Access the OA Web-based administration interface
The OA also has a Web-based administration interface. Because the OA's IP address is on the management network, you cannot access it directly from outside the system. You can use ssh tunneling to access the OA. For example, using a public domain tool such as PuTTY, you can configure a local port (for example, 8888) to forward to <systemname>-mp:443 on the remote server. For example, if the system is called glory, you configure the remote destination as glory-mp:443. Then log into glory from your desktop. On your desktop, point your browser at https://localhost:8888. This will connect you to the OA.
On a Linux system, this is equivalent to the following command:
ssh glory1 -L 8888:glory-mp:443
However, your Linux browser might not be compatible with the OA.

Accessing the OA through the serial port
Each OA has a serial port. This port can be connected to a terminal concentrator. This provides remote access to the system if all servers are powered off. All OA commands and functionality are available through the serial port. To log in, you ca
d IBRIX file system.

Port                    Description
8008/tcp, 9002/tcp,
9005/tcp, 9008/tcp,
9009/tcp, 9200/tcp
                        Between file serving nodes and NFS clients (user network):
2049/tcp, 2049/udp      NFS
111/tcp, 111/udp        RPC
875/tcp, 875/udp        quota
32803/tcp               lockmanager
32769/udp               lockmanager
892/tcp, 892/udp        mount daemon
662/tcp, 662/udp        stat
2020/tcp, 2020/udp      stat outgoing
4000-4003/tcp           reserved for use by a custom application (CMU) and can be disabled if not used
137/udp, 138/udp,       Between file serving nodes and SMB clients (user network)
139/tcp, 445/tcp
9000-9002/tcp,          Between file serving nodes and 9000 clients (user network)
9000-9200/udp
20/tcp, 20/udp,         Between file serving nodes and FTP clients (user network)
21/tcp, 21/udp
7777/tcp, 8080/tcp      Between GUI and clients that need to access the GUI
5555/tcp, 5555/udp      Data Protector
631/tcp, 631/udp        Internet Printing Protocol (IPP)
1344/tcp, 1344/udp      ICAP

Configuring NTP servers
When the cluster is initially set up, primary and secondary NTP servers are configured to provide time synchronization with an external time source. The list of NTP servers is stored in the Fusion Manager configuration. The active Fusion Manager node synchronizes its time with the external source. The other file serving nodes synchronize their time with the active Fusion Manager node. In the absence of an external time source, the
d for, as either the owner or the backup, by entering the following commands:
1. To view all segments, the logical volume name, and the owner, enter the following command (on one line):
ibrix_fs -i | egrep -e OWNER -e MIXED | awk '{print $1, $3, $6, $2, $14, $5}'
2. To verify the visibility of the correct segments on the current file system node, enter the following command on each file system node:
lvm lvs | awk '{print $1}'
Ensure that no active tasks are running. Stop any active Remote Replication, data tiering, or Rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again. For additional information on how to stop a task, see the help for the ibrix_task command.
For 9720 systems, delete the existing vendor storage by entering the following command:
ibrix_vs -d -n EXDS
The vendor storage is registered automatically after the upgrade.
Record all host tunings, file system tunings, and file system mounting options by using the following commands:
1. To display file system tunings, enter:
ibrix_fs_tune -l > /local/ibrix_fs_tune_l.txt
2. To display default IBRIX tunings and settings, enter:
ibrix_host_tune -L > /local/ibrix_host_tune_L.txt
3. To display all non-default configuration tunings and settings, enter:
ibrix_host_tune -q > /local/ibrix_host_tune_q.txt
Ensure that the ibrix local user account exists and it has the same UID number on all t
d, then recheck.
Restart the Fusion Manager services:
/etc/init.d/ibrix_fusionmanager restart
NOTE: It takes approximately 90 seconds for the agile Fusion Manager to return to optimal, with the agile cluster VIF device appearing in ifconfig output. Verify that this device is present in the output.
Verify that the agile Fusion Manager is active:
ibrix_fm -i
For example:
[root@x109s1 ~]# ibrix_fm -i
FusionServer: x109s1 (active, quorum is running)
Command succeeded
Verify that there is only one Fusion Manager in this cluster:
ibrix_fm -f
For example:
[root@x109s1 ~]# ibrix_fm -f
NAME    IP ADDRESS
x109s1  172.16.3.100
Command succeeded
11. Install a passive agile Fusion Manager on a second file serving node. In the command, the -F option forces the overwrite of the new lvm2 file that was installed with the IBRIX software. Run the following command on the file serving node:
./ibrixinit -tm -C <local cluster interface device> -v <agile cluster VIF IP> -m <cluster netmask> -d <cluster VIF device> -w 9009 -M passive -F
For example:
[root@x109s3 <ibrix install code directory>]# ./ibrixinit -tm -C bond0 -v 172.16.3.1 -m 255.255.248.0 -d bond0:0 -V 10.30.83.1 -N 255.255.0.0 -D bond1:0 -w 9009 -M passive -F
NOTE: Verify that the local agile Fusion Manager name is in the /etc/ibrix/fminstance.xml file. Run the following command:
grep -i current /etc/ibrix/fmin
dd the specified host to the finance group:
ibrix_hostgroup -m -g finance -h cl01.hp.com

Adding a domain rule to a host group
To configure automatic host group assignments, define a domain rule for host groups. A domain rule restricts host group membership to clients on a particular cluster subnet. The Fusion Manager uses the IP address that you specify for clients when you register them to perform a subnet match and sorts the clients into host groups based on the domain rules.
Setting domain rules on host groups provides a convenient way to centrally manage mounting, tuning, allocation policies, and preferred networks on different subnets of clients. A domain rule is a subnet IP address that corresponds to a client network. Adding a domain rule to a host group restricts its members to 9000 clients that are on the specified subnet. You can add a domain rule at any time.
To add a domain rule to a host group, use the ibrix_hostgroup command as follows:
ibrix_hostgroup -a -g GROUPNAME -D DOMAIN
For example, to add the domain rule 192.168 to the finance group:
ibrix_hostgroup -a -g finance -D 192.168

Viewing host groups
To view all host groups or a specific host group, use the following command:
ibrix_hostgroup -l [-g GROUP]

Deleting host groups
When you delete a host group, its members are reassigned to the parent of the deleted group. To force the reassigned 9000 clients to implement the mounts
download iLO2 version 2.05 using the following URL, and copy the firmware update to each server. Follow the installation instructions noted in the URL. This issue does not affect G7 servers.
http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=1466584&swItem=MTX-949698a14e114478b9fe126499&prodNameId=11357728&swEnvOID=4103&swLang=8&taskId=135&mode=3

A change in the inode format impacts files used for snapshots and data retention:
• Snapshots: Files used for snapshots must either be created on IBRIX software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).
• Data retention: Files used for data retention (including WORM and auto-commit) must be created on IBRIX software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).

Offline upgrades for IBRIX software 5.6.x or 6.0.x to 6.1
Preparing for the upgrade
To prepare for the upgrade, complete the following step
148. e by using the df command Note any custom tuning parameters such as file system mount options When the upgrade is complete you can reapply the parameters Stop all client O to the cluster or file systems On the Linux client use Lsof lt mountpoint gt to show open files belonging to active processes On all nodes hosting the passive Fusion Manager place the Fusion Manager into maintenance mode ibrixhome bin ibrix fm m nofmfailover On the active Fusion Manager node disable automated failover on all file serving nodes ibrixhome bin ibrix server m U Run the following command to verify that automated failover is off In the output the HA column should display o ibrixhome bin ibrix server 1 Unmount file systems on Linux 9000 clients ibrix umount f MOUNTPOINT 128 Upgrading the IBRIX software to the 6 2 release 12 Stop the SMB NFS and NDMP services on all nodes Run the following commands on the 13 node hosting the active Fusion Manager ibrix server s t cifs c stop ibrix server s t nfs c stop ibrix server s t ndmp c stop If you are using SMB verify that all likewise services are down on all file serving nodes ps ef grep likewise Use kill 9 to stop any likewise services that are still running If you are using NFS verify that all NFS processes are stopped ps grep nfs If necessary use the following command to stop NFS services etc init d nfs stop Use kill
149. e Using AutoPass to retrieve and install permanent license keys page 135 e Fax Password Request Form that came with your License Entitlement Certificate See the certificate for fax numbers in your area e or email the HP Password Center See the certificate for telephone numbers in your area or email addresses Using AutoPass to retrieve and install permanent license keys The procedure must be run from a client with JRE 1 5 or later installed and with a desktop manager running for example a Linux based system running X Windows The ssh client must also be installed 1 On the Linux based system run the following command to connect to the Fusion Manager ssh X rootGe management console IP 2 When prompted enter the password for the Fusion Manager 3 Launch the AutoPass GUI usr local ibrix bin fusion license manager P In the AutoPass GUI go to Tools select Configure Proxy and configure your proxy settings 5 Click Retrieve Install License gt Key and then retrieve and install your license key If the Fusion Manager machine does not have an Internet connection retrieve the license from a machine that does have a connection deliver the file with the license to the Fusion Manager machine and then use the AutoPass GUI to import the license Viewing license terms 135 14 Upgrading firmware Before performing any of the procedures in this chapter read the important warnings precautions and safet
150. e Fusion Manager To force this contact restart IBRIX software services on the clients reboot the clients or execute ibrix lwmount aoribrix lwhost a When contacted the Fusion Manager informs the clients about commands that were executed on host groups to which they belong The clients then use this information to perform the operation You can also use host groups to perform different operations on different sets of clients To do this create a host group tree that includes the necessary host groups You can then assign the clients manually or the Fusion Manager can automatically perform the assignment when you register an IBRIX 9000 client based on the client s cluster subnet To use automatic assignment create a domain rule that specifies the cluster subnet for the host group Creating a host group tree The clients host group is the root element of the host group tree Each host group in a tree can have only one parent but a parent can have multiple children In a host group tree operations performed on lower level nodes take precedence over operations performed on higher level nodes This means that you can effectively establish global client settings that you can override for specific clients For example suppose that you want all clients to be able to mount file system ifs1 and to implement a set of host tunings denoted as Tuning 1 but you want to override these global settings for certain host groups To do this mount ifs1 on
151. e IP addresses of the iLOs on each server If it cannot locate an IP address you will need to enter the address on the dialog box When you have completed the information click Enable HA Monitoring and Auto Failover for both servers Configuring High Availability on the cluster 41 m Server HA Pair Server HA Pair NIC HA Setup Server High Availability works by designating server pairs as backups of each other hen is enabled auto failover will occur if either server becomes unavailable ILO IP addresses are required to be able to automatically power a server up or down Server HA Analysis Selected Server ib69s1 current designated backup ib69s2 Selected Server System Type G5 X9320 6G MOTE Servers ib69s1 and ib69s2 are verified as a couplet pair seeing the same storage Server HA Pairing Server ib69s1 Server ib69s2 Y ILO IP 192 168 69 101 ILO IP 192 168 69 102 v Enable HA Monitoring and Auto Failover for both servers Use the NIC HA Setup dialog box to configure NICs that will be used for data services such as SMB or NFS You can also designate NIC HA pairs on the server and its backup and enable monitoring of these NICs Server HA Pair NIC HA Setup wb NIC HA Setup High Availability for a physical or virutal NIC typically servicing file share data works by assigning a standy NIC on the backup server in a server pair When server HA is enabled a monitored NIC will caus
152. e appropriate for your cluster 1 O disabled The following examples set the parameters to the default values for the 6 2 release ibrix cifsconfig t S smb signing enabled 0 smb signing required 0 ibrix cifsconfig t S ignore writethru 1 The SMB signing feature specifies whether clients must support SMB signing to access SMB shares See the HP IBRIX 9000 Storage File System User Guide for more information about this feature Whenignore_writethru is enabled IBRIX software ignores writethru buffering to improve SMB write performance on some user applications that request it Mount file systems on Linux 9000 clients IF you have a file system version prior to version 6 you might have to make changes for snapshots and data retention as mentioned in the following list e Snapshots Files used for snapshots must either be created on IBRIX software 6 0 or later or the pre 6 0 file system containing the files must be upgraded for snapshots To upgrade a file system use the upgrade60 sh utility For more information see Upgrading pre 6 0 file systems for software snapshots page 185 e Data retention Files used for data retention including WORM and auto commit must be created on IBRIX software 6 1 1 or later or the pre 6 1 1 file system containing the files must be upgraded for retention features To upgrade a file system use the ibrix reten adm u f FSNAME command Additional steps are required before and after yo
153. e auto failover if the NIC becomes unavailable Add NIC Monitoring Server Server Standby HIC ib69s1 Active User NICs META No active physical or virtual User NICs found Add one via the Add NIC button Remove ib69s2 Active User HICs NCHA Monitoring Server Server Standby Standby Server No active physical or virtual User NICs found Add one via the Add NIC button 42 Configuring failover For example you can create a user VIF that clients will use to access an SMB share serviced by server ib 9s1 The user is based on an active physical network on that server To do this click Add NIC in the section of the dialog box for ib69s1 On the Add NIC dialog box enter a NIC name In our example the cluster uses the unified network and has only bondo the active cluster FM IP We cannot use bondo 0 which is the management IP VIF We will create the VIF bondo 1 using bondo as the base When you click OK the user VIF is created Server ib69s1 Enter a NIC name of an existing physical interface e g eth4 or bond1 to configure an active physical network To create a virtual interface VIF enter a NIC name e g bond1 1 based on an existing active physical network Name bond0 1 P Address 10 30 69 151 Net Mask 255 255 255 0 Route MTU Required Value Add NIC ne The new active user NIC
e following command, specifying the location of the local .iso copy as the argument:
/usr/local/ibrix/setup/upgrade <iso>
The upgrade script performs all necessary upgrade steps on every server in the cluster and logs progress in the file /usr/local/ibrix/setup/upgrade.log. After the script completes, each server will be automatically rebooted and will begin installing the latest software.
5. After the install is complete, the upgrade process automatically restores node-specific configuration information, and the cluster should be running the latest software. If an UPGRADE FAILED message appears on the active management console, see the specified log file for details.

Manual upgrades
The manual upgrade process requires external storage that will be used to save the cluster configuration. Each server must be able to access this media directly, not through a network, as the network configuration is part of the saved configuration. HP recommends that you use a USB stick or DVD.
NOTE: Be sure to read all instructions before starting the upgrade procedure. To determine which node is hosting the agile management console configuration, run the ibrix_fm -i command.

Preparing for the upgrade
Complete the following steps:
1. Ensure that all nodes are up and running.
2. On the active management console node, disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
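Before continuing with the remaining preparation steps, the cluster state called out in steps 1 and 2 can be confirmed with commands shown elsewhere in this guide; a short sketch:

# All file serving nodes should report Up, and the HA column should display Off
<ibrixhome>/bin/ibrix_server -l
# Overall health should report PASSED
<ibrixhome>/bin/ibrix_health -l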
e for minor and maintenance upgrades.
• Offline upgrades: This procedure requires that file systems be unmounted on the node and that services be stopped. (Each file serving node may need to be rebooted if NFS or SMB causes the unmount operation to fail.) You can then perform the upgrade. Clients experience a short interruption to file system access while each file serving node is upgraded. You can use an automatic or a manual procedure to perform an offline upgrade. Online upgrades must be performed manually.

Automatic upgrades
The automated upgrade procedure is run as an offline upgrade. When each file serving node is upgraded, all file systems are unmounted from the node and services are stopped. Clients will experience a short interruption to file system access while the node is upgraded.
All file serving nodes and management consoles must be up when you perform the upgrade. If a node or management console is not up, the upgrade script will fail and you will need to use a manual upgrade procedure instead. To determine the status of your cluster nodes, check the dashboard on the GUI.
To upgrade all nodes in the cluster automatically, complete the following steps:
1. Check the dashboard on the management console GUI to verify that all nodes are up.
2. Verify that you have an even number of FSNs configured in a couplet-pair high availability architecture by running the following command
e is deleted using management console    physicalvolume.deleted

You can be notified of cluster events by email or SNMP traps. To view the list of supported events, use the command ibrix_event -q.

Setting up email notification of cluster events
You can set up event notifications by event type or for one or more specific events. To set up automatic email notification of cluster events, associate the events with email recipients and then configure email settings to initiate the notification process.

Associating events and email addresses
You can associate any combination of cluster events with email addresses: all Alert, Warning, or Info events; all events of one type plus a subset of another type; or a subset of all types.
The notification threshold for Alert events is 90% of capacity. Threshold-triggered notifications are sent when a monitored system resource exceeds the threshold and are reset when the resource utilization dips 10% below the threshold. For example, a notification is sent the first time usage reaches 90% or more. The next notice is sent only if the usage declines to 80% or less (the event is reset) and subsequently rises again to 90% or above.
To associate all types of events with recipients, omit the -e argument in the following command:
ibrix_event -c [-e ALERT|WARN|INFO|EVENTLIST] -m EMAILLIST
Use the ALERT, WARN, and INFO keywords to make specific type associations, or use EVENTLIST to associate specific
e moved to bond1 to enable access to ports 1234 and 9009. To move the Agile Cluster VIF to bond1, complete these steps:
1. From the active Fusion Manager, list all passive management consoles, move them to maintenance mode, and then unregister them from the agile configuration:
# ibrix_fm -f
# ibrix_fm -m nofmfailover
# ibrix_fm -u <management console name>
2. Define a new Agile Cluster VIF DEV and the associated Agile Cluster VIF IP.
3. Change the Fusion Manager's local cluster address from bond0 to bond1 in the IBRIX database:
a. Change the previously defined Agile Cluster VIF IP registration address. On the active Fusion Manager, specify a new Agile Cluster VIF IP on the bond1 subnet:
# ibrix_fm -t -I <new Agile Cluster VIF IP> -n
NOTE: The ibrix_fm -t command is not documented, but can be used for this operation.
b. On each file serving node, edit the /etc/ibrix/iadconf.xml file:
# vi /etc/ibrix/iadconf.xml
In the file, enter the new Agile Cluster VIF IP address on the following line:
<property name="fusionManagerPrimaryAddress" value="xxx.xxx.xxx.xxx"/>
4. On the active Fusion Manager, re-register all backup management consoles using the bond1 Local Cluster IP address for each node:
# ibrix_fm -R <management console name> -I <local cluster network IP>
NOTE: When registering a Fusion Manager, be sure the hostname specified with -R matches the hostname of the server.
5. R
e network interface vendor documentation for any rules or restrictions required for link aggregation.

[Figure: Link aggregation example — two physical NICs (MAC addresses 00:07:91:E9:04:42 and 00:07:91:E9:04:D2) bonded as bond0. In one configuration, bandwidth is bonded to use a single network interface name with twice the capacity; in the other, bandwidth is divided into separate virtual interfaces (VIFs such as bond0:1 192.168.1.101 through bond0:5 192.168.3.101) to manage and route independently.]

Identifying a user network interface for a file serving node
To identify a user network interface for specific file serving nodes, use the ibrix_nic command. The interface name (IFNAME) can include only alphanumeric characters and underscores, such as eth1:
ibrix_nic -a -n IFNAME -h HOSTLIST
If you are identifying a VIF, add the VIF suffix (:nnn) to the physical interface name. For example, the following command identifies virtual interface eth1:1 to physical network interface eth1 on file serving nodes s1.hp.com and s2.hp.com:
ibrix_nic -a -n eth1:1 -h s1.hp.com,s2.hp.com
When you identify a user network interface for a file serving node, the Fusion Manager queries the node for its IP address, netmask, and MAC address and imports the values into the configuration database. You can modify these values later, if necessary.
If you identify a VIF, the Fusion Manager does not automatically query the node. If the VIF will be used only as a standby network interfa
e recovered node.
If you disabled NIC monitoring before using the QuickRestore DVD, re-enable the monitor:
ibrix_nic -m -h MONITORHOST -A DESTHOST/IFNAME
For example:
ibrix_nic -m -h titan16 -A titan15/eth2
7. Configure Insight Remote Support on the node. See "Configuring HP Insight Remote Support on IBRIX 9000 systems" (page 24).
8. Run ibrix_health -l from the node hosting the active Fusion Manager to verify that no errors are being reported.
NOTE: If the ibrix_health command reports that the restored node failed, run the following command:
ibrix_health -i -h <hostname>
If this command reports failures for volume groups, run the following command:
ibrix_pv -a -h <hostname of restored node>
9. If the following files are customized on your system, update them on the restored node:
• /etc/hosts: Copy this file from a working node to /etc/hosts on the restored node.
• /etc/machines: Ensure that the file has server hostname entries for all servers, on all nodes.

Restoring services
When you perform a Quick Restore of a file serving node, the NFS, SMB, FTP, and HTTP export information is not automatically restored to the node. After operations are failed back to the node, the I/O from client systems to the node fails for the NFS, SMB, FTP, and HTTP shares. To avoid this situation, manually restore the NFS, SMB, FTP, and HTTP exports on the node before failing it back.

Restore SMB services
Complete the
e remaining functional port with no Fusion Manager involvement.
HBAs use worldwide names for some parameter values. These are either worldwide node names (WWNN) or worldwide port names (WWPN). The WWPN is the name an HBA presents when logging in to a SAN fabric. Worldwide names consist of 16 hexadecimal digits grouped in pairs. In IBRIX software, these are written as dot-separated pairs (for example, 21.00.00.e0.8b.05.05.04).
To set up HBA monitoring, first discover the HBAs, and then perform the procedure that matches your HBA hardware:
• For single-port HBAs without built-in standby switching: Turn on HBA monitoring for all ports that you want to monitor for failure.
• For dual-port HBAs with built-in standby switching, and single-port HBAs that have been set up as standby pairs in a software operation: Identify the standby pairs of ports to the configuration database, and then turn on HBA monitoring for all paired ports. If monitoring is turned on for just one port in a standby pair and that port fails, the Fusion Manager will fail over the server even though the HBA has automatically switched traffic to the surviving port. When monitoring is turned on for both ports, the Fusion Manager initiates failover only when both ports in a pair fail.
When both HBA monitoring and automated failover for file serving nodes are configured, the Fusion Manager will fail over a server in two situations:
• B
e sich bezüglich der Entsorgung mit einem HP Partner in Verbindung. Weitere Informationen zum Austausch von Batterien und Akkus oder zur sachgemäßen Entsorgung erhalten Sie bei Ihrem HP Partner oder Servicepartner.

Italian battery notice
Istruzioni per la batteria
AVVERTENZA: Questo dispositivo può contenere una batteria. Non tentare di ricaricare le batterie se rimosse. Evitare che le batterie entrino in contatto con l'acqua o siano esposte a temperature superiori a 60 °C (140 °F). Non smontare, schiacciare, forare o utilizzare in modo improprio la batteria. Non accorciare i contatti esterni e non gettare in acqua o sul fuoco la batteria. Sostituire la batteria solo con i ricambi HP previsti a questo scopo.
Le batterie e gli accumulatori non devono essere smaltiti insieme ai rifiuti domestici. Per procedere al riciclaggio o al corretto smaltimento, utilizzare il sistema di raccolta pubblico dei rifiuti o restituirli ai Partner Ufficiali HP o ai relativi rappresentanti. Per ulteriori informazioni sulla sostituzione e sullo smaltimento delle batterie, contattare un Partner Ufficiale HP o un Centro di assistenza autorizzato.

Japanese battery notice
e ss01:
ibrix_powersrc -a -t ilo -h ss01 -I 192.168.3.170 -u Administrator -p password
3. Configure NIC monitoring.
NIC monitoring should be configured on user VIFs that will be used by NFS, SMB, FTP, or HTTP.
IMPORTANT: When configuring NIC monitoring, use the same backup pairs that you used when configuring backup servers.
Identify the servers in a backup pair as NIC monitors for each other. Because the monitoring must be declared in both directions, enter a separate command for each server in the pair:
ibrix_nic -m -h MONHOST -A DESTHOST/IFNAME
The following example sets up monitoring for NICs over bond0:1:
ibrix_nic -m -h node1 -A node2/bond0:1
ibrix_nic -m -h node2 -A node1/bond0:1
ibrix_nic -m -h node3 -A node4/bond0:1
ibrix_nic -m -h node4 -A node3/bond0:1
The next example sets up server s2.hp.com to monitor server s1.hp.com over user network interface eth1:
ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1
4. Enable automated failover.
Automated failover is turned off by default. When automated failover is turned on, the Fusion Manager starts monitoring heartbeat messages from file serving nodes. You can turn automated failover on and off for all file serving nodes or for selected nodes.
Turn on automated failover:
ibrix_server -m [-h SERVERNAME]

Changing the HA configuration manually
Update a power source: If you change the IP address or passw
e upgrade manually .......... 129
After the upgrade .......... 130
Upgrading Linux 9000 clients .......... 131
Installing a minor kernel update on Linux clients .......... 132
Upgrading Windows 9000 clients .......... 132
Troubleshooting upgrade issues .......... 132
Automatic upgrades .......... 132
Manual upgrades .......... 133
Offline upgrade fails because iLO firmware is out of date .......... 133
Node is not registered with the cluster network .......... 133
File system unmount issues .......... 134
13 Licensing .......... 135
Viewing license terms .......... 135
Retrieving a license key .......... 135
Using AutoPass to retrieve and install permanent license keys .......... 135
14 Upgrading firmware .......... 136
Components for firmware upgrades .......... 136
Steps for upgrading the firmware .......... 137
Finding additional information on FMT .......... 140
Adding performance modules on 9730 systems .......... 141
Adding new server blades on 9720 systems .......... 141
15 Troubleshooting .......... 143
Collecting information for HP Support with Ibrix Collect .......... 143
Collecting logs .......... 143
Deleting the archive file .......... 144
Downloading the archive file .......... 144
Configuring Ibrix Collect .......... 145
Viewing data collection information .......... 146
Viewing data collection configuration information
ector
2. SAS port 2 connector
4. Primary I/O module (Drawer 2)
6. SAS port 1 connector
8. SAS port 2 connector
10. SAS port 1 connector
12. Secondary I/O module (Drawer 1)

Back view of the expansion rack

IBRIX 9730 CX 1 connections to the SAS switches
The connections to the SAS switches are:
• SAS port 1 connector on the primary I/O module (Drawer 1) to port 1 on the Bay 5 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 1) to port 1 on the Bay 6 SAS switch
• SAS port 1 connector on the primary I/O module (Drawer 2) to port 1 on the Bay 7 SAS switch
• SAS port 1 connector on the secondary I/O module (Drawer 2) to port 1 on the Bay 8 SAS switch
The number corresponding to the location of the 9730 CX corresponds to the port number on the SAS switch to which the 9730 CX is connected. The ports on the SAS switches are labeled 1 through 8, starting from the left. For example, the 9730 CX 2 connects to port 2 on each SAS switch. The IBRIX 9730 CX 7 connects to port 7 on each SAS switch.
ed components will be reported in the output of ibrix_vs -i, and failed storage components will be reported in the output of ibrix_health -V -i.

Identifying failed I/O modules on an X9700cx chassis
When an X9700cx I/O module (or the SAS cable connected to it) fails, the X9700c controller attached to the I/O module reboots; if the I/O module does not immediately recover, the X9700c controller stays halted. Because there are two X9700cx I/O modules, it is not immediately obvious which I/O module has failed. In addition, the X9700c controller may halt or appear to fail for other reasons. This document describes how to identify whether the failure condition is on the X9700cx I/O module or elsewhere.

Failure indications
A failed or halted X9700c controller is indicated in a number of ways, as follows:
• On 9720 systems, the exds_stdiag report could indicate a failed or halted X9700c controller.
• An email alert.
• In the GUI, the logical volumes in the affected capacity block show a warning.
• The amber fault LED on the X9700c controller is flashing.
• The seven-segment display shows an H1, H2, C1, or C2 code. The second digit represents the controller with a problem. For example, H1 indicates a problem with controller 1 (the left controller, as viewed from the back).

Identifying the failed component
IMPORTANT: A replacement X9700cx I/O module could have the wrong version of firmware pre-installed. The X9700cx I/O m
ed for the NIC or HBA Monitored columns, see the sections for ibrix_nic -m -h host -A node2/node-interface and ibrix_hba -m -h host -p World_Wide_Name in the guide for your current release.

Performing the upgrade
The online upgrade is supported only from the IBRIX 6.x to 6.2 release.
IMPORTANT: Complete all steps provided in Table 4 (page 122).
Complete the following steps:
1. To obtain the latest HP IBRIX 6.2.1 pkg-full ISO image, register to download the software on the HP StoreAll Download Drivers and Software web page. Mount the ISO image and copy the entire directory structure to the /local/ibrix directory on the disk running the OS. The following is an example of the mount command:
mount -o loop /local/pkg/ibrix-pkgfull-FS-6.2.374-1.5.6.2.374.x86_64.iso /mnt
2. Change directory to /local/ibrix and then run chmod -R 777 * on the entire directory structure.
3. Run the upgrade script and follow the on-screen directions:
./auto_online_ibrixupgrade
4. Upgrade Linux 9000 clients. See "Upgrading Linux 9000 clients" (page 131).
5. If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade
Complete these steps:
1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requ
eib51-102 ~]# ibrix_fm -u ib51-101
Command succeeded
On the node hosting the active Fusion Manager, register node ib51-101 and assign the correct IP address:
[root@ib51-102 ~]# ibrix_fm -R ib51-101 -I 10.10.51.101
Command succeeded
NOTE: When registering a Fusion Manager, be sure the hostname specified with -R matches the hostname of the server.
The ibrix_fm commands now show that node ib51-101 has the correct IP address and node ib51-102 is hosting the active Fusion Manager:
[root@ib51-102 ~]# ibrix_fm -f
NAME      IP ADDRESS
ib51-101  10.10.51.101
ib51-102  10.10.51.102
[root@ib51-102 ~]# ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)

File system unmount issues
If a file system does not unmount successfully, perform the following steps on all servers:
1. Run the following commands:
chkconfig ibrix_server off
chkconfig ibrix_ndmp off
chkconfig ibrix_fusionmanager off
2. Reboot all servers.
3. Run the following commands to move the services back to the on state. (The commands do not start the services.)
chkconfig ibrix_server on
chkconfig ibrix_ndmp on
chkconfig ibrix_fusionmanager on
4. Unmount the file systems and continue with the upgrade procedure.

Moving the Fusion Manager VIF to bond1
When the 9720 system is installed, the cluster network is moved to bond1. The 6.1 release requires that the Fusion Manager VIF (Agile Cluster VIF) also b
ena .......... 234
Device warnings and precautions .......... 235
G Regulatory compliance notices .......... 237
Regulatory compliance identification numbers .......... 237
Federal Communications Commission notice .......... 237
FCC rating label .......... 237
Class A equipment .......... 237
Class B equipment .......... 237
Declaration of Conformity for products marked with the FCC logo, United States only .......... 238
Modification .......... 238
Cables .......... 238
Canadian notice (Avis Canadien) .......... 238
Class A equipment .......... 238
Class B equipment .......... 238
European Union notice .......... 238
Japanese notices .......... 239
Japanese VCCI-A notice .......... 239
Japanese VCCI-B notice .......... 239
Japanese VCCI marking notice .......... 239
Japanese power cord statement .......... 239
Korean notices .......... 239
Class A equipment .......... 239
Class B equipment .......... 239
Taiwanese notices .......... 240
BSMI Class A notice .......... 240
Taiwan battery recycle statement .......... 240
Turkish recycling notice .......... 240
Vietnamese Information Technology and Communications complia
ence is executed immediately on file serving nodes. For 9000 clients, the preference intention is stored on the Fusion Manager. When IBRIX software services start on a client, the client queries the Fusion Manager for the network interface that has been preferred for it and then begins to use that interface. If the services are already running on 9000 clients when you prefer a network interface, you can force clients to query the Fusion Manager by executing the command ibrix_lwhost -a on the client, or by rebooting the client.

Preferring a network interface for a file serving node or Linux 9000 client
The first command prefers a network interface for a file serving node; the second command prefers a network interface for a client:
ibrix_server -n -h SRCHOST -A DESTHOST/IFNAME
ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME
Execute this command once for each destination host that the file serving node or 9000 client should contact using the specified network interface (IFNAME). For example, to prefer network interface eth3 for traffic from file serving node s1.hp.com to file serving node s2.hp.com:
ibrix_server -n -h s1.hp.com -A s2.hp.com/eth3

Preferring a network interface for a Windows 9000 client
If multiple user network interfaces are configured on the cluster, you will need to select the preferred interface for this client. On the Windows 9000 client GUI, specify the interface on the Tune Host tab, as in the following example.
[Figure: IBRIX Client GUI — Tune Host tab]
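When several destination/interface pairs must be preferred for the same client, the per-destination commands can be scripted. The following is a minimal sketch; the second destination hostname is hypothetical and only illustrates the loop:

# Prefer bond1 for traffic from client12 to each file serving node
for dest in ib50-81.mycompany.com ib50-82.mycompany.com; do
    ibrix_client -n -h client12.mycompany.com -A ${dest}/bond1
done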
enclosures .......... 72
Obtaining server details .......... 75
Monitoring storage and storage components .......... 78
Monitoring storage clusters .......... 79
Monitoring drive enclosures for a storage cluster .......... 80
Monitoring pools for a storage cluster .......... 82
Monitoring storage controllers for a storage cluster .......... 84
Monitoring storage switches in a storage cluster .......... 85
Managing LUNs in a storage cluster .......... 85
Monitoring the status of file serving nodes .......... 86
Monitoring cluster events .......... 87
Viewing events .......... 87
Removing events from the events database .......... 88
Monitoring cluster health .......... 88
Health checks .......... 88
Health check reports .......... 88
Viewing logs .......... 90
Viewing and clearing the Integrated Management Log (IML) .......... 90
Viewing operating statistics for file serving nodes .......... 90
9 Using the Statistics tool .......... 92
Installing and configuring the Statistics tool .......... 92
Installing the Statistics tool .......... 92
Enabling collection and synchronization .......... 92
Upgrading the Statistics tool from IBRIX software 6.0 .......... 93
Using the Historical Reports GUI .......... 93
Generating reports .......... 94
Deleting reports .......... 95
er.
Wait approximately 60 seconds for the failover to complete, and then run the following command on the node that was hosting the passive agile Fusion Manager:
<ibrixhome>/bin/ibrix_fm -i
The command should report that the agile Fusion Manager is now Active on this node.
From the node on which you failed over the active Fusion Manager in step 1, change the status of the Fusion Manager from maintenance to passive:
<ibrixhome>/bin/ibrix_fm -m passive
Verify that the Fusion Manager database (/usr/local/ibrix/db) is intact on both the active and passive Fusion Manager nodes.
Repeat steps 1-4 to return the node originally hosting the active Fusion Manager back to active mode.

Converting the original management console node to a file serving node hosting the agile Fusion Manager
To convert the original management console node (usually node 1) to a file serving node, complete the following steps:
1. Place the agile Fusion Manager on the node into maintenance mode:
ibrix_fm -m nofmfailover
Verify that the Fusion Manager is in maintenance mode:
ibrix_fm -i
For example:
[root@x109s1 ibrix]# ibrix_fm -i
FusionServer: x109s1 (maintenance, quorum not started)
Command succeeded
Verify that the passive Fusion Manager is now the active Fusion Manager. Run the ibrix_fm -i command on the file serving node hosting the passive Fusion Manager (x109s3 in this example). It may take up to two minutes for the passive Fusion Manager
Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website: http://www.hp.com/support
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions

Related information
Using the IBRIX 9720 Storage:
• HP IBRIX 9000 Storage File System User Guide
• HP IBRIX 9000 Storage CLI Reference Guide
• HP IBRIX 9000 Storage Release Notes
• ExDS9100c/9720 Storage System Controller Cache Module Customer Self Repair Instructions
• ExDS9100c/9720 Storage System Controller Battery Customer Self Repair Instructions
• ExDS9100c/9720 Storage System Controller Customer Self Repair Instructions
• 9720 Storage Controller User Guide (describes how to install, administer, and troubleshoot the HP X9700c)
To access these manuals, go to the IBRIX Manuals page: http://www.hp.com/support/IBRIXManuals

Using and maintaining file serving nodes:
• HP ProLiant BL460c Server Blade Maintenance and Service Guide
• HP ProLiant BL460c Server Blade User Guide
To access these manuals, go to the Manuals page (http://www.hp.com/support/manuals) and click bladesystem > BladeSystem Server Blades, and then select HP ProLiant BL460c G7 Server Series or HP ProLiant BL460c G6 Server Series.
After the upgrade, complete the following steps:
1. Run the following command to rediscover physical volumes:
ibrix_pv -a
2. Apply any custom tuning parameters, such as mount options.
3. Remount all file systems:
ibrix_mount -f fsname -m mountpoint
4. Re-enable High Availability if used:
ibrix_server -m
5. Start any Remote Replication, Rebalancer, or data tiering tasks that were stopped before the upgrade.
6. If you are using SMB, set the following parameters to synchronize the SMB software and the Fusion Manager database:
• smb_signing_enabled
• smb_signing_required
• ignore_writethru
Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.1 release:
ibrix_cifsconfig -t -S "smb_signing_enabled=0,smb_signing_required=0"
ibrix_cifsconfig -t -S "ignore_writethru=1"
The SMB signing feature specifies whether clients must support SMB signing to access SMB shares. See the HP IBRIX 9000 Storage File System User Guide for more about this feature. When ignore_writethru is enabled, IBRIX software ignores write-through buffering to improve SMB write performance on some user applications that request it.
7. Mount file systems on Linux 9000 clients.
If the cluster network is configured on bond1, the 6.1 release requires that the Fusion Manager Agile Cluster VIF also be on bond1. To check your system
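For a small cluster, the command portion of these steps can be collected into one sequence. A minimal sketch, not the documented procedure; the file system name ifs1 and mount point /mnt/ifs1 are placeholders:

# Rediscover physical volumes
ibrix_pv -a
# Remount the file system (name and mount point are examples)
ibrix_mount -f ifs1 -m /mnt/ifs1
# Re-enable High Availability
ibrix_server -m
# Restore the default 6.1 SMB settings
ibrix_cifsconfig -t -S "smb_signing_enabled=0,smb_signing_required=0"
ibrix_cifsconfig -t -S "ignore_writethru=1"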
Set the DUMPING status timeout by entering the following command:
ibrix_fm_tune -S -o dumpingStatusTimeout=240
This command is required to delay the failover until the crash kernel is loaded; otherwise, the Fusion Manager will bring down the failed node.

5 Configuring cluster event notification
Cluster events
There are three categories for cluster events:
• Alerts: disruptive events that can result in loss of access to file system data.
• Warnings: potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
• Information: normal events that change the cluster.
The following table lists examples of events included in each category.

Event Type   Trigger Point                                                       Name
ALERT        User fails to log in to the GUI                                     login.failure
ALERT        File system is unmounted                                            filesystem.unmounted
ALERT        File serving node is down/restarted                                 server.status.down
ALERT        File serving node terminated unexpectedly                           server.unreachable
WARN         User migrates a segment using the GUI                               segment.migrated
INFO         User successfully logs in to the GUI                                login.success
INFO         File system is created                                              filesystem.cmd
INFO         File serving node is deleted                                        server.deregistered
INFO         NIC is added using the GUI                                          nic.added
INFO         NIC is removed using the GUI                                        nic.removed
INFO         Physical storage is discovered and added using management console   physicalvolume.added
[Screen shot: the HP SIM systems list showing the discovered ProLiant servers and their Integrated Lights-Out management processors.]
Configuring device Entitlements
Configure the CMS software to enable remote support for IBRIX systems. For more information, see "Using the Remote Support Setting Tab to Update Your Client and CMS Information" and "Adding Individual Managed Systems" in the HP Insight Remote Support Advanced A.05.50 Operations Guide.
Enter the following custom field settings in HP SIM:
• Custom field settings for 9720/9730 Onboard Administrator. The Onboard Administrator (OA) is discovered with OA IP addresses. When the OA is discovered, edit the system properties on the HP Systems Insight Manager. Locate the Entitlement Information section of the Contract and Warranty Information page and update the following:
  ◦ Enter the IBRIX enclosure product number as the Customer-Entered product number.
  ◦ Enter 9000 as the Custom Delivery ID.
  ◦ Select the System Country Code.
  ◦ Enter the appropriate Customer Contact and Site Information details.
• Contract and Warranty Information. Under Entitlement Information, specify the Customer-Entered serial number, Customer-Entered product number, System Country code, and Custom Delivery ID.
[Screen shot: the Contract and Warranty Information page, Entitlement Information section.]
[Screen shot: the vendor storage Summary panel, listing Name/Value/Type entries such as the Storage Cluster, Monitoring Host, Drive Enclosure, Health Status (OK), Pools, and Storage Controllers.]
The Management Console provides a wide range of information about vendor storage, as shown in the following image.
[Screen shot: the vendor storage view, with Servers, Storage Cluster, Storage Switch, and LUNs entries in the lower Navigator tree.]
Drill down into the following components in the lower Navigator tree to obtain additional details:
• Servers: The Servers panel lists the host names for the attached storage.
• Storage Cluster: The Storage Cluster panel provides detailed information about the storage cluster. See "Monitoring storage clusters" (page 79) for more information.
• Storage Switch: The Storage Switch panel provides detailed information about the storage switches. See "Monitoring storage switches in a storage cluster" (page 85) for more information.
• LUNs: The LUNs panel provides information about the LUNs in a storage cluster. See "Managing LUNs in a storage cluster" (page 85) for more information.

Monitoring storage clusters
The Management Console provides detailed information for each storage cluster. Click one of the following sub-nodes displayed under
Japanese laser notice
[The localized Japanese text could not be recovered from the source; it carries the same Class 1 laser product warning as the other laser notices in this appendix.]
Spanish laser notice (translated)
WARNING! This device might contain a laser classified as a Class 1 laser product in accordance with U.S. FDA regulations and IEC 60825-1. The product does not emit hazardous laser radiation. The use of controls, adjustments, or procedures other than those specified here or in the laser product installation guide can result in hazardous radiation exposure. To avoid the risk of exposure to hazardous radiation:
• Do not try to open the module enclosure. There are no user-serviceable components inside.
• Do not perform control operations, adjustments, or procedures on the laser device other than those specified here.
• Allow only HP authorized service agents to repair the unit.

Recycling notices
English recycling notice
Disposal of waste equipment by users in private households in the European Union
This symbol means do not dispose
needed later in this procedure.
1. Upgrade your firmware as described in "Upgrading firmware" (page 136).
2. Start any Remote Replication, Rebalancer, or data tiering tasks that were stopped before the upgrade.
3. If you have an earlier file system version, you might have to make changes for snapshots and data retention, as described in the following list:
• Snapshots: Files used for snapshots must either be created on IBRIX software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).
• Data retention: Files used for data retention (including WORM and auto-commit) must be created on IBRIX software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).
4. Review the file /etc/hosts on every IBRIX node (file serving nodes and management nodes) to ensure the hosts file contains two lines similar to the following:
127.0.0.1 hostname localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
In this instance, hostname is the name of the IBRIX node, as returned by the hostname command.
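A quick way to confirm that both entries are present on a node is to grep for them. This is an informal check, not part of the documented procedure:

# Verify the IPv4 loopback line includes this node's hostname
grep "^127.0.0.1" /etc/hosts | grep -q "$(hostname)" && echo "IPv4 entry OK"
# Verify the IPv6 loopback line is present
grep -q "^::1.*localhost6" /etc/hosts && echo "IPv6 entry OK"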
The restore operation overwrites the existing file, if it is present in the same destination directory, with the temporary file. When the hard quota limit for the directory tree has been exceeded, NDMP cannot create a temporary file and the restore operation fails.

Configuring NDMP parameters on the cluster
Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster. To configure the parameters on the GUI, select Cluster Configuration from the Navigator, and then select NDMP Backup. The NDMP Configuration Summary shows the default values for the parameters. Click Modify to configure the parameters for your cluster on the Configure NDMP dialog box. See the online help for a description of each field.
[Screen shot: the Configure NDMP dialog box with the default values: Enable NDMP Sessions = Yes, Minimum Port Number = 1025, Maximum Port Number = 65535, Listener Port Number = 10000, Username = ndmp, Password = ndmp, Log Level = 0, TCP Window Size (Bytes) = 163840, Concurrent Sessions = 128, and a table of DMA IP Addresses.]
To configure NDMP parameters from the CLI, use the following command:
ibrix_ndmpconfig -c [-d IP1,IP2,IP3,...] [-u USERNAME] [-p PASSWORD] [-m MINPORT] [-e {0=disable,1=enable}] [-x MAXPORT] [-v {0-10}] [-n LISTENPORT] [-w BYTES] [-z NUMSESSIONS]

NDMP process management
Normally
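As an illustration of the CLI form above, the following sketch sets the documented defaults explicitly and registers two DMA addresses. The IP addresses and credentials are placeholders; substitute values appropriate for your cluster:

# Example only: DMA addresses and credentials are placeholders
ibrix_ndmpconfig -c -d 192.0.2.10,192.0.2.11 -u ndmp -p ndmp \
  -m 1025 -x 65535 -n 10000 -v 0 -w 163840 -z 128 -e 1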
The choice of New Owner is restricted to servers that can see the same storage.
[Screen shot: the segment migration dialog box, with the New Owner selection list (a required value).]
The Summary dialog box shows the segment migration you specified. Click Back to make any changes, or click Finish to complete the operation.
To migrate ownership of segments from the CLI, use the following commands.
Migrate ownership of specific segments:
ibrix_fs -m -f FSNAME -s LVLIST -h HOSTNAME [-M] [-F] [-N]
To force the migration, include -M. To skip the source host update during the migration, include -F. To skip server health checks, include -N.
The following command migrates ownership of segments ilv2 and ilv3 in file system ifs1 to server2:
ibrix_fs -m -f ifs1 -s ilv2,ilv3 -h server2
Migrate ownership of all segments owned by specific servers:
ibrix_fs -m -f FSNAME -H "HOSTNAME1,HOSTNAME2" [-M] [-F] [-N]
For example, to migrate ownership of all segments in file system ifs1 from server1 to server2:
ibrix_fs -m -f ifs1 -H server1,server2

Evacuating segments and removing storage from the cluster
Before removing storage used for an IBRIX software file system, you will need to evacuate the segments (logical volumes) storing file system data. This procedure moves the data to other segments in the file system and is transparent to users or applications accessing the file system. When evacuating a segment, you should be aware of the following restrictions:
• While the evacuation
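To confirm the result of a migration, the segment-to-owner mapping can be listed again. A hedged sketch, assuming ibrix_fs accepts an informational -i option (not shown in this section) that reports segment owners for a file system:

# List segment information for ifs1; ilv2 and ilv3 should now report server2 as owner
ibrix_fs -i -f ifs1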
Return the backup management consoles to passive mode:
# ibrix_fm -m passive
6. Place the active Fusion Manager into nofmfailover mode to force it to fail over. It can take up to a minute for a passive Fusion Manager to take the active role:
# ibrix_fm -m nofmfailover
7. Unregister the original active Fusion Manager from the new active Fusion Manager:
# ibrix_fm -u <original active management console name>
8. Reregister that Fusion Manager with the new values, and then move it to passive mode:
# ibrix_fm -R <agileFM name> -I <local cluster network IP>
# ibrix_fm -m passive
9. Verify that all management consoles are registered properly on bond1 (the local cluster network):
# ibrix_fm -f
You should see all registered management consoles and their new local cluster IP addresses. If an entry is incorrect, unregister that Fusion Manager and re-register it.
10. Reboot the file serving nodes.
After you have completed the procedure, if the Fusion Manager is not failing over, or the /usr/local/ibrix/log/Iad.log file reports errors communicating to port 1234 or 9009, contact HP Support for further assistance.

Upgrading the IBRIX software to the 5.6 release
This section describes how to perform the upgrade to the latest IBRIX software release. The management console and all file serving nodes must be upgraded to the new release at the same time. Note the following:
• Upgrades to the IBRIX software 5.6 release are supported for systems currently running IBRIX software
If your cluster uses single-port HBAs, turn on monitoring for all of the ports to set up automated failover in the event of HBA failure. Use the following command:
ibrix_hba -m -h HOSTNAME -p PORT
For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com:
ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc
To turn off HBA monitoring for an HBA port, include the -U option:
ibrix_hba -m -U -h HOSTNAME -p PORT

Deleting standby port pairings
Deleting port pairing information from the configuration database does not remove the standby pairing of the ports. The standby pairing is either built in by the HBA vendor or implemented by software.
To delete standby-paired HBA ports from the configuration database, enter the following command:
ibrix_hba -b -U -P WWPN1:WWPN2 -h HOSTNAME
For example, to delete the pairing of ports 20.00.12.34.56.78.9a.bc and 42.00.12.34.56.78.9a.bc on node s1.hp.com:
ibrix_hba -b -U -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com

Deleting HBAs from the configuration database
Before switching an HBA to a different machine, delete the HBA from the configuration database:
ibrix_hba -d -h HOSTNAME -w WWNN

Displaying HBA information
Use the following command to view information about the HBAs in the cluster. To view information for all hosts, omit the -h HOSTLIST argument:
ibrix_hba -l [-h HOSTLIST]
The output includes the following
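When several single-port HBAs must be monitored on one node, the monitoring command can simply be repeated per port. A minimal sketch; the node name and WWPNs are placeholders:

# Enable monitoring for each single-port HBA on s1.hp.com
for port in 20.00.12.34.56.78.9a.bc 42.00.12.34.56.78.9a.bc; do
    ibrix_hba -m -h s1.hp.com -p $port
done
# Confirm the monitoring state
ibrix_hba -l -h s1.hp.com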
Complete the following steps:
1. If the restored node was previously configured to perform domain authorization for SMB services, run the following command:
ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name -P AUTH_PROXY_PASSWORD -h HOSTNAME
For example:
ibrix_auth -n ibq1.mycompany.com -A Administrator@ibq1.mycompany.com -P password -h ib5-9
If the command fails, check the following:
• Verify that DNS services are running on the node where you ran the ibrix_auth command.
• Verify that you entered a valid domain name with the full path for the -n and -A options.
2. Rejoin the likewise database to the Active Directory domain:
/opt/likewise/bin/domainjoin-cli join <domain_name> Administrator
3. Push the original share information from the management console database to the restored node. On the node hosting the active management console, first create a temporary SMB share:
ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH
Then delete the temporary SMB share:
ibrix_cifs -d -s SHARENAME
4. Run the following command to verify that the original share information is on the restored node:
ibrix_cifs -i -h SERVERNAME

Restore HTTP services
Complete the following steps:
1. Take the appropriate actions:
• If Active Directory authentication is used, join the restored node to the AD domain manually.
• If Local user authentication is used, create a temporary local user on the GUI and apply
for information about starting the processes.
The statistics home page provides three views, or formats, for listing the reports. The following is the Simple View, which sorts the reports according to type (hourly, daily, weekly, detail).
[Screen shot: the X9000 Management Console Historical Reports page in Simple View, listing hourly reports by hour and daily and weekly reports by date range.]
The Time View lists the reports in chronological order, and the Table View lists the reports by cluster or server. Click a report to view it.
[Screen shot: a sample report; the graph covers the period from 2010-03-22 01:00 to 2010-03-23 01:00 and relates disk throughput, wait time, and network activity using normalized units.]
from each file serving node to the Fusion Manager, which controls processing and report generation.

Installing and configuring the Statistics tool
The Statistics tool has two main processes:
• Manager process: This process runs on the active Fusion Manager. It collects and aggregates cluster-wide statistics from file serving nodes running the Agent process, and also collects local statistics. The Manager generates reports based on the aggregated statistics and collects reports from all file serving nodes. The Manager also controls starting and stopping the Agent process.
• Agent process: This process runs on the file serving nodes. It collects and aggregates statistics on the local system and generates reports from those statistics.
IMPORTANT: The Statistics tool uses remote file copy (rsync) to move statistics data from the file serving nodes to the Fusion Manager for processing, report generation, and display. SSH keys are configured automatically across all the file serving nodes to the active Fusion Manager.

Installing the Statistics tool
The Statistics tool is installed automatically when the IBRIX software is installed on the file serving nodes. To install or reinstall the Statistics tool manually, use the following command:
ibrixinit -tt
Note the following:
• Installation logs are located at /tmp/stats-install.log.
• By default, installing the Statistics tool does not start the Statistics tool processes. See "Controlling Statistics tool processes" (page 97).
The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
4. Propagate a new segment map for the cluster:
<ibrixhome>/bin/ibrix_dbck -I -f FSNAME
5. Verify the health of the cluster:
<ibrixhome>/bin/ibrix_health -l
The output should specify Passed / on.

Agile offline upgrade
This upgrade procedure is appropriate for major upgrades. Perform the agile offline upgrade in the following order:
• File serving node hosting the active management console
• File serving node hosting the passive management console
• Remaining file serving nodes
NOTE: To determine which node is hosting the active management console, run the following command:
<ibrixhome>/bin/ibrix_fm -i

Preparing for the upgrade
1. On the active management console node, disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
2. Verify that automated failover is off. In the output, the HA column should display "off":
<ibrixhome>/bin/ibrix_server -l
3. On the active management console node, stop the NFS and SMB services on all file serving nodes to prevent NFS and SMB clients from timing out:
<ibrixhome>/bin/ibrix_server -s -t cifs -c stop
<ibrixhome>/bin/ibrix_server -s -t nfs -c stop
Verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill to stop any likewise processes that are still running.
Following is some sample output:
Fusion Manager version: 5.5.XXX
HOST NAME   FILE SYSTEM   IAD/IAS   IAD/FS   OS   KERNEL VERSION   ARCH
ib50-86 5.5.205 9000 5.5 5.5.XXX 5.5.XXX GNU/Linux 2.6.18-128.el5 x86_64
ib50-87 5.5.205 9000 5.5 5.5.XXX 5.5.XXX GNU/Linux 2.6.18-128.el5 x86_64
You can now upgrade any remaining file serving nodes.

Upgrading remaining file serving nodes
Complete the following steps on each file serving node:
1. Manually fail over the file serving node:
<ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME
The node will be rebooted automatically.
2. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
3. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
4. Change to the installer directory, if necessary, and execute the following command:
./ibrixupgrade -f
The upgrade automatically stops services and restarts them when the process is complete.
5. When the upgrade is complete, verify that the IBRIX software services are running on the node:
/etc/init.d/ibrix_server status
The output will be similar to the following. If the IAD service is not running
On the file serving nodes, execute the following command:
ibrix_server -m -h SERVERNAME

Example configuration
This example uses two nodes, ib50-81 and ib50-82. These nodes are backups for each other, forming a backup pair.
[root@ib50-80 ~]# ibrix_server -l
Segment Servers
SERVER_NAME  BACKUP   STATE  HA  ID                                    GROUP
ib50-81      ib50-82  Up     on  132cf61a-d25b-40f8-890e-e97363ae0d0b  servers
ib50-82      ib50-81  Up     on  70258451-4455-484d-bf80-75c94d17121d  servers
All VIFs on ib50-81 have backup (standby) VIFs on ib50-82. Similarly, all VIFs on ib50-82 have backup (standby) VIFs on ib50-81. NFS, SMB, FTP, and HTTP clients can connect to bond0:1 on either host. If necessary, the selected server will fail over to bond0:2 on the opposite host. 9000 clients could connect to bond1 on either host, as these clients do not support or require NIC failover.
The following sample output shows only the relevant fields:
[root@ib50-80 ~]# ibrix_nic -l
HOST     IFNAME   TYPE     STATE       IP_ADDRESS    MAC_ADDRESS     BACKUP_HOST  BACKUP_IF  ROUTE         VLAN_TAG  LINKMON
ib50-81  bond0    Cluster  Up, LinkUp  172.16.0.81   00:00:00:00:11                          172.16.0.254            No
ib50-81  bond0:1  User     Up, LinkUp  172.16.0.181  00:00:00:00:11  ib50-82      bond0:2                            No
ib50-81  bond0:2  User                               00:00:00:00:11                                                  No
ib50-82  bond0    Cluster  Up, LinkUp  172.16.0.82   00:00:00:00:12                          172.16.0.254            No
ib50-82  bond0:1  User     Up, LinkUp  172.16.0.182  00:00:00:00:12  ib50-81      bond0:2                            No
ib50-82  bond0:2  User                               00:00:00:00:12  ib50-81
(Active FM)       bond0:0  Cluster  Up, LinkUp
Troubleshooting and maintaining the HP BladeSystem c7000 Enclosure:
• BladeSystem c7000 Enclosure Maintenance and Service Guide. This document should only be used by persons qualified in servicing of computer equipment.
• BladeSystem c-Class Enclosure Troubleshooting Guide
To access these manuals, go to the Manuals page (http://www.hp.com/support/manuals) and click bladesystem > HP BladeSystem c-Class Enclosures > HP BladeSystem c7000 Enclosures.

Installing and maintaining the HP 3Gb SAS BL Switch:
• HP 3Gb SAS BL Switch Installation Instructions
• HP 3Gb SAS BL Switch Customer Self Repair Instructions
To access these manuals, go to the Manuals page (http://www.hp.com/support/manuals) and click bladesystem > BladeSystem Interconnects > HP BladeSystem SAS Interconnects.

Maintaining the X9700cx (also known as the HP 600 Modular Disk System):
• HP 600 Modular Disk System Maintenance and Service Guide. Describes removal and replacement procedures. This document should be used only by persons qualified in servicing of computer equipment.
To access this manual, go to the Manuals page (http://www.hp.com/support/manuals) and click storage > Disk Storage Systems > HP 600 Modular Disk System.

HP websites
For additional information, see the following HP websites:
• http://www.hp.com/go/X9000
• http://www.hp.com
• http://www.hp.com/go/storage
• http://www.hp.com/support/manuals
• http://www.hp.c
during the previous IBRIX installation on this node, the installer is in /root/ibrix.
2. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
3. Change to the installer directory, if necessary, and execute the following command:
./ibrixupgrade -f
4. Verify that the management console started successfully:
/etc/init.d/ibrix_fusionmanager status
The status command confirms whether the correct services are running. Output is similar to the following:
Fusion Manager Daemon (pid 18748) running
5. Check /usr/local/ibrix/log/fusionserver.log for errors.

Upgrading the file serving nodes
After the management console has been upgraded, complete the following steps on each file serving node:
1. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.
2. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
3. Change to the installer directory, if necessary, and
following the upgrade. Linux 9000 clients must be upgraded to the 6.x release.
NOTE: If you are upgrading from an IBRIX 5.x release, any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Before upgrading to 6.1, download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.

Upgrading 9720 chassis firmware
Before upgrading 9720 systems to IBRIX software 6.1, the 9720 chassis firmware must be at version 4.4.0.13. If the firmware is not at this level, upgrade it before proceeding with the IBRIX upgrade. To upgrade the firmware, complete the following steps:
1. Go to http://www.hp.com/go/StoreAll.
2. On the HP IBRIX 9000 Storage page, select HP Support & Drivers from the Support section.
3. On the Business Support Center, select Download Drivers and Software, and then select HP 9720 Base Rack > Red Hat Enterprise Linux 5 Server (x86-64).
4. Click HP 9720 Storage Chassis Firmware version 4.0.0.13.
5. Download the firmware and install it as described in the HP 9720 Network Storage System 4.0.0.13 Release Notes.

Online upgrades for IBRIX software 6.x to 6.1
Online upgrades are supported only from the IBRIX 6.x release. Upgrades from earlier IBRIX releases must use the appropriate offline upgrade procedure. When performing an online upgrade, note the following:
• File s
again. Note that rm is followed by a space and then two underscores, and rpm is followed by a space and then two dashes:
cd /var/lib/rpm
rm __db*
rpm --rebuilddb
On the management console, ibrixupgrade may also hang if the NFS mount points are stale. In this case, clean up the mount points, reboot the management console, and run the upgrade procedure again.

B IBRIX 9730 component and cabling diagrams
Back view of the main rack
Two IBRIX 9730 CXs are located below the SAS switches; the remaining IBRIX 9730 CXs are located above the SAS switches. The IBRIX 9730 CXs are numbered starting from the bottom. For example, IBRIX 9730 CX 1 is located at the bottom of the rack, and IBRIX 9730 CX 2 is located directly above IBRIX 9730 CX 1.
[Figure: back view of the main rack.]
1. 9730 CX 6
2. 9730 CX 5
3. 9730 CX 4
4. 9730 CX 3
5. c7000
6. 9730 CX 2
7. 9730 CX 1
8. Onboard Administrator 2
9. 6G SAS switch 4
10. Flex-10 VC module 2
11. TFT7600
12. 1U support shelf

Back view of the expansion rack
[Figure: back view of the expansion rack.]
1. 9730 CX 8
2. 9730 CX 7

IBRIX 9730 CX I/O modules and SAS port connectors
[Figure: IBRIX 9730 CX I/O modules and SAS port connectors.]
1. Secondary I/O module (Drawer 2)
3. SAS port 1 connector
5. SAS port 2 connector
7. SAS port 1 connector
9. Primary I/O module (Drawer 1)
11. SAS port 2 connector
plugged into port 1. Wait for the controller to boot, then check the seven-segment display:
• If the seven-segment display shows "on", then the fault has been corrected and the system has returned to normal.
• If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
At this stage, you have identified that the problem is with an X9700cx I/O module. Determine whether the fault lies with the top or bottom modules. For example, if the seven-segment display shows 02, then the fault may lie with one of the primary (top) I/O modules.
6. Unmount all file systems using the GUI. For more information, see the HP IBRIX 9000 Storage File System User Guide.
7. Examine the I/O module LEDs. If an I/O module has an amber LED:
a. Replace the I/O module as follows:
   a. Detach the SAS cable connecting the module to the X9700c controller.
   b. Ensure that the disk drawer is fully pushed in and locked.
   c. Remove the I/O module.
   d. Replace with a new I/O module (it will not engage with the disk drawer unless the drawer is fully pushed in).
   e. Reattach the SAS cable. Ensure it is attached to the IN port (the bottom port).
b. Re-seat controller 1, as described below in the section "Re-seating an X9700c controller" (page 157).
c. Wait for the controller to boot, and then check the seven-segment display:
• If the seven-segment display shows "on"
In this example, the node that is hosting the active Fusion Manager has an IP address on the user network (15.226.51.101) instead of the cluster network:
[root@ib51-101 ibrix]# ibrix_fm -i
FusionServer: ib51-101 (active, quorum is running)
[root@ib51-101 ibrix]# ibrix_fm -l
NAME       IP ADDRESS
ib51-101   15.226.51.101
ib51-102   10.10.51.102
1. If the node is hosting the active Fusion Manager, as in this example, stop the Fusion Manager on that node:
[root@ib51-101 ibrix]# /etc/init.d/ibrix_fusionmanager stop
Stopping Fusion Manager Daemon [ OK ]
[root@ib51-101 ibrix]#
2. On the node now hosting the active Fusion Manager (ib51-102 in the example), unregister node ib51-101:
[root@ib51-102 ibrix]# ibrix_fm -u ib51-101
Command succeeded
3. On the node hosting the active Fusion Manager, register node ib51-101 and assign the correct IP address:
[root@ib51-102 ibrix]# ibrix_fm -R ib51-101 -I 10.10.51.101
Command succeeded
NOTE: When registering a Fusion Manager, be sure the hostname specified with -R matches the hostname of the server.
The ibrix_fm commands now show that node ib51-101 has the correct IP address and node ib51-102 is hosting the active Fusion Manager:
[root@ib51-102 ibrix]# ibrix_fm -f
NAME       IP ADDRESS
ib51-101   10.10.51.101
ib51-102   10.10.51.102
[root@ib51-102 ibrix]# ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)

File system unmount issues
If a file system does not unmount successfully
[Screen shot: the Virtual Connect SNMP trap configuration page, showing VC-Enet Traps, VC-FC Traps, and VCM Traps, with port status selections and trap severity levels (normal, warning, major, minor, critical, and other).]

Configuring Phone Home settings
To configure Phone Home on the GUI, select Cluster Configuration in the upper Navigator, and then select Phone Home in the lower Navigator. The Phone Home Setup panel shows the current configuration.
[Screen shot: the Phone Home Setup panel on the 9000 Management Console, showing fields such as Central Management Server IP, Read Community String (public), System Name, System Location, System Contact, Software Entitlement ID, and Country Code.]
Click Enable to configure the settings on the Phone Home Settings dialog box.
the servers in the cluster. If they do not have the same UID number, create the account and change the UIDs as needed to make them the same on all the servers. Similarly, ensure that the ibrix-user local user group exists and has the same GID number on all servers. Enter the following commands on each node:
grep ibrix /etc/passwd
grep ibrix-user /etc/group
Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the health of each server, either by using the dashboard on the GUI or by entering the ibrix_health -S -i -h nodeX command for each node in the cluster. At the top of the output, look for PASSED.

Upgrading 9720 chassis firmware
Before upgrading 9720 systems to IBRIX software 6.2, the 9720 chassis firmware must be at version 4.4.0.13. If the firmware is not at this level, upgrade it before proceeding with the IBRIX upgrade. To upgrade the firmware, complete the following steps:
1. Go to http://www.hp.com/go/StoreAll.
2. On the HP IBRIX 9000 Storage page, select HP Support & Drivers from the Support section.
3. On the Business Support Center, select Download Drivers and Software, and then select HP 9720 Base Rack > Red Hat Enterprise Linux 5 Server (x86-64).
4. Click HP 9720 Storage Chassis Firmware version 4.0.0.13.
5. Download the firmware and install it as described in the HP 9720 Network Storage System 4.0.0.13 Release Notes.

Online upgrades for IBRIX
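For the UID and GID consistency check above, the per-node grep output can be compared in one pass with a loop such as the following. An informal sketch; the node names are placeholders and password-less ssh between nodes is assumed:

# All nodes should print identical ibrix UID and ibrix-user GID lines
for node in node1 node2 node3; do
    echo "== $node =="
    ssh $node "grep '^ibrix:' /etc/passwd; grep '^ibrix-user:' /etc/group"
done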
the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).
Review the file /etc/hosts on every IBRIX node (file serving nodes and management nodes) to ensure the hosts file contains two lines similar to the following:
127.0.0.1 hostname localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
In this instance, hostname is the name of the IBRIX node, as returned by the hostname command. If these two lines do not exist, or they do not contain all of the information, open the /etc/hosts file with a text editor such as vi and modify the file so it contains the two lines matching the format provided in this step. For example, if the hostname command returns ss01, then the lines should appear as follows:
127.0.0.1 ss01 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
After the upgrade, the Fusion Manager on each server in the IBRIX cluster must be restarted manually:
1. Restart all passive Fusion Managers:
a. Determine if the Fusion Manager is in passive mode by entering the following command:
ibrix_fm -i
b. If the command returns "passive" (regardless of whether failover is disabled or not), enter the following command to restart Fusion Manager:
the number of events to display; the default is 100:
ibrix_event -l -n 3
EVENT ID  TIMESTAMP        LEVEL  TEXT
1983      Feb 14 15:08:15  INFO   File system ifs1 created
1982      Feb 14 15:08:15  INFO   Nic eth0[99.224.24.03] on host ix24-03.ad.hp.com up
1981      Feb 14 15:08:15  INFO   Ibrix kernel file system is up on ix24-03.ad.hp.com
The ibrix_event -i command displays events in long format, including the complete event description:
ibrix_event -i -n 2
Event:
EVENT ID: 1981
TIMESTAMP: Feb 14 15:08:15
LEVEL: INFO
TEXT: Ibrix kernel file system is up on ix24-03.ad.hp.com
FILESYSTEM:
HOST: ix24-03.ad.hp.com
USER NAME:
OPERATION:
SEGMENT NUMBER:
PV NUMBER:
NIC:
HBA:
RELATED EVENT: 0

Event:
EVENT ID: 1980
TIMESTAMP: Feb 14 15:08:14
LEVEL: ALERT
TEXT: category: CHASSIS, name: 9730_1, overallStatus: DEGRADED, component: OAmodule, uuid: 09USE038187WOAModule2, status: MISSING. Message: The Onboard Administrator module is missing or has failed. Diagnostic message: Reseat the Onboard Administrator module. If reseating the module does not resolve the issue, replace the Onboard Administrator module. eventId: 000D0004, location: OAmodule in chassis S/N USE123456W, level: ALERT
FILESYSTEM:
HOST: ix24-03.ad.hp.com
USER NAME:
OPERATION:
SEGMENT NUMBER:
PV NUMBER:
NIC:
HBA:
RELATED EVENT: 0
The ibrix_event -l and -i commands can include options that act as filters to return records associated with a specif
the server.registered and filesystem.created notifications for admin1@hp.com and admin2@hp.com:
ibrix_event -d -e server.registered,filesystem.created -m admin1@hp.com,admin2@hp.com

Testing email addresses
To test an email address with a test message, notifications must be turned on. If the address is valid, the command signals success and sends an email containing the settings to the recipient. If the address is not valid, the command returns an "address failed" exception:
ibrix_event -u -n EMAILADDRESS

Viewing email notification settings
The ibrix_event -L command provides comprehensive information about email settings and configured notifications:
ibrix_event -L
Email Notification : Enabled
SMTP Server        : mail.hp.com
From               : FM@hp.com
Reply To           : MIS@hp.com
EVENT                LEVEL  TYPE   DESTINATION
asyncrep.completed   ALERT  EMAIL  admin@hp.com
asyncrep.failed      ALERT  EMAIL  admin@hp.com

Setting up SNMP notifications
9000 software supports SNMP (Simple Network Management Protocol) V1, V2, and V3. Whereas SNMPV2 security was enforced by use of community password strings, V3 introduces the USM and VACM. Discussion of these models is beyond the scope of this document; refer to RFCs 3414 and 3415 at http://www.ietf.org for more information.
Note the following:
• In the SNMPV3 environment, every message contains a user name. The function of the USM is to authenticate users and ensure message
If there is no full backup:
1. Disable Express Query for the file system by entering the following command:
ibrix_fs -T -D -f <FSNAME>
2. Delete the current database for the file system by entering the following command:
rm -rf <FS_MOUNTPOINT>/.archiving/database
3. Enable Express Query for the file system by entering the following command:
ibrix_fs -T -E -f <FSNAME>
NOTE: The moment Express Query is enabled, database repopulation starts for the file system specified by <FSNAME>.
4. If there are any backups of the custom metadata made with the MDExport tool, re-import them with MDImport, as described in the CLI Reference Guide.
NOTE: If no such backup exists, the custom metadata must be manually created again.
5. Wait for the resynchronizer to complete by entering the following command:
ibrix_archiving -l
Repeat this command until it displays the OK status for the file system.
If none of the above worked, contact HP.

16 Recovering the 9720/9730 Storage
Use these instructions if the system fails and must be recovered, or to add or replace a server blade.
CAUTION: The Quick Restore DVD restores the file serving node to its original factory state. This is a destructive process that completely erases all of the data on local hard drives.

Obtaining the latest IBRIX software release
To obtain the latest HP IBRIX 6.2.1 ISO image, register to download
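Collected into one place, the Express Query rebuild cycle above looks like the following. A condensed sketch only; myFS and its mount point are placeholders, and repopulation begins as soon as Express Query is re-enabled:

# Disable Express Query, remove the stale database, re-enable, then poll
ibrix_fs -T -D -f myFS
rm -rf /mnt/myFS/.archiving/database   # mount point is an example
ibrix_fs -T -E -f myFS                 # repopulation starts immediately
ibrix_archiving -l                     # repeat until the file system shows OK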
switches have internal IPs in the range 169.x.x.x, which cannot be reached from HP SIM. These switches will not be monitored; however, other OA components are monitored.

Discovered device is reported as unknown on CMS
Run the following commands on the file serving node to determine whether the Insight Remote Support services are running:
service snmpd status
service hpsmhd status
service hp-snmp-agents status
If the services are not running, start them:
service snmpd start
service hpsmhd start
service hp-snmp-agents start

Alerts are not reaching the CMS
If nodes are configured and the system is discovered properly, but alerts are not reaching the CMS, verify that a trapif entry exists in the agent configuration file on file serving nodes.

Device Entitlement tab does not show GREEN
If the Entitlement tab does not show GREEN, verify the Customer-Entered serial number and part number for the device.

SIM Discovery
On SIM discovery, use the option Discover a Group of Systems for any device discovery.

3 Configuring virtual interfaces for client access
IBRIX software uses a cluster network interface to carry Fusion Manager traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. To provide failover support for the Fusion Manager, a virtual interface is created for the cluster network interface. Although the cluster network interface can
this section.
□ Verify that the local partition contains at least 4 GB for the upgrade by using the following command:
df -kh /local
□ For 9720 systems, enable password-less access among the cluster nodes before starting the upgrade.
□ The 6.2 release requires that nodes hosting the agile Fusion Manager be registered on the cluster network. Run the following command to verify that nodes hosting the agile Fusion Manager have IP addresses on the cluster network:
ibrix_fm -l
If a node is configured on the user network, see "Node is not registered with the cluster network" (page 133) for a workaround.
NOTE: The Fusion Manager and all file serving nodes must be upgraded to the new release at the same time. Do not change the active/passive Fusion Manager configuration during the upgrade.
□ Modify the crashkernel parameter on all nodes so that it is set to 256 MB, by modifying the default boot entry in the /etc/grub.conf file, as shown in the following example (a verification command is sketched after this checklist):
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/vg1/lv1 crashkernel=256M@16M
IMPORTANT:
• The /etc/grub.conf might contain multiple instances of the crashkernel parameter. Make sure you modify each instance that appears in the file.
• Each server must be rebooted for this change to take effect.
□ If your cluster includes G6 servers, check the iLO2 firmware version. This issue does not affect G7 servers. The firmware must be at version 2.05 for HA to
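Before rebooting, it is worth confirming that every boot entry in /etc/grub.conf was updated. An informal check, not part of the documented checklist:

# Every kernel line should now carry crashkernel=256M@16M
grep crashkernel /etc/grub.conf
# Count kernel lines still missing the new value (should print 0)
grep "^\s*kernel" /etc/grub.conf | grep -vc "crashkernel=256M@16M"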
archive file depends on the size of the logs present on individual nodes in the cluster.
NOTE: You may later be asked to email this final zip file to HP Support. Be aware that the final zip file is not the same as the zip file that you receive in your email.

Configuring Collect
You can configure data collection to occur automatically upon a system crash. This collection will include additional crash digester output. The archive filename of the system-crash-triggered collection will be in the format <timestamp>_crash_<crashedNodeName>.zip.
1. To enable or disable an automatic collection of data after a system crash, and to configure the number of data sets to be retained:
a. Select Cluster Configuration, and then select Ibrix Collect.
b. Click Modify, and the following dialog box will appear:
[Screen shot: the Ibrix Collect configuration dialog box, with General Settings (Enable automatic data collection, Number of data sets to be retained) and Email Settings (Enable sending cluster configuration by email, SMTP Server).]
c. Under General Settings, enable or disable automatic collection by checking or unchecking the appropriate box.
d. Enter the number of data sets to be retained in the cluster in the text box.
2. To enable or disable automatic data collection using the CLI, use the following command:
ibrix_collect -C <Yes|No>
To set the number of data sets to be retained in the cluster using the CLI, use the following
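For example, enabling automatic crash collection from the CLI looks like the following. A sketch; only the -C option appears in this section, so the companion retention option is not shown:

# Enable automatic collection of crash data sets
ibrix_collect -C Yes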
show a constant value. If you had previously run a firmware flash utility, this can take up to 25 minutes. If the value is "on", the controller is operating normally. Otherwise, see "Identifying the failed component" (page 154) for more information.

Viewing software version numbers
To view version information for a list of hosts, use the following command:
ibrix_version -l [-h HOSTLIST]
For each host, the output includes:
• Version number of the installed file system
• Version numbers of the IAD and File System module
• Operating system type and OS kernel version
• Architecture
The -S option shows this information for all file serving nodes. The -C option shows the information for all 9000 clients.
The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.

Troubleshooting specific issues
Software services
Cannot start services on a file serving node or Linux 9000 client
SELinux might be enabled. To determine the current state of SELinux, use the getenforce command. If it returns "enforcing", disable SELinux using either of these commands:
setenforce Permissive
setenforce 0
To permanently disable SELinux, edit its configuration file, /etc/selinux/config.
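The permanent change can also be made non-interactively. A minimal sketch of the standard RHEL approach; the change takes effect at the next reboot:

# Set SELINUX=disabled in /etc/selinux/config (effective after reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config   # confirm the change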
ibrix 2323332 0 (unused)
# lsmod | grep ipfs
ipfs1 102592 0 (unused)
If either grep command returns empty, contact HP Support.
7. From the management console, verify that the new version of IBRIX software FS/IAS is installed on the file serving node:
<ibrixhome>/bin/ibrix_version -l -S
8. If the upgrade was successful, fail back the file serving node:
<ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME
9. Repeat steps 1 through 8 for each file serving node in the cluster. After all file serving nodes have been upgraded and failed back, complete the upgrade.

Completing the upgrade
1. From the management console, turn automated failover back on:
<ibrixhome>/bin/ibrix_server -m
2. Confirm that automated failover is enabled:
<ibrixhome>/bin/ibrix_server -l
In the output, HA displays "on".
3. Verify that all version indicators match for file serving nodes. Run the following command from the management console:
<ibrixhome>/bin/ibrix_version -l
If there is a version mismatch, run the ./ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
4. Propagate a new segment map for the cluster:
<ibrixhome>/bin/ibrix_dbck -I -f FSNAME
5. Verify the health of the cluster:
<ibrixhome>/bin/ibrix_health -l
The output should specify Passed / on.

Standard offline upgrade
/usr/local/ibrix/bin/verify_client_update <kernel_update_version>
The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp:
# /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp
Kernel update 2.6.9-89.35.1.ELsmp is compatible
If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The 9000 client software is then automatically updated with the new kernel, and 9000 client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client.
NOTE: To use the verify_client_update command, the 9000 client software must be installed.

Upgrading Windows 9000 clients
Complete the following steps on each client:
1. Remove the old Windows 9000 client software using the Add or Remove Programs utility in the Control Panel.
2. Copy the Windows 9000 client MSI file for the upgrade to the machine.
3. Launch the Windows Installer and follow the instructions to complete the upgrade.
4. Register the Windows 9000 client again with the cluster, and check the option to Start Service after Registration.
5. Check Administrative Tools > Services to verify that the 9000 client service is started.
6. Launch the Windows 9000 client. On the Active Directory Settings tab, click Update to retrieve the current Active Directory settings.
7. Mount file systems using the IBRIX Windows client GUI.
NOTE: If you are using Remote Desktop to perform an upgrade, you
a specific file system, server, alert level, and start or end time. See the HP IBRIX Network Storage System Reference Guide for more information.

Removing events from the events database table
Use the ibrix_event -p command to remove events from the events table, starting with the oldest events. The default is to remove the oldest seven days of events. To change the number of days, include the -o DAYS_COUNT option:
ibrix_event -p [-o DAYS_COUNT]

Monitoring cluster health
To monitor the functional health of file serving nodes and 9000 clients, execute the ibrix_health command. This command checks host performance in several functional areas and provides either a summary or a detailed report of the results.

Health checks
The ibrix_health command runs these health checks on file serving nodes:
• Pings remote file serving nodes that share a network with the test hosts. Remote servers that are pingable might not be connected to a test host because of a Linux or IBRIX software issue. Remote servers that are not pingable might be down or have a network problem.
• If test hosts are assigned to be network interface monitors, pings their monitored interfaces to assess the health of the connection. (For information on network interface monitoring, see "Setting network interface options in the configuration database" (page 113).)
• Determines whether specified hosts can read their physical volumes.
The ibrix_health command runs
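In practice, the summary form is run first and the detailed form only for hosts that look unhealthy. A hedged sketch using the -l and -i options as they appear elsewhere in this guide; the host name is a placeholder:

# Cluster-wide summary; look for Passed in the output
ibrix_health -l
# Detailed report for one suspect node
ibrix_health -i -h s1.hp.com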
indicating that the power PIC module has outdated or incompatible firmware. If this occurs, you can update the PIC firmware as follows:
1. Log on to the server.
2. Start hp-ilo:
service hp-ilo start
3. Flash the power PIC:
/opt/hp/mxso/firmware/power-pic.scexe
4. Reboot the server.

LUN status is failed
A LUN status of "failed" indicates that the logical drive has failed. This is usually the result of the failure of three or more disk drives. It can also happen if you remove the wrong disk drive when replacing a failed disk drive. If this situation occurs, take the following steps:
1. Carefully record any recent disk removal or reinsertion actions. Make sure you track the array, box, and bay numbers and know which disk drive was removed or inserted.
2. On 9720 systems, immediately run the following command:
exds_escalate
This gathers log information that is useful in diagnosing whether the data can be recovered. Generally, if the failure is due to real disk failures, the data cannot be recovered. However, if the failure is due to an inadvertent removal of a working disk drive, it may be possible to restore the LUN to operation.
3. Contact HP Support as soon as possible.

Apparent failure of HP P700m
Sometimes when a server is booted, the HP P700m cannot access the SAS fabric. This is more common when a new blade has just been inserted into the blade chassis, but can occur on other occasions. Symptoms include
notices
Dutch battery notice (translated)
WARNING: This device may contain a battery. Do not attempt to recharge batteries after removing them. Do not expose batteries to water or to temperatures above 60°C. Do not damage, disassemble, crush, or puncture the batteries. Do not short-circuit the external contacts, and do not allow the batteries to come into contact with water or fire. Use only HP-approved replacement batteries. Batteries, battery packs, and accumulators must not be disposed of with normal household waste. To hand them in for recycling or proper disposal, use the public collection system for small chemical waste, or return them to HP or an authorized HP Business or Service Partner. Contact an authorized supplier or a Business or Service Partner for more information about replacing or properly disposing of batteries.

French battery notice (translated)
WARNING: This device may contain batteries. Do not attempt to recharge batteries after removing them. Avoid bringing them into contact with water, and do not subject them to temperatures above 60°C. Do not attempt to disassemble, crush, or puncture the batteries. Do not attempt
weighs more than 22.5 kg (50 lb), at least two people must lift the component into the rack together. If the component is loaded into the rack above chest level, a third person must assist in aligning the rails while the other two support the device.

Rack warnings and precautions
Ensure that precautions have been taken to provide for rack stability and safety. It is important to follow these precautions, providing for rack stability and safety, and to protect both personnel and property. Follow all cautions and warnings included in the installation instructions.
WARNING! To reduce the risk of personal injury or damage to the equipment:
• Observe local occupational safety requirements and guidelines for heavy equipment handling.
• Obtain adequate assistance to lift and stabilize the product during installation or removal.
• Lower the leveling jacks to the floor.
• Rest the full weight of the rack on the leveling jacks.
• Attach stabilizing feet to the rack if it is a single-rack installation.
• Ensure the racks are coupled in multiple-rack installations.
• Fully extend the bottom stabilizers on the equipment. Ensure that the equipment is properly supported and braced when installing options and boards.
• Be careful when sliding rack components with slide rails into the rack. The slide rails could pinch your fingertips.
• Ensure that the rack is adequately stabilized before extending a rack component
failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information.
To retry the copy of configuration, use the command appropriate for your server:
• File serving node:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -s -o
• An agile node (a file serving node hosting the agile management console):
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s

Upgrading the IBRIX software to the 5.5 release
This section describes how to upgrade to the IBRIX software 5.5 release. The management console and all file serving nodes must be upgraded to the new release at the same time.
IMPORTANT: Do not start new remote replication jobs while a cluster upgrade is in progress. If replication jobs were running before the upgrade started, the jobs will continue to run without problems after the upgrade completes.
NOTE: If you are upgrading from an IBRIX 5.x release, any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.
Upgrades can be run either online or offline:
• Online upgrades: This procedure upgrades the software while file systems remain mounted. Before upgrading a file serving node, you will need to fail the node over to its backup node, allowing file system access to continue. This procedure cannot be used for major upgrades, but is appropriate
…filesystem myFS1. [Summary dialog: for each segment of file system myFS1, lists the segment number, LUN UUID, tier (for example, TierOneSixer or TierFourT), size, used storage, state, owner (myMSA), and the source and destination servers (for example, ib69s4 and ib69s5).]

The Summary dialog box lists the source and destination segments for the evacuation. Click Back to make any changes, or click Finish to start the evacuation.

The Active Tasks panel reports the status of the evacuation task. When the task is complete, it is added to the Inactive Tasks panel.

When the evacuation is complete, run the following command to retire the segment from the file system:
ibrix_fs -B -f FSNAME -n BADSEGNUMLIST
The segment number associated with the storage is not reused. The underlying LUN or volume can be reused in another file system or physically removed from the storage solution when this step is complete.

If quotas were disabled on the file system, unmount the file system and then re-enable quotas using the following command:
ibrix_fs -q -E -f FSNAME
Then remount the file system.

To evacuate a segment using the CLI, use the ibrix_evacuate command, as described in the HP IBRIX 9000 Storage CLI Reference Guide.

Troubleshooting segment evacuation
If segment evacua…
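For example, after evacuating segment 3 from a hypothetical file system named ifs1, the retire-and-re-enable sequence might look like the following sketch (all names are illustrative):

# Retire the evacuated segment from the file system
ibrix_fs -B -f ifs1 -n 3
# If quotas had been disabled: unmount, re-enable quotas, and remount
ibrix_umount -f ifs1
ibrix_fs -q -E -f ifs1
ibrix_mount -f ifs1 -m /mnt/ifs1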
…failover. [Add NIC dialog for server ib69s2: "Enter a NIC name of an existing physical interface (e.g., eth4 or bond1) to configure an active physical network. To create a virtual interface (VIF), enter a NIC name (e.g., bond1:1) based on an existing active physical network." In this example, the Name is bond0:1 with no IP address (inactive standby), Net Mask 255.255.255.0, and optional Route and MTU values.]

You can create additional user VIFs and assign standby NICs as needed. For example, you might want to add a user VIF for another share on server ib69s2 and assign a standby NIC on server ib69s1. You can also specify a physical interface such as eth4 and create a standby NIC on the backup server for it.

The NICs panel on the GUI shows the NICs on the selected server. In the following example, there are four NICs on server ib69s1: bond0 (the active cluster FM IP), bond0:0 (the management IP VIF; this server is hosting the active FM), bond0:1 (the NIC created in this example), and bond0:2 (a standby NIC for an active NIC on server ib69s2).

[Servers panel, updated Jun 15, 2012 4:32:36 PM PDT: lists status, name, state, CPU, Net MB/s, Disk MB/s, backup, and HA for servers ib69s1 and ib69s2, both Up, with each serving as the other's backup.]
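On the CLI, VIFs and standby NICs are managed with the ibrix_nic command. The following is a rough sketch only — the host and interface names are hypothetical, and the exact option syntax should be taken from the HP IBRIX 9000 Storage CLI Reference Guide:

# Sketch: add user VIF bond1:1 on ib69s2, then pair it with a standby NIC on ib69s1
ibrix_nic -a -n ib69s2/bond1:1
ibrix_nic -b -H ib69s2/bond1:1,ib69s1/bond1:1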
…retained after the failover. Check /usr/local/ibrix/log/statstool/stats.log for any errors.

NOTE: The reports generated before failover will not be available on the current active Fusion Manager.

Checking the status of Statistics tool processes
To determine the status of Statistics tool processes, run the following command:
# /etc/init.d/ibrix_statsmanager status
ibrix_statsmanager (pid 25322) is running...
In the output, the pid is the process ID of the master process.

Controlling Statistics tool processes
Statistics tool processes on all file serving nodes connected to the active Fusion Manager can be controlled remotely from the active Fusion Manager. Use the ibrix_statscontrol tool to start or stop the processes on all connected file serving nodes or on specified hostnames only.
• Stop processes on all file serving nodes, including the Fusion Manager:
/usr/local/ibrix/stats/bin/ibrix_statscontrol stopall
• Start processes on all file serving nodes, including the Fusion Manager:
/usr/local/ibrix/stats/bin/ibrix_statscontrol startall
• Stop processes on specific file serving nodes:
/usr/local/ibrix/stats/bin/ibrix_statscontrol stop <hostname1> <hostname2>
• Start processes on specific file serving nodes:
/usr/local/ibrix/stats/bin/ibrix_statscontrol start <hostname1> <hostname2>

Troubleshooting the Statistics tool
Testing access
To verify that ssh authentication is…
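For example, to bounce statistics collection on two specific nodes (hostnames hypothetical):

# Check the master process, then restart collection on two nodes
/etc/init.d/ibrix_statsmanager status
/usr/local/ibrix/stats/bin/ibrix_statscontrol stop ib69s1 ib69s2
/usr/local/ibrix/stats/bin/ibrix_statscontrol start ib69s1 ib69s2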
…information. To prevent damage to the system, be aware of the precautions you need to follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor could damage system boards or other static-sensitive devices. This type of damage could reduce the life expectancy of the device.

Preventing electrostatic discharge
To prevent electrostatic damage, observe the following precautions:
• Avoid hand contact by transporting and storing products in static-safe containers.
• Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.
• Place parts on a grounded surface before removing them from their containers.
• Avoid touching pins, leads, or circuitry.
• Always be properly grounded when touching a static-sensitive component or assembly.

Grounding methods
There are several methods for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm (±10 percent) resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
• Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on conductive floors or dissipating floor mats.
• Use conductive field service tools.
…ing the client.

You can locally override host tunings that have been set on 9000 Linux clients by executing the ibrix_lwhost command.

Tuning file serving nodes on the GUI
The Modify Server(s) Wizard can be used to tune one or more servers in the cluster. To open the wizard, select Servers from the Navigator and then click Tuning Options from the Summary panel.

The General Tunings dialog box specifies the communications protocol (TCP or UDP) and the number of admin and server threads. [Modify Server(s) Wizard, General Tunings dialog: all tunings — general, IAD, and module — are considered ADVANCED options and should be modified only by expert users for specific scenarios; typically, tunings are applied by qualified HP representatives during installation. Defaults: Protocol TCP, Number of Admin Threads 10, Number of Server Threads 10.]

The IAD Tunings dialog box configures the IBRIX administrative daemon. [IAD Tunings dialog: advanced IBRIX Administrative Daemon tunings, with default values and valid ranges shown — for example, cudServiceReleaseAttempts 300 (1–500), cudServiceReleaseRetryInterval 2 (1–500), fsFreezeAttempts…]
Making network changes .... 115
Changing the IP address for a Linux 9000 client .... 115
Changing the cluster interface .... 115
Managing routing table entries .... 116
Adding a routing table entry .... 116
Deleting a routing table entry .... 116
Deleting a network interface .... 116
Viewing network interface information .... 116
11 Migrating to an agile Fusion Manager configuration .... 118
Backing up the configuration .... 118
Performing the migration .... 118
Testing failover and failback of the agile Fusion Manager .... 120
Converting the original management console node to a file serving node hosting the agile Fusion Manager .... 121
12 Upgrading the IBRIX software to the 6.2 release .... 122
Upgrading 9720 chassis firmware .... 124
Online upgrades for IBRIX software 6.x to 6.2 .... 124
Preparing for the upgrade .... 124
Performing the upgrade .... 124
After the upgrade .... 125
Automated offline upgrades for IBRIX software 6.x to 6.2 .... 126
Preparing for the upgrade .... 126
Performing the upgrade .... 126
After the upgrade .... 127
Manual offline upgrades for IBRIX software 6.x to 6.2 .... 128
Preparing for the upgrade .... 128
Performing the upgrade ....
…ing nodes. The cluster is now completely shut down.

Starting up the system
To start an IBRIX 9720 system, first power on the hardware components and then start the IBRIX software.

Powering on the system hardware
To power on the system hardware, complete the following steps:
1. Power on the 9100cx disk capacity block(s).
2. Power on the 9100c controllers.
3. Wait for all controllers to report "on" in the 7-segment display.
4. Power on the file serving nodes.

Powering on after a power failure
If a power failure occurred, all of the hardware will power on at once when the power is restored. The file serving nodes will boot before the storage is available, preventing file systems from mounting. To correct this situation, wait until all controllers report "on" in the 7-segment display, and then reboot the file serving nodes. The file systems should then mount automatically.

Starting the IBRIX software
To start the IBRIX software, complete the following steps:
1. Power on the node hosting the active Fusion Manager.
2. Power on the file serving nodes (power on the owner of the root segment, segment 1, first if possible).
3. Monitor the nodes on the GUI and wait for them all to report Up in the output from the following command:
ibrix_server -l
4. Mount file systems and verify their content. Run the following command on the file serving node hosting the active Fusion Manager:
ibrix_mount -f <fs_name> -m <mountpoint>
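For example, a minimal startup check might look like this (file system name and mount point hypothetical):

# Wait until every node reports Up, then mount and spot-check the file system
ibrix_server -l
ibrix_mount -f ifs1 -m /mnt/ifs1
ls /mnt/ifs1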
…ing nodes. The Fusion Manager is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the Fusion Manager is installed in passive mode on that server.
• Upgrade Linux 9000 clients. See "Upgrading Linux 9000 clients" (page 131).
• If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade
Complete the following steps:
1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requested later in this procedure.
2. Upgrade your firmware as described in "Upgrading firmware" (page 136).
3. Run the following command to rediscover physical volumes:
ibrix_pv -a
4. Apply any custom tuning parameters, such as mount options.
5. Remount all file systems:
ibrix_mount -f <fsname> -m <mountpoint>
6. Re-enable High Availability if used:
ibrix_server -m
7. Start any Remote Replication, Rebalancer, or data tiering tasks that were stopped before the upgrade.
8. If you are using SMB, set the following parameters to synchronize the SMB software and the Fusion Manager database:
• smb signing enabled
• smb signing required
• ignore writethru
Use ibrix_cifsconfig to set the parameters, specifying the value…
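Steps 3, 5, and 6 might look like this on a cluster with a single file system (names hypothetical):

# Rediscover physical volumes, remount, and re-enable automated failover
ibrix_pv -a
ibrix_mount -f ifs1 -m /mnt/ifs1
ibrix_server -m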
…Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at <ibrixhome>/tmp/fmbackup.zip on that node. The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available. The passive Fusion Manager then copies the file to <ibrixhome>/tmp/fmbackup.zip on the node on which it is hosted. If a Fusion Manager is in maintenance mode, it will also be notified when a new backup file is created, and will retrieve it from the active Fusion Manager.

You can create an additional copy of the backup file at any time. Run the following command, which creates a fmbackup.zip file in the $IBRIXHOME/log directory:
$IBRIXHOME/bin/db_backup

Once each day, a cron job rotates the $IBRIXHOME/log directory into the daily subdirectory. The cron job also creates a new backup of the Fusion Manager configuration in both $IBRIXHOME/tmp and $IBRIXHOME/log.

To force a backup, use the following command:
ibrix_fm -B

IMPORTANT: You will need the backup file to recover from server failures or to undo unwanted configuration changes. Whenever the cluster configuration changes, be sure to save a copy of fmbackup.zip in a safe, remote location, such as a node on another cluster.

Using NDMP backup applications
The NDMP backup feature can be used to back up and recover…
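For example, to force a backup and keep a copy off-cluster (the remote host and destination path are hypothetical):

# Force a configuration backup, then copy the archive to a safe remote location
ibrix_fm -B
scp <ibrixhome>/tmp/fmbackup.zip root@remote-node:/var/backups/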
…functionality of the Express Query, they are typically due to another, unrelated event in the cluster or the file system. Therefore, most of the work to recover from an Express Query MIF is to check the health of the cluster and the file system, and to take corrective actions to fix the issues caused by these events. Once the cluster and file system have an OK status, the MIF status can be cleared, since the Express Query service recovers and restarts automatically.

In some very rare cases, database corruption might occur as a result of these external events or from some internal dysfunction. Express Query contains a recovery mechanism that tries to rebuild the database from information Express Query keeps specifically for that critical situation. Express Query might be unable to recover from internal database corruption. Even though it is unlikely, it is possible, and it might occur in the following two cases:
• The corrupted database has to be rebuilt from data that has already been backed up. If the data needed has been backed up, there is no automated way for Express Query to recover, since that information has been deleted from the IBRIX file system after the backup. It is, however, possible to replay the database logs from the backup.
• Some data needed to rebuild the database is corrupted, and therefore it cannot be used. Even though database files, as well as information used in database recovery, are well protected against corruption, corr…
…is configured on the user network, see "Node is not registered with the cluster network" (page 133) for a workaround.
• On 9720 systems, delete the existing vendor storage:
ibrix_vs -d -n EXDS
The vendor storage will be registered automatically after the upgrade.

Performing the upgrade
The online upgrade is supported only from the IBRIX 6.x to 6.1 release. Complete the following steps:
1. Obtain the latest HP IBRIX 6.1 ISO image from the IBRIX 9000 software dropbox. Contact HP Support to register for the release and obtain access to the dropbox.
2. Mount the ISO image and copy the entire directory structure to the /root/ibrix directory on the disk running the OS.
3. Change directory to /root/ibrix on the disk running the OS, and then run chmod -R 777 on the entire directory structure.
4. Run the upgrade script and follow the on-screen directions:
./auto_online_ibrixupgrade
5. Upgrade Linux 9000 clients. See "Upgrading Linux 9000 clients" (page 131).
6. If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade
Complete these steps:
• Start any Remote Replication, Rebalancer, or data tiering tasks that were stopped before the upgrade.
• If your cluster includes G6 servers, check the iLO2 firmware version. The firmware must be at version 2.05 for HA to function properly. If your servers have an earlier version of the iLO2 firmware…
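A sketch of steps 2 through 4, assuming the ISO was downloaded to /tmp (the ISO filename and mount point are hypothetical):

# Mount the ISO, copy its contents to /root/ibrix, and launch the upgrade
mkdir -p /mnt/iso && mount -o loop /tmp/ibrix-6.1.iso /mnt/iso
cp -r /mnt/iso/* /root/ibrix/
cd /root/ibrix && chmod -R 777 .
./auto_online_ibrixupgrade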
…wait for it to reboot.
Boot one affected server and run the following command:
exds_stdiag
10. If the X9700c controllers can be seen, boot the other affected servers and run exds_stdiag on each. If they also see the X9700c controllers, the procedure is completed; otherwise, continue to the next step.
11. Perform the following steps for each X9700c controller in turn:
a. Slide out the controller until the LEDs extinguish.
b. Reinsert the controller.
c. Wait for the seven-segment display to show "on".
d. Run the exds_stdiag command on the affected server. If OK, the procedure is completed; otherwise, repeat steps a through d on the next controller.
12. If the above steps do not produce results, replace the P700m.
13. Boot the server and run exds_stdiag.
14. If you still cannot see the X9700c controllers, repeat the procedure, starting with step 1.

If the system is not in production, you can use the following shorter procedure:
1. Power off all server blades.
2. Using the OA, power off both SAS switches.
3. Power on both SAS switches and wait until they are up.
4. Power on all server blades.
5. Run exds_stdiag. If exds_stdiag indicates that there are no problems, the procedure is completed; otherwise, continue to the next step.
6. Power off all X9700c enclosures.
7. Power on all enclosures.
8. Wait until all seven-segment displays show "on", then power on all server blades.
9. If the HP P700m still can…
…utility is useful if you need to configure LUNs. It also allows you to look at the state of arrays. Use the hpacucli command on any server in the system. Do not start multiple copies of hpacucli on several different servers at the same time.

CAUTION: Do not create LUNs unless instructed to do so by HP Support.

POST error messages
For an explanation of server error messages, see the "POST error messages and beep codes" section in the HP ProLiant Servers Troubleshooting Guide at http://www.hp.com/support/manuals.

IBRIX 9730 controller error messages
If a controller does not power up during system boot, contact HP Support and provide the lockup code that appears on POST. The following table lists lockup codes. The first character is the lockup type. The second character is 1 or 2, depending on whether the controller considers itself to be a MASTER or SLAVE. The last two characters are the code.

Lockup code    Description
Cn01           Hardware not supported
Cn03           Firmware not supported
Cn04           Memory modules did not match
Cn05           Controller did not receive location string from hardware
Cn10           TBM not installed or not detected
Cn11           TBM did not successfully configure SAS2 zoning
Fn00           Heap has run out of memory
Fn01           Firmware assertion
Fn10           TLB entry contains an invalid value
Fn11           Tried to access an invalid TLB register
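For read-only inspection of the array state, a standard hpacucli query such as the following can be used (generic Smart Array usage, not quoted from this guide):

# Display the configuration of all detected Smart Array controllers
hpacucli ctrl all show config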
…Drive: Displays information about each drive in a storage cluster — Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Location, Properties.

Table 2 Obtaining detailed information about a server (continued)

Panel name                                  Information provided
Storage Controller (displayed for a server)  Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Location, Message (diagnostic message)
Volume (displays volume information for each server)  Status, Type, Name, UUID, Properties
Storage Controller (displayed for a storage cluster)  Status, Type, UUID, Serial Number, Model, Firmware Version, Message (diagnostic message)
Battery (displayed for each storage controller)  Status, Type, UUID, Properties
IO Cache Module (displayed for a storage controller)  Status, Type, UUID, Properties
Temperature Sensor (displays information for each temperature sensor)  Status, Type, Name, UUID, Locations, Properties

Monitoring 9720/9730 hardware
Monitoring storage and storage components
Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components. [Vendor Storage panel, updated Oct 2, 2012 5:06:55 AM EST: shows system status, type, and event status for the last 24 hours, with the Navigator tree listing Snapshots, Servers, File Shares, NFS, CIFS, FTP, HTTP, and Certificates.]
Insert the Quick Restore DVD into the server's DVD-ROM drive. Restart the server to boot from the DVD-ROM. When the IBRIX 9000 Network Storage System screen appears, enter "qr" to install the IBRIX software on the file serving node. The server reboots automatically after the software is installed. Remove the DVD from the DVD-ROM drive.

Restoring the node configuration
Complete the following steps on each node, starting with the previous active management console:
1. Log in to the node. The configuration wizard should pop up; escape out of the configuration wizard.
2. Attach the external storage media containing the saved node configuration information.
3. Restore the configuration. Run the following restore script, passing in the .tgz file containing the node's saved configuration information as an argument:
/usr/local/ibrix/setup/restore <saved_config>.tgz
4. Reboot the node.

Completing the upgrade
Complete the following steps:
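Step 3 might look like this if the configuration archive was saved to external media mounted at /mnt/usb (the path and archive name are hypothetical):

# Restore the node's saved configuration, then reboot
/usr/local/ibrix/setup/restore /mnt/usb/node1_config.tgz
reboot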
…backup pair. The server can be powered down or remain up during the procedure. You can perform a manual failover at any time, regardless of whether automated failover is in effect. Manual failover does not require the use of a programmable power supply. However, if you have identified a power supply for the server, you can power it down before the failover.

Use the GUI or the CLI to fail over a file serving node:
• On the GUI, select the node on the Servers panel and then click Failover on the Summary panel.
• On the CLI, run ibrix_server -f, specifying the node to be failed over as the HOSTNAME. If appropriate, include the -p option to power down the node before segments are migrated:
ibrix_server -f [-p] -h HOSTNAME

Check the Summary panel or run the following command to determine whether the failover was successful:
ibrix_server -l
The STATE field indicates the status of the failover. If the field persistently shows Down,InFailover or Up,InFailover, the failover did not complete; contact HP Support for assistance. For information about the values that can appear in the STATE field, see "What happens during a failover" (page 40).

Failing back a server
After an automated or manual failover of a server, you must manually fail back the server, which restores ownership of the failed-over segments and network interfaces to the server. Before failing back the server, confirm that it can see all of its…
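For example, to fail over node ib69s1 with a power-down and then watch its state (hostname hypothetical):

# Fail over the node, powering it down before segments are migrated
ibrix_server -f -p -h ib69s1
ibrix_server -l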
[Fans panel: lists each Active Cool 200 Fan by bay — Fan Bay 4 (6930 RPM), Fan Bay 5 (6529 RPM), Fan Bay 6 (9500 RPM), Fan Bay 7 (5499 RPM), Fan Bay 8 (5500 RPM), Fan Bay 9 (5500 RPM), and Fan Bay 10 (5499 RPM) — each with an OK status.]

The sub-nodes under the Blade Enclosure node provide information about the hardware components within the blade enclosure.

Table 1 Obtaining detailed information about a blade enclosure

Panel name                                   Information provided
Bay                                          Status, Type, Name, UUID, Serial number, Model, Properties
Temperature Sensor (displays information for a bay, OA module, or the blade enclosure)  Status, UUID, Properties
Fan (displays information for a blade enclosure)  Status, Type, Name, UUID, Location, Properties
OA Module                                    Status, Type, Name, UUID, Serial number, Model, Firmware version, Location, Properties
Power Supply                                 Status, Type, Name, UUID, Serial number, Location
Shared Interconnect                          Status, Type, Name, UUID, Seri…
[Mounted file systems panel: columns for Host, Mountpoint, and Access State, with sortable entries such as 143_fs1 mounted on two hosts.]

Adding user accounts for GUI access
IBRIX software supports administrative and user roles. When users log in under the administrative role, they can configure the cluster and initiate operations such as remote replication or snapshots. When users log in under the user role, they can view the cluster configuration and status but cannot make configuration changes or initiate operations.

The default administrative user name is ibrix. The default regular user name is ibrixuser. User names for the administrative and user roles are defined in the /etc/group file. Administrative users are specified in the ibrix-admin group, and regular users are specified in the ibrix-user group. These groups are created when IBRIX software is installed. The following entries in the /etc/group file show the default users in these groups:
ibrix-admin:x:501:root,ibrix
ibrix-user:x:502:ibrix,ibrixUser,ibrixuser

You can add other users to these groups as needed, using Linux procedures. For example:
adduser -G <groupname> <username>
When using the adduser command, be sure to include the -G option.

Using the CLI
The administrative commands described in this guide must be executed on the Fusion Manager host and require root privileges. The commands are located in $IBRIXHOME/bin. For complete informatio…
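For example, to give a colleague view-only GUI access (username hypothetical):

# Add a user to the ibrix-user group, then assign a password
adduser -G ibrix-user jsmith
passwd jsmith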
…l network bond1: Hardware OK; other systems seen on physical network
Warning: bond2 not UP and RUNNING
Warning: bond2 — only this server seen on physical network; possible hardware problem
Interconnect Bay 1 has external uplinks: Port 7 Status Linked, Active, 100Mb (Onboard Administrator connection)
Interconnect Bay 2 has external uplinks: Port 7 Status Linked, Standby, 100Mb (Onboard Administrator connection)
Networking Diagnostics Completed

exds_netperf utility
The exds_netperf utility measures network performance. The tool measures performance between a client system and the 9720 Storage. Run this test when the system is first installed. Where networks are working correctly, the performance results should match the expected link rate of the network; that is, for a 1GbE link, expect about 90 MB/s. You can also run the test at other times to determine whether degradation has occurred.

The exds_netperf utility measures streaming performance in two modes:
• Serial: Streaming I/O is done to each network interface in turn. The host where exds_netperf is run is the client that is being tested.
• Parallel: Streaming I/O is done on all network interfaces at the same time. This test uses several clients.
The serial test measures point-to-point performance. The parallel test measures more components of the network infrastructure and could uncover problems not visible with the serial test. Keep in mind that overall…
…l node, the following information is displayed in the Pool panel:
• Status
• Type
• Name
• UUID
• Properties

To obtain details on the volumes in the pool, expand the Pool node and then select the Volume node. The following information is displayed for the volume in the pool:
• Status
• Type
• Name
• UUID
• Properties

The following image shows information for two volumes, named LUN 15 and LUN 16, on the Volume panel. [Volume panel: each volume shows Status OK, Type volume, Name, UUID, and Properties such as raidLevel RAID6, capacity 6.37 TB, and localDevice /dev/sdo or /dev/sdp.]

Monitoring storage controllers for a storage cluster
The Management Console displays a Storage Controller node for each storage controller in the storage cluster. Select the Storage Controller node to view the following information for the selected storage controller:
• Status
• Type
• UUID
• Serial Number
• Model
• Firmware Version
• Location
• Message
• Diagnostic Message

Expand the Storage Controller node to obtain information about the battery and IO cache module for the storage controller.

Monitoring the batteries for a storage controller
The Battery panel displays the following information:
• Status
• Type
• UUID
• Properties — provides information about…
…readable  PASSED
/dev/mpath/mpath1: Physical volume h7krR6-2pxA-M8bD-dkdf-3PK7-iwFE-L17jcD readable  PASSED
/dev/mpath/mpath0: Physical volume voXTso-a2KQ-MWCN-tGcu-10Bs-ejWG-YrKLEe readable  PASSED
/dev/mpath/mpath3:

Check: Iad and Fusion Manager consistent
Check Description                                                        Result   Result Information
bv18-03 engine uuid matches on Iad and Fusion Manager                    PASSED
bv18-03 IP address matches on Iad and Fusion Manager                     PASSED
bv18-03 network protocol matches on Iad and Fusion Manager               PASSED
bv18-03 engine connection state on Iad is up                             PASSED
bv18-04 engine uuid matches on Iad and Fusion Manager                    PASSED
bv18-04 IP address matches on Iad and Fusion Manager                     PASSED
bv18-04 network protocol matches on Iad and Fusion Manager               PASSED
bv18-04 engine connection state on Iad is up                             PASSED
ibrixFS file system uuid matches on Iad and Fusion Manager               PASSED
ibrixFS file system generation matches on Iad and Fusion Manager         PASSED
ibrixFS file system number segments matches on Iad and Fusion Manager    PASSED
ibrixFS file system mounted state matches on Iad and Fusion Manager      PASSED
Superblock owner for segment 4 of filesystem ibrixFS on bv18-04 matches on Iad and Fusion Manager  PASSED
Superblock owner for segment 3 of filesystem ibrixFS on bv18-04 matches on Iad and Fusion Manager  PASSED
Superblock owner for segment 2 of filesystem ibrixFS on bv18-04 matches on Iad and Fusion Manager  PASSED
Superblock owner for segment 1 of filesystem ibrixFS on bv18-04 matches o…
…file systems, 61
Fusion Manager configuration, 61
NDMP applications, 61
battery replacement notices, 247
booting
  server blades, 16
  the system, 16
C
cabling diagrams, 9720, 224
capacity blocks, 9720
  overview, 222
clients
  access virtual interfaces, 37
cluster
  events, monitor, 87
  health checks, 88
  license key, 135
  license, view, 135
  log files, 90
  operating statistics, 90
  version numbers, view, 158
cluster interface
  change network, 115
  defined, 112
component monitoring, 9720, 153
contacting HP, 176
controller error messages, 9730, 150
core dump, 52
D
Disposal of waste equipment, European Union, 243
document
  related documentation, 176
documentation
  providing feedback on, 178
E
email event notification, 55
error messages
  POST, 150
escalating issues, 146
events, cluster
  add SNMPv3 users and groups, 59
  configure email notification, 55
  configure SNMP agent, 57
  configure SNMP notification, 57
  configure SNMP trapsinks, 58
  define MIB views, 59
  delete SNMP configuration elements, 60
  enable or disable email notification, 56
  list email notification settings, 56
  list SNMP configuration, 60
  monitor, 87
  remove, 88
  types, 55
  view, 87
exds_escalate command, 146
exds_netdiag command, 148
exds_netperf command, 148
exds_stdiag utility, 147
F
failover
  automated, 37
  configure automated failover manually, 46
  crash capture, 52
  fail back a node, 49
  manual, 48
  NIC, 36
  server, 40
  troubles…
…lly.

Fusion Manager failover and the Statistics tool configuration
In a High Availability environment, the Statistics tool fails over automatically when the Fusion Manager fails over. You do not need to take any steps to perform the failover. The statistics configuration changes automatically as the Fusion Manager configuration changes. The following actions occur after a successful failover:
• If Statstool processes were running before the failover, they are restarted. If the processes were not running, they are not restarted.
• The Statstool passive management console is installed on the IBRIX Fusion Manager in maintenance mode.
• Setrsync is run automatically on all cluster nodes from the current active Fusion Manager.
• Loadfm is run automatically to present all file system data in the cluster to the active Fusion Manager.
• The stored cluster-level database generated before the Fusion Manager failover is moved to the current active Fusion Manager, allowing you to request reports for the specified range if pre-generated reports are not available under the Hourly, Daily, and Weekly categories. See "Generating reports" (page 94).

NOTE: If the old active Fusion Manager is not available (pingable) for more than two days, the historical statistics database is not transferred to the current active Fusion Manager.
• If configurable parameters were set before the failover, the parameters are retained after the failover.
…lly, all NDMP actions are controlled from the DMA. However, if the DMA cannot resolve a problem, or you suspect that the DMA may have incorrect information about the NDMP environment, take the following actions from the GUI or CLI:
• Cancel one or more NDMP sessions on a file serving node. Canceling a session stops all spawned session processes and frees their resources, if necessary.
• Reset the NDMP server on one or more file serving nodes. This step stops all spawned session processes, stops the ndmpd and session monitor daemons, frees all resources held by NDMP, and restarts the daemons.

Viewing or canceling NDMP sessions
To view information about active NDMP sessions, select Cluster Configuration from the Navigator, and then select NDMP Backup > Active Sessions. For each session, the Active NDMP Sessions panel lists the host used for the session, the identifier generated by the backup application, the status of the session (backing up data, restoring data, or idle), the start time, and the IP address used by the DMA. To cancel a session, select that session and click Cancel Session. Canceling a session kills all spawned session processes and frees their resources, if necessary.

[Active NDMP Sessions panel: for example, host imvm2, identifier 15543, type IDLE, start time Wed May 26 22:46:39 2010, DMA IP address 192.168.10.2; and host imvm2, identifier 16769, type DATA BACK…]
…local hardware clock on the agile Fusion Manager node is used as the time source. This configuration method ensures that the time is synchronized on all cluster nodes, even in the absence of an external time source. On 9000 clients, the time is not synchronized with the cluster nodes; you will need to configure NTP servers on 9000 clients.

List the currently configured NTP servers:
ibrix_clusterconfig -i -N
Specify a new list of NTP servers:
ibrix_clusterconfig -c -N SERVER1,...,SERVERn

Configuring HP Insight Remote Support on IBRIX 9000 systems
IMPORTANT: In the IBRIX software 6.1 release, the default port for the IBRIX SNMP agent changed from 5061 to 161. This port number cannot be changed.

Prerequisites
The required components for supporting IBRIX systems are preinstalled on the file serving nodes. You must install HP Insight Remote Support on a separate Windows system, termed the Central Management Server (CMS):
• HP Systems Insight Manager (HP SIM): This software manages HP systems and is the easiest and least expensive way to maximize system uptime and health.
• Insight Remote Support Advanced (IRSA): This version is integrated with HP Systems Insight Manager (SIM). It provides comprehensive remote monitoring, notification/advisories, dispatch, and proactive service support. IRSA and HP SIM together are referred to as the CMS.
The following versions of the software are supported:
• HP SIM 6.…
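For example, to point the cluster at a pair of site NTP servers (the hostnames are hypothetical, and a comma-separated list is assumed here):

# Review, then replace, the cluster's NTP server list
ibrix_clusterconfig -i -N
ibrix_clusterconfig -c -N ntp1.example.com,ntp2.example.com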
…log back on, or reboot the machine. You can also open a DOS command window and access the drive manually.

Mounted drive not visible when using Terminal Server
Refresh the browser's view of the system by logging off and then logging back on.

9000 client auto-startup interferes with debugging
The 9000 client is set to start automatically, which can interfere with debugging a Windows 9000 client problem. To prevent this, reboot the machine in safe mode and change the Windows 9000 client service mode to manual, which enables you to reboot without starting the client:
1. Open the Services control manager (Control Panel > Administrative Tools > Services).
2. Right-click 9000 client Services and select Properties.
3. Change the startup type to Manual, and then click OK.
4. Debug the client problem. When finished, switch the Windows 9000 client service back to automatic startup at boot time by repeating these steps and changing the startup type to Automatic.

Mode 1 or mode 6 bonding
HP recommends the use of 10 Gbps networking and mode 1 bonding with the 9720 system. If 1 Gbps networking must be used, and network bandwidth appears to be a limiting factor even with all VirtualConnect ports (X1 to X6) populated, you may consider using mode 6 (active/active) bonding for additional bandwidth. However, mode 6 bonding is more sensitive to issues in the network topology and has been seen to cause storms of ARP…
[Events panel: recent events such as a user logged in from host 16.213.41.14, a Snapshot Space alert, and "Running on Instant On license" (Jun 15 18:21:35).]

System Status
The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events:
• Alerts: Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
• Warnings: Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition. Examples are a very high server CPU utilization level or a quota limit close to the maximum.
• Information: Normal events that change the cluster. Examples are mounting a file system or creating a segment.

Cluster Overview
The Cluster Overview provides the following information:
• Capacity: The amount of cluster storage space that is currently free or in use.
• File systems: The current health status of the file systems in the cluster. The overview reports the number of file systems in each state (healthy, experiencing a warning, experiencing an alert, or unknown).
• Segment Servers: The current health status of the file serving nodes in the cluster. The overview reports the number of nodes in each state (healthy, experiencing a warning, experiencing an alert, or unknown).
• Services: Whether the specified file system…
…lowing is an example of the server components that are displayed:

[Sample output of hpsp_fmt -lc on node ib121-121: lists component types such as Integrated Lights-Out (iLO), Server Blade Power Management Controller, HP Smart Array Controller, Network Adapter, Storage Controller, Server HDD, Storage Enclosure I/O Module, Storage Enclosure Hard Disk Drives, Onboard Administrator for C7000, and Ethernet Module/Switch.]

Steps for upgrading the firmware
IMPORTANT: The 10Gb NIC driver is updated during the IBRIX 6.2.x software upgrade. However, the new driver is not utilized (loaded) until the server has been rebooted. If you run the upgrade firmware tool (hpsp_fmt) before you reboot the server, the tool detects that the old driver is still being used.

To upgrade the firmware for components:
1. Run the /opt/hp/platform/bin/hpsp_fmt -fr command to verify that the firmware on this node and subsequent nodes in this cluster is correct and up to date. This command should be performed before placing the cluster back into service. The following figure shows an example of the firmware recommendation output and corrective component upgrade (flash).

IMPORTANT: Upgrade the firmware in the following order: 1. Server; 2. Chassis; 3. Storage.

2. Do the following, based on the Proposed Action and Severity status in the Proposed Action column.
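For example, the inventory and recommendation steps together (paths as given in the text; output varies by node):

# List the flashable components, then get firmware recommendations
/opt/hp/platform/bin/hpsp_fmt -lc
/opt/hp/platform/bin/hpsp_fmt -fr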
…lows you to create, edit, and view documents written in different locales using UTF-8. IBRIX software supports modifying the /etc/sysconfig/i18n configuration file for your locale. The following example sets the LANG and SUPPORTED variables for multiple character sets:
LANG="ko_KR.utf8"
SUPPORTED="en_US.utf8:en_US:en:ko_KR.utf8:ko_KR:ko:zh_CN.utf8:zh_CN:zh"
SYSFONT="lat0-sun16"
SYSFONTACM="iso15"

Logging in to the system
Using the network
Use ssh to log in remotely from another host. You can log in to any server using any configured site network interface (eth1, eth2, or a bond). With ssh and the root user, after you log in to any server, your ssh known_hosts file will work with any server in the cluster. The original server blades in your cluster are configured to support password-less ssh. After you have connected to one server, you can connect to the other servers without specifying the root password again. To enable the same support for other server blades, or to access the system itself without specifying a password, add the keys of the other servers to ~/.ssh/authorized_keys on each server blade.

Using the TFT keyboard/monitor
If the site network is down, you can log in to the console as follows:
1. Pull out the keyboard/monitor. See "Front view of a base cabinet" (page 215).
2. Access the on-screen display (OSD) main dialog box by pressing Print Scrn or by pressing Ctrl twice within one second.
3. …
…luster" (page 40) for more information about creating standby backup pairs, where each server in a pair is the standby for the other.

Use one of the following schemes for the reboot:
• Reboot the file serving nodes one at a time.
• Divide the file serving nodes into two groups, with the nodes in the first group having backups in the second group, and the nodes in the second group having backups in the first group. You can then reboot one group at a time.

To perform the rolling reboot, complete the following steps on each file serving node:
1. Reboot the node directly from Linux. Do not use the Power Off functionality in the GUI, as it does not trigger failover of file serving services. The node will fail over to its backup.
2. Wait for the GUI to report that the rebooted node is Up.
3. From the GUI, fail back the node, returning services to the node from its backup. Run the following command on the backup node:
ibrix_server -f -U -h HOSTNAME
HOSTNAME is the name of the node that you just rebooted.

Starting and stopping processes
You can start, stop, and restart processes, and can display status for the processes that perform internal IBRIX software functions. The following commands also control the operation of PostgreSQL on the machine. The PostgreSQL service is available at /usr/local/ibrix/init.

To start and stop processes, and view process status, on the Fusion Manager, use the following command:
/etc/init.d/ibrix_fusionmanager…
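For example, the per-node reboot cycle looks like this (hostname hypothetical):

# On the node itself: reboot directly from Linux; its backup takes over
reboot
# After the GUI shows the node Up again, run this on the backup node to fail it back
ibrix_server -f -U -h ib69s1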
…m, file serving nodes must have current information about the file system. HP recommends that you execute ibrix_health on a regular basis to monitor the health of this information. If the information becomes outdated on a file serving node, execute ibrix_dbck -o to resynchronize the server's information with the configuration database. For information on ibrix_health, see "Monitoring cluster health" (page 88).

NOTE: The ibrix_dbck command should be used only under the direction of HP Support.

To run a health check on a file serving node, use the following command:
ibrix_health -i -h HOSTLIST
If the last line of the output reports "Passed", the file system information on the file serving node and Fusion Manager is consistent.

To repair file serving node information, use the following command:
ibrix_dbck -o -f FSNAME -h HOSTLIST
To repair information on all file serving nodes, omit the HOSTLIST argument.

Troubleshooting an Express Query Manual Intervention Failure (MIF)
An Express Query Manual Intervention Failure (MIF) is a critical error that occurred during Express Query execution. These are failures Express Query cannot recover from automatically. After a MIF occurrence, the specific file system is logically removed from the Express Query, and it requires a manual intervention to perform the recovery. Although these errors inhibit the normal funct…
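For example, checking one node and then repairing its view of a file system (names hypothetical):

# Health-check node ib69s1; if the check fails, resynchronize file system ifs1
ibrix_health -i -h ib69s1
ibrix_dbck -o -f ifs1 -h ib69s1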
…m, run the ibrix_nic -l and ibrix_fm -f commands. Verify that the TYPE for bond1 is set to Cluster and that the IP_ADDRESS for both nodes matches the subnet or network on which your management consoles are registered. For example:

[root@ib121-121 fmt]# ibrix_nic -l
HOST                            IFNAME   TYPE     STATE      IP_ADDRESS     BACKUP HOST  ROUTE  VLAN TAG  LINKMON
ib121-121                       bond1    Cluster  Up,LinkUp  10.10.121.121                                No
ib121-122                       bond1    Cluster  Up,LinkUp  10.10.121.122                                No
ib121-121 (Active FM, Nonedit)  bond1:0  Cluster  Up,LinkUp  10.10.121.220                                No

[root@ib121-121 fmt]# ibrix_fm -f
NAME        IP ADDRESS
ib121-121   10.10.121.121
ib121-122   10.10.121.122

If there is a mismatch on your system, you will see errors when connecting to ports 1234 and 9009. To correct this condition, see "Moving the Fusion Manager VIF to bond1" (page 188).
9. Because of a change in the inode format, files used for snapshots must either be created on IBRIX 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. For more information about upgrading a file system, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).

Upgrading Linux 9000 clients
Be sure to upgrade the cluster nodes before upgrading Linux 9000 clients. Complete the following steps on each client:
1. Download the latest HP 9000 client 6.1 package.
2. Exp…
…mation about Fusion Manager .... 40
Configuring High Availability on the cluster .... 40
What happens during failover .... 40
Configuring automated failover with the HA Wizard .... 41
Configuring automated failover manually .... 46
Changing the HA configuration manually .... 48
Failing a server over manually .... 48
Failing back a server .... 49
Setting up HBA monitoring .... 49
Checking the High Availability configuration .... 51
Capturing a core dump from a failed node .... 52
Prerequisites for setting up the crash capture .... 53
Setting up nodes for crash capture .... 53
5 Configuring cluster event notification .... 55
Cluster events .... 55
Setting up email notification of cluster events .... 55
Associating events and email addresses .... 55
Configuring email notification settings .... 56
Dissociating events and email addresses .... 56
Testing email addresses .... 56
Viewing email notification settings .... 56
Setting up SNMP notifications .... 57
Configuring the SNMP agent .... 57
Configuring trapsinks ....
…ment (for example, personal computers). The FCC requires devices in both classes to bear a label indicating the interference potential of the device, as well as additional operating instructions for the user.

FCC rating label
The FCC rating label on the device shows the classification (A or B) of the equipment. Class B devices have an FCC logo or ID on the label; Class A devices do not. After you determine the class of the device, refer to the corresponding statement.

Class A equipment
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at personal expense.

Class B equipment
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment…
…more hostgroups. Setting host tunings on a hostgroup is a convenient way to tune a set of clients all at once. To set the same host tunings on all clients, specify the clients hostgroup.

CAUTION: Changing host tuning settings alters file system performance. Contact HP Support before changing host tuning settings.

Use the ibrix_host_tune command to list or change host tuning settings:
• To list default values and valid ranges for all permitted host tunings:
ibrix_host_tune -L
• To tune host parameters on nodes or hostgroups:
ibrix_host_tune -S {-h HOSTLIST | -g GROUPLIST} -o OPTIONLIST
Contact HP Support to obtain the values for OPTIONLIST. List the options as option=value pairs, separated by commas. To set host tunings on all clients, include the -g clients option.
• To reset host parameters to their default values on nodes or hostgroups:
ibrix_host_tune -U {-h HOSTLIST | -g GROUPLIST} [-n OPTIONS]
To reset all options on all file serving nodes, hostgroups, and 9000 clients, omit the -h HOSTLIST and -n OPTIONS options. To reset host tunings on all clients, include the -g clients option.

The values that are restored depend on the values specified for the -h HOSTLIST option:
• File serving nodes: The default file serving node host tunings are restored.
• 9000 clients: The host tunings that are in effect for the default clients hostgroup are restored.
• Hostgroups: The host tunings that are in effect for the parent of the specified hostgr…
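For example (the option name and value below are placeholders — obtain real tuning names and values from HP Support):

# List the tunable options, then apply one to all clients
ibrix_host_tune -L
ibrix_host_tune -S -g clients -o <option>=<value>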
…moving any access covers for non-hot-pluggable areas.
• Do not replace non-hot-pluggable components while power is applied to the product. Power off the device, and then disconnect all AC power cords.
• Do not exceed the level of repair specified in the procedures in the product documentation. All troubleshooting and repair procedures are detailed to allow only subassembly- or module-level repair. Because of the complexity of the individual boards and subassemblies, do not attempt to make repairs at the component level or to make modifications to any printed wiring board. Improper repairs can create a safety hazard.

WARNING: To reduce the risk of personal injury or damage to the equipment, the installation of non-hot-pluggable components should be performed only by individuals who are qualified in servicing computer equipment, knowledgeable about the procedures and precautions, and trained to deal with products capable of producing hazardous energy levels.

WARNING: To reduce the risk of personal injury or damage to the equipment, observe local occupational health and safety requirements and guidelines for manually handling material.

CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation duri…
…on Iad and Fusion Manager  PASSED

Viewing logs
Logs are provided for the Fusion Manager, file serving nodes, and 9000 clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP.

Viewing and clearing the Integrated Management Log (IML)
The IML logs hardware errors that have occurred on a server blade. View or clear events using the hpasmcli command.

Viewing operating statistics for file serving nodes
Periodically, the file serving nodes report the following statistics to the Fusion Manager:
• Summary: General operational statistics, including CPU usage, disk throughput, network throughput, and operational state. For information about the operational states, see "Monitoring the status of file serving nodes" (page 86).
• IO: Aggregate statistics about reads and writes.
• Network: Aggregate statistics about network inputs and outputs.
• Memory: Statistics about available total, free, and swap memory.
• CPU: Statistics about processor and CPU activity.
• NFS: Statistics about NFS client and server activity.

The GUI displays most of these statistics on the dashboard. See "Using the GUI" (page 16) for more information. To view the statistics from the CLI, use the following command:
ibrix_stats -l [-s] [-c] [-i] [-n] [-f] [-h HOSTLIST]
Use the options to view only certain statistics or to view statistics for specific file serving nodes.
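The hpasmcli tool accepts IML subcommands such as "show iml" and "clear iml"; a typical sequence might look like this (standard ProLiant tooling, not quoted from this guide):

# Display, then clear, the Integrated Management Log on a blade
hpasmcli -s "show iml"
hpasmcli -s "clear iml"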
…n Manager installed on different file serving nodes in the cluster. The migration procedure configures the current Management Server blade as a host for an agile Fusion Manager and installs another instance of the agile Fusion Manager on a file serving node. After completing the migration to the agile Fusion Manager configuration, you can use the original Management Server blade as follows:
• Use the blade only as a host for the agile Fusion Manager.
• Convert the blade to a file serving node to support high availability (the cluster must have an even number of file serving nodes). The blade can continue to host the agile Fusion Manager.

To perform the migration, the IBRIX installation code must be available. As delivered, this code is provided in /tmp/X9720/ibrix. If this directory no longer exists, download the installation code from the HP support website for your storage system.

IMPORTANT: The migration procedure can be used only on clusters running HP IBRIX software 5.4 or later.

Backing up the configuration
Before starting the migration to the agile Fusion Manager configuration, make a manual backup of the Fusion Manager configuration:
ibrix_fm -B
The resulting backup archive is located at /usr/local/ibrix/tmp/fmbackup.zip. Save a copy of this archive in a safe, remote location in case recovery is needed.

Performing the migration
Complete the following steps on the blade currently hosting the Fusion Manager:
1. The agile F…
…n Severity column:

Status in Severity column        Go to
UPGRADE MANDATORY                Step 3
UPGRADE RECOMMENDED              Step 3 is optional. However, it is recommended to perform step 3 for system stability and to avoid any known issues.
NONE or DOWNGRADE MANDATORY      Step 4
NONE or DOWNGRADE RECOMMENDED    Step 4 is optional. However, it is recommended to perform step 4 for system stability and to avoid any known issues.

3. Perform the flash operation by entering the following command, and then go to step 5:
hpsp_fmt -flash -c <component_name>
The following screenshot displays a successful flash operation. [Screenshot: hpsp_fmt flash output on node ib149-124; depending on the configuration, the flash may take 10 minutes or more; Integrated Lights-Out (iLO) flashed successfully, no reboot required.]
4. Perform the flash operation by entering the following command, and then go to step 5:
hpsp_fmt -flash -c <component_name> -force
5. If the components require a reboot on flash, fail over the FSN for continuous operation, as described in the following steps.
NOTE: Although the following steps are based on a two-node cluster, all steps can be used in multiple-node clusters.
a. Determine whether the node to be flashed is the active Fusion Manager by entering the following command:
ibrix_fm -i
b. Perform a manual FM failover on the local node by entering the following command from the active Fusion Manager:
ibrix_fm -m nofmfailover server1
The FM failover will take approximately one minu…
…n about the commands, see the HP IBRIX 9000 Network Storage System CLI Reference Guide. When using ssh to access the machine hosting the Fusion Manager, specify the IP address of the Fusion Manager user VIF.

Starting the array management software
Depending on the array type, you can launch the array management software from the GUI. In the Navigator, select Vendor Storage, select your array from the Vendor Storage page, and click Launch Storage Management.

9000 client interfaces
9000 clients can access the Fusion Manager as follows:
• Linux clients: Use Linux client commands for tasks such as mounting or unmounting file systems and displaying statistics. See the HP IBRIX 9000 Storage CLI Reference Guide for details about these commands.
• Windows clients: Use the Windows client GUI for tasks such as mounting or unmounting file systems and registering Windows clients.

Using the Windows 9000 client GUI
The Windows 9000 client GUI is the client interface to the Fusion Manager. To open the GUI, double-click the desktop icon or select the 9000 client program from the Start menu on the client. The client program contains tabs organized by function.
NOTE: The Windows 9000 client GUI can be started only by users with Administrative privileges.
• Status: Shows the client's Fusion Manager registration status and mounted file systems, and provides access to the IAD log for troubleshooting.
• Registration: Registers the client with the Fusion M…
…n use the Administrator or the 9720 Storage username. You can also access the OA serial port using the supplied dongle from a blade. This can be useful if you accidentally misconfigure the VC networking so that you cannot access the OA through the network. You access the serial port as follows:
1. Connect the dongle to the front of one blade.
2. Connect a serial cable from the OA serial port to the serial connector on the dongle.
3. Log in to the server via the TFT keyboard/mouse/monitor.
4. Run minicom as follows:
# minicom
5. Press Ctrl-A, then P. The Comm Parameters menu is displayed.
6. Select 9600 baud.
7. Press Enter to save.
8. Press Ctrl-A, then M to reinitialize the modem. You are now connected to the serial interface of the OA.
9. Press Enter.
10. When you are finished, press Ctrl-A, then Q to exit minicom.

Accessing the OA through the service port
Each OA has a service port (the right-most Ethernet port on the OA). This allows you to use a laptop to access the OA command-line interface. See the HP BladeSystem c7000 Enclosure Setup and Installation Guide for instructions on how to connect a laptop to the service port.

Using hpacucli — Array Configuration Utility (ACU)
The hpacucli command is a command-line interface to the X9700c controllers. It can also be used to configure the E200i and P700m controllers, although HP does not recommend this. 9720 capacity blocks come preconfigured. However, the hpacucli util…
…n your system, contact HP Support.

IBRIX Filesystem Drivers loaded
ibrcud is running (pid 23325)
IBRIX IAD Server (pid 23368) running

Verify that the ibrix and ipfs services are running:
# lsmod | grep ibrix
ibrix 2323332 0 (unused)
# lsmod | grep ipfs
ipfs1 102592 0 (unused)
If either grep command returns empty, contact HP Support.

7. From the management console, verify that the new version of IBRIX software (FS, IAS) has been installed on the file serving node:
<ibrixhome>/bin/ibrix_version -l -S
8. If the upgrade was successful, fail back the file serving node:
<ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME
9. Repeat steps 1 through 8 for each remaining file serving node in the cluster. After all file serving nodes have been upgraded and failed back, complete the upgrade.

Completing the upgrade
1. From the node hosting the active management console, turn automated failover back on:
<ibrixhome>/bin/ibrix_server -m
2. Confirm that automated failover is enabled:
<ibrixhome>/bin/ibrix_server -l
In the output, the HA column should display "on".
3. Verify that all version indicators match for file serving nodes. Run the following command from the active management console:
<ibrixhome>/bin/ibrix_version -l
If there is a version mismatch, run the ibrix_ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is success…
2. From the node hosting the active management console, turn automated failover back on:
   <ibrixhome>/bin/ibrix_server -m
3. Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, HA should display "on".
4. From the node hosting the active management console, perform a manual backup of the upgraded configuration:
   <ibrixhome>/bin/ibrix_fm -B
5. Verify that all version indicators match for file serving nodes. Run the following command from the active management console:
   <ibrixhome>/bin/ibrix_version -l
   If there is a version mismatch, run the ibrix_ibrixupgrade -f script again on the affected node and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
6. Verify the health of the cluster:
   <ibrixhome>/bin/ibrix_health -l
   The output should show "Passed / on".

Troubleshooting upgrade issues

Automatic upgrade fails. Check the upgrade log file to determine the source of the failure. The log file is located in the installer directory. If it is not possible to perform the automatic upgrade, continue with the manual upgrade procedure.

ibrixupgrade hangs. The installation can hang because the RPM database is corrupted. This is caused by inconsistencies in the Red Hat Package Manager. Rebuild the RPM database using the following commands and then attempt the installation again.
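The original command listing did not survive extraction here; the following is the standard Red Hat sequence for rebuilding a corrupted RPM database, offered as an assumption rather than quoted from this guide:

# Remove the stale Berkeley DB region files left behind by the corrupted database
rm -f /var/lib/rpm/__db.*
# Rebuild the RPM database indexes, then re-run ibrixupgrade
rpm --rebuilddb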
... marking 240
Laser compliance notices 240
  English laser notice 240
  Dutch laser notice 241
  French laser notice 241
  German laser notice 241
  Italian laser notice 242
  Japanese laser notice 242
  Spanish laser notice 242
Recycling notices 243
  English recycling notice 243
  Bulgarian recycling notice 243
  Czech recycling notice 243
  Danish recycling notice 243
  Dutch recycling notice 243
  Estonian recycling notice 244
  Finnish recycling notice 244
  French recycling notice 244
  German recycling notice 244
  Greek recycling notice 244
  Hungarian recycling notice 244
  Italian recycling notice 245
  Latvian recycling notice 245
  Lithuanian recycling notice 245
  Polish recycling notice 245
  Portuguese recycling notice ...
...and then rediscover the devices.

Testing the Insight Remote Support configuration

To determine whether the traps are working properly, send a generic test trap with the following command:

snmptrap -v1 -c public <CMS IP> .1.3.6.1.4.1.232 <Managed System IP> 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.0 s "IBRIX remote support testing"

For example, if the CMS IP address is 99.2.2.2 and the IBRIX node is 99.2.2.10, enter the following:

snmptrap -v1 -c public 99.2.2.2 .1.3.6.1.4.1.232 99.2.2.10 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.0 s "IBRIX remote support testing"

Updating the Phone Home configuration

The Phone Home configuration should be synchronized after you add or remove devices in the cluster. The operation enables Phone Home on newly added devices (servers, storage, and chassis) and removes details for devices that are no longer in the cluster. On the GUI, select Cluster Configuration in the upper Navigator, select Phone Home in the lower Navigator, and click Rescan on the Phone Home Setup panel. On the CLI, run the following command:

ibrix_phonehome -s

Disabling Phone Home

When Phone Home is disabled, all Phone Home information is removed from the cluster, and hardware and software are no longer monitored. To disable Phone Home on the GUI, click Disable on the Phone Home Setup panel.
...and execute the following command:
   ibrixupgrade -f
The upgrade automatically stops services and restarts them when the process completes.
4. When the upgrade is complete, verify that the IBRIX software services are running on the node:
   /etc/init.d/ibrix_server status
   The output should be similar to the following example. If the IAD service is not running on your system, contact HP Support.
   IBRIX Filesystem Drivers loaded
   ibrcud is running.. pid 23325
   IBRIX IAD Server (pid 23368) running...
5. Execute the following commands to verify that the ibrix and ipfs services are running:
   lsmod | grep ibrix
   ibrix 2323332 0 (unused)
   lsmod | grep ipfs
   ipfs1 102592 0 (unused)
   If either grep command returns empty, contact HP Support.
6. From the management console, verify that the new version of IBRIX software (FS/IAS) has been installed on the file serving nodes:
   <ibrixhome>/bin/ibrix_version -l -S

Completing the upgrade
1. Remount all file systems:
   <ibrixhome>/bin/ibrix_mount -f fsname -m <mountpoint>
2. From the management console, turn automated failover back on:
   <ibrixhome>/bin/ibrix_server -m
3. Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, HA displays "on".
4. From the management console, perform a manual backup of the upgraded configuration:
   <ibrixhome>/bin/ibrix_fm -B
5. Verify that all version indicat...
...under the Storage Clusters node to obtain additional information.

Drive Enclosure. The Drive Enclosure panel provides detailed information about the drive enclosure. Expand the Drive Enclosure node to view information about the power supply and sub-enclosures. See "Monitoring drive enclosures for a storage cluster" (page 80) for more information.

Pool. The Pool panel provides detailed information about a pool in a storage cluster. Expand the Pool node to view information about the volumes in the pool. See "Monitoring pools for a storage cluster" (page 82) for more information.

Storage Controller. The Storage Controller panel provides detailed information about the storage controller. Expand the Storage Controller node to view information about batteries and I/O cache modules for a storage controller. See "Monitoring storage controllers for a storage cluster" (page 84) for more information.

[GUI screenshot: Navigator tree for a storage cluster, showing the Summary, Servers, Drive Enclosure, Pool, and Storage Controller nodes]

Monitoring drive enclosures for a storage cluster

Each 9730 CX has a single drive enclosure. That enclosure includes two sub-enclosures, which are shown under the Drive Enclosure node. Select one of the Sub Enclosure nodes to display information about one of the sub-enclosures.
In general, it is better to assign a user network for protocol (NFS/SMB/HTTP/FTP) traffic because the cluster network cannot host the virtual interfaces (VIFs) required for failover. HP recommends that you use a Gigabit Ethernet port or faster for user networks.

When creating user network interfaces for file serving nodes, keep in mind that nodes needing to communicate for file system coverage or for failover must be on the same network interface. Also, nodes set up as a failover pair must be connected to the same network interface.

For a highly available cluster, HP recommends that you put protocol traffic on a user network and then set up automated failover for it (see "Configuring High Availability on the cluster," page 40). This method prevents interruptions to the traffic. If the cluster interface is used for protocol traffic and that interface fails on a file serving node, any protocol clients using the failed interface to access a mounted file system will lose contact with the file system, because they have no knowledge of the cluster and cannot reroute requests to the standby for the node. A sketch of adding a user network interface follows.

Link aggregation and virtual interfaces

When creating a user network interface, you can use link aggregation to combine physical resources into a single VIF. VIFs allow you to provide many named paths within the larger physical resource, each of which can be managed and routed independently, as shown in the following diagram. See the...
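As a sketch of how a user network interface is added (the -a, -n, and -h options here are assumed from the ibrix_nic usage shown elsewhere in this guide; confirm the exact syntax in the CLI Reference Guide):

# Hypothetical example: register eth2 as a user network interface on a failover pair
ibrix_nic -a -n eth2 -h s1.hp.com,s2.hp.com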
network interfaces
  add routing table entries, 116
  bonded and virtual interfaces, 113
  defined, 112
  delete, 116
  delete routing table entries, 116
  guidelines, 35
  viewing, 116
Network Storage System
  configuration, 14
  management interfaces, 16
NIC failover, 36
NTP servers, 23

O
Onboard Administrator, access, 149

P
passwords, change GUI password, 22
Phone Home, 26
ports, open, 22
POST error messages, 150
power failure, system recovery, 101
power sources, server, 47

Q
Quick Restore DVD, 166

R
rack stability, warning, 177
recycling notices, 243
regulatory compliance
  Canadian notice, 238
  European Union notice, 238
  identification numbers, 237
  Japanese notices, 239
  Korean notices, 239
  laser, 240
  recycling notices, 243
  Taiwanese notices, 240
related documentation, 176
rolling reboot, 102
routing table entries
  add, 116
  delete, 116

S
segments
  evacuate from cluster, 109
  migrate, 107
server blades
  booting, 16
  overview, 220
server blades (9720), add, 141
servers
  configure standby, 36
  crash capture, 52
  failover, 40
  tune, 102
shut down, hardware and software, 99
SNMP event notification, 57
SNMP MIB, 59
spare parts list (9720), 228
spare parts list (9730), 212
Statistics tool, 92
  enable collection and synchronization, 92
  failover, 96
  Historical Reports GUI, 93
  install, 92
  log files, 98
  maintain configuration, 96
  processes, 97
  reports, 94
  space requirements, 95
  troubleshooting, 97
  uninstall, 98
  upgrade...
...a power failure.

CAUTION: To properly ventilate the system, you must provide at least 7.6 centimeters (3.0 inches) of clearance at the front and back of the device.

CAUTION: When replacing hot-pluggable components in an operational IBRIX 9720 Storage, allow approximately 30 seconds between removing the failed component and installing the replacement. This time is needed to ensure that configuration data about the removed component is cleared from the system registry. To minimize airflow loss, do not pause for more than a few minutes. To prevent overheating due to an empty chassis bay, use a blanking panel or leave the slightly disengaged component in the chassis until the replacement can be made.

CAUTION: Schedule physical configuration changes during periods of low or no activity. If the system is performing rebuilds, RAID migrations, array expansions, LUN expansions, or experiencing heavy I/O, avoid physical configuration changes such as adding or replacing hard drives, a controller, or any other component. For example, hot-adding or replacing a controller while under heavy I/O could cause a momentary pause, a performance decrease, or loss of access to the device while the new controller is starting up. When the controller completes the startup process, full functionality is restored.

CAUTION: Before replacing a hot-pluggable component, ensure that steps have been taken to prevent loss of data.
Preparing for the upgrade 190
Saving the node configuration 191
... 191
Restoring the node configuration 191
Completing the upgrade 191
Troubleshooting upgrade issues 192
  Automatic upgrade 192
  Manual upgrade 193
Upgrading the IBRIX software to the 5.5 release 193
  Automatic upgrade 194
  Manual upgrade 194
  Standard upgrade for clusters with a dedicated Management Server machine or blade 195
    Standard online upgrade 195
    Standard offline upgrade 196
  Agile upgrade for clusters with an agile management console configuration 198
    Agile online upgrade 199
    Agile offline upgrade 202
  Troubleshooting upgrade issues 205
B IBRIX 9730 component and cabling diagrams 206
  Back view of the main rack 206
  Back view of the expansion rack 207
  IBRIX 9730 CX I/O modules and SAS port connectors 207
  IBRIX 9730 CX 1 connections to the SAS switches 208
  IBRIX 9730 CX 2 connections to the SAS switches 209
  IBRIX 9730 CX 3 connections to the SAS switches 210
  IBRIX 9730 CX 7 connections to the SAS switches in the expansion rack 211
...download the software on the HP StoreAll Download Drivers and Software web page.

Use a DVD
1. Burn the ISO image to a DVD.
2. Insert the Quick Restore DVD into a USB DVD drive cabled to the Onboard Administrator, or to the dongle connecting the drive to the front of the blade.
   IMPORTANT: Use an external USB drive that has external power; do not rely on the USB bus for power to drive the device.
3. Restart the blade to boot from the DVD.
4. When the HP Storage screen appears, enter "qr" to install the software.

Use a USB key
1. Copy the ISO to a Linux system.
2. Insert a USB key into the Linux system.
3. Execute cat /proc/partitions to find the USB device partition, which is displayed as /dev/sdX. For example:
   cat /proc/partitions
   major minor  #blocks  name
   8     128    15633408 sdi
4. Execute the following dd command to make the USB key the QR installer:
   dd if=<ISO file name, with path> of=/dev/sdi oflag=direct bs=1M
   For example:
   dd if=X9000-QRDVD-6.2.96-1.x86_64.iso of=/dev/sdi oflag=direct bs=1M
   4491+0 records in
   4491+0 records out
   4709154816 bytes (4.7 GB) copied, 957.784 seconds, 4.9 MB/s
5. Insert the USB key into the blade.
6. Boot the blade from the USB key.
7. When the HP Storage screen appears, enter "qr" to install the software.

Preparing for the recovery

If a NIC monitor is configured on the user network, remove the monitor. To determine if NIC monitoring is configured, run the following command:
   ibrix_nic...
[GUI screenshot: storage cluster Summary tree showing the Servers, Storage Cluster, Enclosure, Power Supply, and Sub Enclosure nodes]

Expand the Drive Enclosure node to provide additional information about the power supply and sub-enclosures.

Table 3 Details provided for the drive enclosure
Power Supply    See "Monitoring the power supply for a storage cluster" (page 81)
Sub Enclosure   See "Monitoring sub-enclosures" (page 81)

Monitoring the power supply for a storage cluster

Each drive enclosure also has power supplies. Select the Power Supply node to view the following information for each power supply in the drive enclosure:
• Status
• Type
• Name
• UUID

The Power Supply panel, displayed in the following image, provides information about four power supplies in an enclosure.

Status  Type         Name            UUID
OK      powerSupply  Power Supply 1  USE134ELDK_PS_1
OK      powerSupply  Power Supply 2  USE134ELDK_PS_2
OK      powerSupply  Power Supply 3  USE134ELDK_PS_3
OK      powerSupply  Power Supply 4  USE134ELDK_PS_4

Monitoring sub-enclosures

Expand the Sub Enclosure node to obtain information about the following components for each sub-enclosure:
• Drive. The Drive panel provides the following information about the drives in a sub-enclosure:
  - Status
  - Volume Name
  - UUID
  - Serial Number
  - Model
  - Firmware Version
  - Location. This column displays where the drive is located.
...hazardous radiation may be emitted. To avoid the release of hazardous radiation, observe the following points: Do not attempt to open the cover of the laser module; there are no user-serviceable components inside. Operate the laser device only in accordance with the instructions and information in this document. Have the device repaired only by an HP service partner.

Italian laser notice

WARNING: This device may contain a laser classified as a Class 1 laser product in accordance with US FDA regulations and IEC 60825-1. This product does not emit hazardous laser radiation. The use of controls or adjustments, or the performance of procedures other than those specified in this documentation or in the product installation guide, may result in exposure to hazardous radiation. To reduce the risk of exposure to hazardous radiation: Do not attempt to open the laser module enclosure; there are no user-serviceable components inside. Do not perform control, adjustment, or other operations on a laser device except those specified in these instructions. Entrust repairs of the unit exclusively to HP authorized service technicians.

Japanese laser notice
...cannot access the fabric, replace it on the affected server blades and run exds_stdiag again.

X9700c enclosure front panel fault ID LED is amber

If the X9700c enclosure fault ID LED is amber, check to see if the power supplies and controllers are amber. If they are not, wait until a suitable time and power cycle the capacity block; in the meantime, the enclosure fault LED can be ignored. If the power supplies and controllers are amber, see the HP 9720 Extreme Data Storage System Controller User Guide for troubleshooting steps.

Replacement disk drive LED is not illuminated green

When a disk drive is replaced and the LUN is rebuilt, the online activity LED on the replacement disk drive might not be illuminated green. However, activity on the disk will cause the online activity LED to flicker green. Note that a disk drive could be in use even if the online activity LED is not illuminated green.

IMPORTANT: Do not remove a disk drive unless the fault UID LED is amber. See the HP 9720 Storage Controller User Guide for more information about the LED descriptions.

X9700cx GSI LED is amber

If the global service indicator (GSI) light on the front panel of the hard drive drawer is lit amber, there is a problem with one of the enclosure components, such as a power supply, fan, or I/O module. Occasionally the GSI light goes amber even though the power supply, fan, and I/O module components are lit green. In this situation, try swapping out each component
...not migrated: root 0

2012-03-13 11:57:35:0332880 INFO [1090169152] Segment 3 orphan inodes: 0
2012-03-13 11:57:35:0332886 INFO [1090169152] segment 3 chunk inode 3099CC002:8E2124C4 (poid 3099CC002:8E2124C4), primary 807F5C010:36B5072B (poid 807F5C010:36B5072B)
2012-03-13 11:57:35:0332894 INFO [1090169152] segment 3 chunk inode 3099AC007:8E2125A1 (poid 3099AC007:8E2125A1), primary 60A1D8024:42966361 (poid 60A1D8024:42966361)
2012-03-13 11:57:35:0332901 INFO [1090169152] segment 3 chunk inode 3015A4031:C34A99FA (poid 3015A4031:C34A99FA), primary 40830415E:7793564B (poid 40830415E:7793564B)
2012-03-13 11:57:35:0332908 INFO [1090169152] segment 3 chunk inode 3015A401B:C34A97F8 (poid 3015A401B:C34A97F8), primary 4083040D9:77935458 (poid 4083040D9:77935458)
2012-03-13 11:57:35:0332915 INFO [1090169152] segment 3 chunk inode 3015A4021:C34A994C (poid 3015A4021:C34A994C), primary 4083040FF:7793558E (poid 4083040FF:7793558E)

Use the inum2name utility to translate the primary inode ID into the file name.

Removing a node from the cluster

Use the following procedure to remove a node from the cluster:
1. If the node is hosting a passive Fusion Manager, go to step 2. If the node is hosting the active Fusion Manager, first move the active role off the node by placing this instance into nofmfailover mode:
   ibrix_fm -m nofmfailover
2. From the node hosting the active Fusion Manager, unregister the node to be removed:
   ibrix_fm -u SERVER_NAME
The escalate command has several options; however, you can usually run it without options. The -h option displays the available options. It is normal for the escalate command to take a long time (over 20 minutes). When the escalate tool finishes, it generates a report and stores it in a file such as exds_glory1_escalate.tgz.gz. Copy this file to another system and send it to HP Services.

Useful utilities and processes

exds_stdiag utility

The exds_stdiag utility probes the SAS storage infrastructure attached to an IBRIX 9720 Storage. The utility runs on a single server. Because the entire SAS fabric is connected together, exds_stdiag can access all pieces of storage data from the server where it runs. Having probed the SAS fabric, the exds_stdiag utility performs a number of checks, including:
• Checks that there is more than one path to every disk and LUN.
• Checks that devices are in the same order through each path. This detects cabling issues, for example, reversed cables.
• Checks for missing or bad disks.
• Checks for broken logical disks (RAID sets).
• Checks firmware revisions.
• Reports failed batteries.

The exds_stdiag utility prints a report showing a summary of the storage layout, called the map. It then analyzes the map and prints information about each check as it is performed. Any line starting with the asterisk character (*) indicates a problem. The exds_stdiag utility does not access the utility file system, so it can be run even if storage problems prevent...
...16:39:40 ... trying to connect
4002: User joe: md5 mode login failed

ALERT events. Indicates that an NDMP action has failed. For example:
1102: Cannot start the session_monitor daemon, ndmpd exiting
7009: Level 6 backup of /mnt/shares/accounts1 failed (writing eod header error)
8001: Restore: Failed to read data stream signature

You can configure the system to send email or SNMP notifications when these types of events occur.

7 Creating host groups for 9000 clients

A host group is a named set of 9000 clients. Host groups provide a convenient way to centrally manage clients. You can put different sets of clients into host groups and then perform the following operations on all members of the group:
• Create and delete mount points
• Mount file systems
• Prefer a network interface
• Tune host parameters
• Set allocation policies

Host groups are optional. If you do not choose to set them up, you can mount file systems on clients and tune host settings and allocation policies at an individual level.

How host groups work

In the simplest case, the host groups functionality allows you to perform an allowed operation on all 9000 clients by executing a command on the default clients host group with the CLI or the GUI (see the sketch that follows). The clients host group includes all 9000 clients configured in the cluster.

NOTE: The command intention is stored on the Fusion Manager until the next time the clients contact the Fusion Manager.
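As an illustration of the idea, a single command aimed at the clients host group replaces per-client commands. The -g option shown here is an assumption patterned on the host group options used by other commands in this guide; check the CLI Reference Guide for the exact syntax:

# Hypothetical example: mount file system ibfs on every 9000 client in the
# built-in "clients" host group instead of mounting on each client individually
ibrix_mount -f ibfs -g clients -m /mnt/ibfs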
...Enterprise NAS product implementation to validate that prerequisites have been met. Validate that your file system performance, availability, and manageability requirements have not changed since the service planning phase. Finalize the HP Enterprise NAS product implementation plan and software configuration. Implement the documented and agreed-upon configuration based on the information you provided on the pre-delivery checklist. Document configuration details.

Additional configuration steps

When your system is up and running, you can continue configuring the cluster and file systems. The Management Console GUI and CLI are used to perform most operations. Some features described here may be configured for you as part of the system installation.

Cluster. Configure the following as needed:
• Firewall ports. See "Configuring ports for a firewall" (page 22).
• HP Insight Remote Support and Phone Home. See "Configuring HP Insight Remote Support on IBRIX 9000 systems" (page 24).
• Virtual interfaces for client access. See "Configuring virtual interfaces for client access" (page 35).
• Cluster event notification through email or SNMP. See "Configuring cluster event notification" (page 55).
• Fusion Manager backups. See "Backing up the Fusion Manager configuration" (page 61).
• NDMP backups. See "Using NDMP backup applications" (page 61).
• Statistics tool. See "Using the Statistics tool" (page 92).
• Ibrix Collect. See "Collecting information for HP Support with Ibrix Collect" (page 145).
...to function properly. If your servers have an earlier version of the iLO2 firmware, run the CP014256.scexe script as described in the following steps:
1. Mount the ISO image and copy the entire directory structure to the /local/ibrix directory. The following is an example of the mount command:
   mount -o loop /local/pkg/ibrix/pkgfull-FS-6.2.374-IAS-6.2.374.x86_64.iso /mnt
2. Execute the firmware binary at the following location:
   /local/ibrix/distrib/firmware/CP014256.scexe

Table 4 Prerequisites checklist for all upgrades (continued)

Step  Description (check off each step when completed)
7     Make sure IBRIX is running the latest firmware. For information on how to find out the version of firmware that IBRIX is running, see the Administrator Guide for your release.
8     If your FSN network bonded interfaces are currently configured for mode 6, configure them for mode 4 bonding (LACP). Make sure your network administrator reconfigures the network switch for LACP support on all affected ports. Mode 6 has been found to cause ARP storms, which create unreliable intra-cluster communication on the HP IBRIX 9720/9730 platform. Mode 4 has also been found to outperform mode 6. This finding has resulted in changing the recommendation from mode 6 to mode 4.
9     Verify that all file serving nodes can see and access every segment (logical volume) that the file serving node is configured to...
...product with normal household waste. Respect human health and the environment by handing your discarded equipment over to a collection point designated for the recycling of waste electrical and electronic equipment. For further information, contact your household waste disposal service.

Latvian recycling notice

Disposal of waste equipment by users in private households in the European Union: This symbol indicates that the product must not be disposed of together with other household waste. To protect human health and the environment, take obsolete, unused equipment to a collection point for waste electrical and electronic equipment. For more information, contact your household waste disposal service.

Lithuanian recycling notice

Rules for disposing of used equipment by users in private households in the European Union: This symbol indicates that the device must not be disposed of together with other household waste. You must care for human health and environmental protection by handing used equipment over for recycling at a collection point for used electrical and electronic devices. For more information, contact your household waste disposal service.

Polish recycling notice

Disposal of equipment used by users in private households in the countries of the European Union: This symbol means that the product must not be thrown away with...
The I/O module cannot operate with mixed versions of firmware. Plan for system downtime before inserting a new X9700cx I/O module.
1. Verify that the SAS cables are connected to the correct controller and I/O module. The following diagram shows the correct wiring of the SAS cables:
   1. X9700c
   2. X9700cx primary I/O module (drawer 2)
   3. X9700cx secondary I/O module (drawer 2)
   4. X9700cx primary I/O module (drawer 1)
   5. X9700cx secondary I/O module (drawer 1)
   As indicated in the figure above, X9700c controller 1 (left) is connected to the primary (top) X9700cx I/O modules, and controller 2 (right) is connected to the secondary (bottom) I/O modules. If possible, trace one of the SAS cables to validate that the system is wired correctly.
2. Check the seven-segment display and note the following as it applies to your situation:
   • If the seven-segment display shows "on", then both X9700c controllers are operational.
   • If the seven-segment display shows "on" but there are path errors as described earlier in this document, then the problem could be with the SAS cables connecting the X9700c controller to the SAS switch in the blade chassis. Replace the SAS cable and run the exds_stdiag command, which should report two controllers. If it does not, try connecting the SAS cable to a different port of the SAS switch.
   • If the seven-segment display does not show "on" and instead shows an alphanumeric code, the number represent...
Back view of an X9700cx
1. Power supply
2. Primary I/O module (drawer 2)
3. Primary I/O module (drawer 1)
4. Out SAS port
5. In SAS port
6. Secondary I/O module (drawer 1)
7. Secondary I/O module (drawer 2)
8. Fan

Cabling diagrams

Capacity block cabling: base and expansion cabinets

A capacity block is comprised of the X9700c and X9700cx.

CAUTION: Correct cabling of the capacity block is critical for proper IBRIX 9720 Storage operation.

[Cabling diagram callouts:]
1. X9700c
2. X9700cx primary I/O module (drawer 2)
3. X9700cx secondary I/O module (drawer 2)
4. X9700cx primary I/O module (drawer 1)
5. X9700cx secondary I/O module (drawer 1)

Virtual Connect Flex-10 Ethernet module cabling: base cabinet

[Cabling diagram callouts, in part:]
• Site network
• Available uplink port
• Management switch 1
• Management switch 2
• Bay 1: Virtual Connect Flex-10 10 Gb Ethernet module (for connection to the site network)
• Bay 2: Virtual Connect Flex-10 10 Gb Ethernet module (for connection to the site network)
• Bay 3: SAS switch
• Bay 4: SAS switch
• Bay 5: reserved for future use
• Bay 6: reserved for future use
• Bay 7: reserved for optional components
• Bay 8: reserved for optional components
• Onboard Administrator 1
• Onboard Administra...
The following tables list spare parts (both customer replaceable and non-customer replaceable) for the IBRIX 9720 Storage components. The spare parts information is current as of the publication date of this document. For the latest spare parts information, go to http://partsurfer.hp.com.

Spare parts are categorized as follows:
• Mandatory: Parts for which customer self repair is mandatory. If you ask HP to replace these parts, you will be charged for the travel and labor costs of this service.
• Optional: Parts for which customer self repair is optional. These parts are also designed for customer self repair. If, however, you require that HP replace them for you, there may or may not be additional charges, depending on the type of warranty service designated for your product.

NOTE: Some HP parts are not designed for customer self repair. To satisfy the customer warranty, HP requires that an authorized service provider replace the part. These parts are identified as "No" in the spare parts lists.

The IBRIX 9720 Storage Base (AW548A)

Description                            Spare part number  Customer self repair
Accessories Kit                        5069-6535          Mandatory
Cable, console, D-sub9 to RJ-45        5188-3836          Mandatory
Cable, console, D-sub9 to RJ-45 (L)    5188-6699          Mandatory
Power cord, OPT 903, 3-cond, 2.3 m     8120-6805          Mandatory
SPS-Brackets, PDU                      252641-001         Optional
SPS-Rack unit, 10642 10KG2             385969-001         Mandatory
SPS-Panel, side, 10642 10KG2           385971-001         Mandatory
...households in the European Union. This symbol means that the product must not be disposed of with other household waste. Instead, you should protect human health and the environment by handing the waste equipment over to a collection point designated for the recycling of waste electrical and electronic equipment. Further information is available from your household waste disposal company.

Spanish recycling notice

Disposal of equipment no longer in use in domestic environments in the European Union: This symbol indicates that this product must not be disposed of with household waste. Instead, avoid harm to human health and the environment by taking equipment you no longer use to a collection point designated for the recycling of waste electrical and electronic equipment. For more information, contact your household waste collection service.

Swedish recycling notice

Handling of electronic waste for home users within the EU: This symbol means that you must not dispose of your product in the household waste. Instead, protect nature and the environment by handing in worn-out equipment at a designated collection point; all electrical and electronic waste then goes on to recycling. Contact your recycling company for more information.

Battery replacement notices
...hp.com/support/downloads.

Rack stability

Rack stability protects personnel and equipment.

WARNING: To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time. Racks can become unstable if more than one component is extended.

Product warranties

For information about HP product warranties, see the warranty information website:
http://www.hp.com/go/storagewarranty

Subscription service

HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.

18 Documentation feedback

HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.

A Cascading Upgrades

If you are running an IBRIX version earlier than 5.6, do incremental upgrades as described in the following table.
one at a time, checking the GSI light after each replacement.

X9700cx drive LEDs are amber after firmware is flashed

If the X9700cx drive LEDs are amber after the firmware is flashed, try power cycling the X9700cx again.

Configuring the Virtual Connect domain

Once configured, the Virtual Connect domain should not need any reconfiguration. However, if the domain is somehow lost or damaged, this section provides enough information for you to reconstruct it. The examples in this section use the Virtual Connect CLI. The system has three networks, as follows:

->show network * -output=script2
Name: man_lan
Status: OK   Smart Link: Disabled   State: Enabled   Connection Mode: Failover
Native VLAN: Disabled   Private VLAN: Disabled   Tunnel: Disabled
Preferred Speed: 1Gb   Max Speed: 1Gb
Port  Name       Status            Type         Speed  Role
1     enc0:1:X7  Linked (Active)   1G SFP-RJ45  Auto   Primary
2     enc0:2:X7  Linked (Standby)  1G SFP-RJ45  Auto   Secondary

Name: ext1
Status: Degraded   Smart Link: Enabled   State: Enabled   Connection Mode: Auto
Native VLAN: Disabled   Private VLAN: Disabled   Tunnel: Disabled
Preferred Speed: 9Gb   Max Speed: 9Gb
Port  Name       Status           Type        Speed
1     enc0:2:X1  Linked (Active)  10G CX4     Auto
2     enc0:2:X2  Not Linked       SFP-RJ45    Auto
3     enc0:2:X3  Linked           (absent)    Auto
...in the lower Navigator. When you select Chassis or Servers, the GUI displays the current Entitlements for that type of device. The following example shows Entitlements for the servers in the cluster:

IP/Hostname   Product Name        Serial Number  Product Number  Customer Entered Serial Number  Customer Entered Product Number
x9730-node1   ProLiant BL460c G7  SGH107X60H     603718-B21      SGH107X60H                      603718-B21
x9730-node2   ProLiant BL460c G7  SGH107X60K     603718-B21      SGH107X60K                      603718-B21

To configure Entitlements, select a device and click Modify to open the dialog box for that type of device. The following example shows the Server Entitlement dialog box. The customer-entered serial number and product number are used for warranty checks at HP Support.

[Server Entitlement dialog fields:]
IP/HostName: x9730-node1
Product Name: ProLiant BL460c G7
Serial Number: SGH107X60H
Product Number: 603718-B21
Customer Entered Serial Number: SGH107X60H
Customer Entered Product Number: 603718-B21
(* Required Value)

Use the following commands to entitle devices from the CLI. The commands must be run for each device present in the cluster.

Entitle a server:
ibrix_phonehome -e -h <Host Name> -b <Customer Entered Serial Number> -g <Customer Entered Product Number>

Enter the Host Name parameter exactly...
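For example, using the values shown for the first server above (the -e, -h, -b, and -g option spellings follow the command format as printed here; confirm them in the CLI Reference Guide):

# Entitle server x9730-node1 with its customer-entered serial and product numbers
ibrix_phonehome -e -h x9730-node1 -b SGH107X60H -g 603718-B21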
...or later. If you want to enable data retention for file systems created with IBRIX 6.0 or earlier, run the ibrix_reten_adm -u command as described in this section.

To enable data retention:
1. If you have a pre-6.0 file system, run the upgrade60.sh utility as described in the section on page 185.
2. Run the following command on a node that has the file system mounted:
   ibrix_reten_adm -u -f FSNAME
   In this instance, FSNAME is the name of the file system you want to upgrade for data retention features. The command enables data retention and unmounts the file system on the node.
3. After the command finishes upgrading the file system, re-mount the file system.
4. Enter the ibrix_fs command to set the file system's data retention and autocommit period to the desired values. See the HP IBRIX 9000 Storage CLI Reference Guide for additional information about the ibrix_fs command.

Troubleshooting upgrade issues

If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.

Automatic upgrade. Check the following:
• If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the IBRIX software before you execute the upgrade script.
• If the install of the new OS fails, power cycle the node. Try rebooting. If the install does not begin after the reboot, power cycle the machine and...
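Putting steps 2 and 3 together, a minimal sketch assuming a file system named ibfs mounted at /ibfs (both names are examples):

# Upgrade the file system for data retention; this unmounts it on the node
ibrix_reten_adm -u -f ibfs
# Re-mount the file system after the upgrade completes
ibrix_mount -f ibfs -m /ibfs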
...password for a power source, you must update the configuration database with the changes. The user name and password options are needed only for remotely managed power sources. Include the -s option to have the Fusion Manager skip BMC:
   ibrix_powersrc -m [-I IPADDR] [-u USERNAME] [-p PASSWORD] [-s] -h POWERSRCLIST
The following command changes the IP address for power source ps1:
   ibrix_powersrc -m -I 192.168.3.153 -h ps1

Dissociate a server from a power source. You can dissociate a file serving node from a power source by dissociating it from slot 1 (its default association) on the power source. Use the following command:
   ibrix_hostpower -d -s POWERSOURCE -h HOSTNAME

Delete a power source. To conserve storage, delete power sources that are no longer in use. If you are deleting multiple power sources, use commas to separate them:
   ibrix_powersrc -d -h POWERSRCLIST

Delete NIC monitoring. To delete NIC monitoring, use the following command:
   ibrix_nic -m -h MONHOST -D DESTHOST/IFNAME

Delete NIC standbys. To delete a standby for a NIC, use the following command:
   ibrix_nic -b -U HOSTNAME1/IFNAME1
For example, to delete the standby that was assigned to interface eth2 on file serving node s1.hp.com:
   ibrix_nic -b -U s1.hp.com/eth2

Turn off automated failover:
   ibrix_server -m -U [-h SERVERNAME]
To specify a single file serving node, include the -h SERVERNAME option.

Failing a server over manually. The server to be failed over must belong to a backup...
...version indicators match for file serving nodes. Run the following command from the management console:
   <ibrixhome>/bin/ibrix_version -l
If there is a version mismatch, run the ibrix_ibrixupgrade -f script again on the affected node and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
6. Verify the health of the cluster:
   <ibrixhome>/bin/ibrix_health -l

Agile upgrade for clusters with an agile management console configuration

Use these procedures if your cluster has an agile management console configuration. The IBRIX software 5.4.x to 5.5 upgrade can be performed either online or offline. Future releases may require offline upgrades.

NOTE: Be sure to read all instructions before starting the upgrade procedure.

Agile online upgrade

Perform the agile online upgrade in the following order:
1. File serving node hosting the active management console
2. File serving node hosting the passive management console
3. Remaining file serving nodes and 9000 clients

Upgrading the file serving nodes hosting the management console

Complete the following steps:
1. On the node hosting the active management console, force a backup of the management console configuration:
   <ibrixhome>/bin/ibrix_fm -B
   The output is stored at /usr/local/ibrix/...
...in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix.

On the node hosting the passive agile management console, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

Change to the installer directory if necessary and run the upgrade:
   ./ibrixupgrade -f
The installer upgrades both the management console software and the file serving node software on the node.

Verify the status of the management console:
   /etc/init.d/ibrix_fusionmanager status
The status command confirms whether the correct services are running. Output will be similar to the following:
   Fusion Manager Daemon (pid 18748) running...
Also run the following command, which should report that the console is passive:
   <ibrixhome>/bin/ibrix_fm -i

22. Check /usr/local/ibrix/log/fusionserver.log for errors.
23. If the upgrade was successful, fail back the node. Run the following command on the node with the active agile management console:
   <ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME
24. Verify that the agile management console software and the file serving node software are now upgraded on the two nodes hosting the agile management console:
   <ibrixhome>/bin/ibrix_version -l -S

Following...
...both ports in a monitored set of standby-paired ports fail. Because all standby pairs were identified in the configuration database, the Fusion Manager knows that failover is required only when both ports fail.

• A monitored single-port HBA fails. Because no standby has been identified for the failed port, the Fusion Manager knows to initiate failover immediately.

Discovering HBAs

You must discover HBAs before you set up HBA monitoring, when you replace an HBA, and when you add a new HBA to the cluster. Discovery adds the WWPN for the port to the configuration database:
   ibrix_hba -a [-h HOSTLIST]

Adding standby-paired HBA ports

Identifying standby-paired HBA ports to the configuration database allows the Fusion Manager to apply the following logic when they fail:
• If one port in a pair fails, do nothing. Traffic will automatically switch to the surviving port, as configured by the HBA vendor or the software.
• If both ports in a pair fail, fail over the server's segments to the standby server.

Use the following command to identify two HBA ports as a standby pair:
   <ibrixhome>/bin/ibrix_hba -b -P WWPN1:WWPN2 -h HOSTNAME
Enter the WWPN as decimal-delimited pairs of hexadecimal digits. The following command identifies port 20.00.12.34.56.78.9a.bc as the standby for port 42.00.12.34.56.78.9a.bc for the HBA on file serving node s1.hp.com:
   ibrix_hba -b -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com

Turning HBA monitoring on or off
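The body of this section is truncated in the source. As an assumption modeled on the option style of the other ibrix_hba commands (the -m and -p flags here are hypothetical; verify them in the CLI Reference Guide), enabling monitoring for a port might look like this:

# Hypothetical example: turn on monitoring for one HBA port on node s1.hp.com
ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc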
You must log out and log back in to see the drive mounted.

Troubleshooting upgrade issues

If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.

Automatic upgrade. Check the following:
• If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the IBRIX software before you execute the upgrade script.
• If the install of the new OS fails, power cycle the node. Try rebooting. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the grub boot menu.
• After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.
• If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information. To retry the copy of configuration, use the following command:
   /usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
• If the install of the new image succeeds but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old version...
...groups are restored.

• To list host tuning settings on file serving nodes, 9000 clients, and hostgroups, use the following command. Omit the -h argument to see tunings for all hosts. Omit the -n argument to see all tunings:
   ibrix_host_tune -l [-h HOSTLIST] [-n OPTIONS]
• To set the communications protocol on nodes and hostgroups, use the following command. To set the protocol on all 9000 clients, include the -g clients option:
   ibrix_host_tune -p {UDP|TCP} [-h HOSTLIST] [-g GROUPLIST]
• To set server threads on file serving nodes, hostgroups, and 9000 clients:
   ibrix_host_tune -t THREADCOUNT [-h HOSTLIST] [-g GROUPLIST]
• To set admin threads on file serving nodes, hostgroups, and 9000 clients, use this command. To set admin threads on all 9000 clients, include the -g clients option:
   ibrix_host_tune -a THREADCOUNT [-h HOSTLIST] [-g GROUPLIST]

Tuning 9000 clients locally

Linux clients. Use the ibrix_lwhost command to tune host parameters. For example, to set the communications protocol:
   ibrix_lwhost --protocol -p {tcp|udp}
To list host tuning parameters that have been changed from their defaults:
   ibrix_lwhost --list
See the ibrix_lwhost command description in the HP IBRIX 9000 Storage CLI Reference Guide for other available options.

Windows clients. Click the Tune Host tab on the Windows 9000 client GUI. Tunable parameters include the NIC to prefer (the default is the cluster interface)...
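For example, combining the formats above (the thread count and host names are illustrative values):

# Set TCP as the communications protocol on all 9000 clients
ibrix_host_tune -p TCP -g clients
# Raise the server thread count on two file serving nodes
ibrix_host_tune -t 16 -h s1.hp.com,s2.hp.com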
...use the following command:
   ibrix_collect -C -r NUMBER
2. To configure emails containing a summary of the collected information for each node to be sent automatically to your desktop after every data collection event:
   a. Select Cluster Configuration, and then select Ibrix Collect.
   b. Click Modify.
   c. Under Email Settings, enable or disable sending the cluster configuration by email by checking or unchecking the appropriate box.
   d. Fill in the remaining required fields for the cluster configuration and click Okay.

To set up email settings to send cluster configurations using the CLI, use the following command:
   ibrix_collect -C -m <Yes|No> [-s SMTP_SERVER] [-f FROM] [-t TO]

NOTE: More than one email ID can be specified for the -t option, separated by a semicolon. The From and To settings for this SMTP server are Ibrix Collect specific.

Viewing data collection information

To view the data collection history from the CLI, use the following command:
   ibrix_collect -l
To view data collection details such as date of creation, size, description, state, and initiator, use the following command:
   ibrix_collect -v -n NAME

Viewing data collection configuration information

To view data collection configuration information, use the following command:
   ibrix_collect -i

Adding/deleting commands or logs in the XML file

To add or change the logs that are collected or command...
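For example (the retention count is an illustrative value):

# Keep at most five stored collections, then list what is on the cluster
ibrix_collect -C -r 5
ibrix_collect -l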
...point. On Linux 9000 clients, run the following command:
   ibrix_lwmount -f fsname -m <mountpoint>
5. Enable HA on the file serving nodes. Run the following command on the file serving node hosting the active Fusion Manager:
   ibrix_server -m
6. On the node hosting the passive agile Fusion Manager, move the console back to passive mode:
   ibrix_fm -m passive
The IBRIX software is now available, and you can now access your file systems.

Powering file serving nodes on or off

When file serving nodes are connected to properly configured power sources, the nodes can be powered on or off, or reset, remotely. To prevent interruption of service, set up standbys for the nodes (see "Configuring High Availability on the cluster," page 40) and then manually fail them over before powering them off (see "Failing a server over manually," page 48). Remotely powering off a file serving node does not trigger failover.

To power on, power off, or reset a file serving node, use the following command:
   ibrix_server -P {on|reset|off} -h HOSTNAME

Performing a rolling reboot

The rolling reboot procedure allows you to reboot all file serving nodes in the cluster while the cluster remains online. Before beginning the procedure, ensure that each file serving node has a backup node and that IBRIX HA is enabled. See "Configuring virtual interfaces for client access" (page 35) and "Configuring High Availability on the cluster"...
...Enterprise Distribution
OSD      On-screen display
OU       Active Directory Organizational Unit
RO       Read-only access
RPC      Remote Procedure Call
RW       Read-write access
SAN      Storage area network. A network of storage devices available to one or more servers.
SAS      Serial Attached SCSI
SELinux  Security-Enhanced Linux
SFU      Microsoft Services for UNIX
SID      Secondary controller identifier number
SNMP     Simple Network Management Protocol
TCP/IP   Transmission Control Protocol/Internet Protocol
UDP      User Datagram Protocol
UID      Unit identification
USM      SNMP User Security Model
VACM     SNMP View Access Control Model
VC       HP Virtual Connect
VIF      Virtual interface
WINS     Windows Internet Naming Service
WWN      World Wide Name. A unique identifier assigned to a Fibre Channel device.
WWNN     World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN     World wide port name. A unique 64-bit address used in a FC storage network to identify each device in a FC network.
...privacy (through message encryption and decryption). Both authentication and privacy, and their passwords, are optional and will use default settings where security is less of a concern.

With users validated, the VACM determines which managed objects these users are allowed to access. The VACM includes an access scheme to control user access to managed objects; context matching to define which objects can be accessed; and MIB views, defined by subsets of OID subtree and associated bitmask entries, which define what a particular user can access in the MIB.

Steps for setting up SNMP include:
• Agent configuration (all SNMP versions)
• Trapsink configuration (all SNMP versions)
• Associating event notifications with trapsinks (all SNMP versions)
• View definition (V3 only)
• Group and user configuration (V3 only)

IBRIX software implements an SNMP agent that supports the private IBRIX software MIB. The agent can be polled and can send SNMP traps to configured trapsinks. Setting up SNMP notifications is similar to setting up email notifications: you must associate events to trapsinks and configure SNMP settings for each trapsink to enable the agent to send a trap when an event occurs.

NOTE: When Phone Home is enabled, you cannot edit or change the configuration of the IBRIX SNMP agent with ibrix_snmpagent. However, you can add trapsink IPs with ibrix_snmptrap and can associate events to the trapsink IP with ibrix_event.

Config...
/etc/init.d/ibrix_fusionmanager {start|stop|restart|status}

To start and stop processes and view process status on a file serving node, use the following command. In certain situations, a follow-up action is required after stopping, starting, or restarting a file serving node.
   /etc/init.d/ibrix_server {start|stop|restart|status}

To start and stop processes and view process status on a 9000 client, use the following command:
   /etc/init.d/ibrix_client {start|stop|restart|status}

Tuning file serving nodes and 9000 clients

Typically, HP Support sets the tuning parameters on the file serving nodes during the cluster installation, and changes should be needed only for special situations.

CAUTION: The default values for the host tuning parameters are suitable for most cluster environments. Because changing parameter values can alter file system performance, HP recommends that you exercise caution before implementing any changes, or do so only under the guidance of HP technical support.

Host tuning changes are executed immediately for file serving nodes. For 9000 clients, a tuning intention is stored in the Fusion Manager. When IBRIX software services start on a client, the client queries the Fusion Manager for the host tunings that it should use and then implements them. If IBRIX software services are already running on a client, you can force the client to query the Fusion Manager by executing ibrix_client or ibrix_lwhost -a on the client, or by rebooting the client.
...for IBRIX software 6.x to 6.2.

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:
1. Make sure you have completed all steps in the upgrade checklist (Table 4, page 122).
2. Stop all client I/O to the cluster or file systems. On the Linux client, use lsof </mountpoint> to show open files belonging to active processes.
3. Verify that all IBRIX file systems can be successfully unmounted from all FSN servers:
   ibrix_umount -f fsname

Performing the upgrade

This upgrade method is supported only for upgrades from IBRIX software 6.x to the 6.2 release. Complete the following steps:
1. To obtain the latest HP IBRIX 6.2.1 pkg_full ISO image, register to download the software on the HP StoreAll Download Drivers and Software web page.
2. Mount the ISO image and copy the entire directory structure to the /local/ibrix directory on the disk running the OS. The following is an example of the mount command:
   mount -o loop /local/pkg/ibrix/pkgfull-FS-6.2.374-IAS-6.2.374.x86_64.iso /mnt
3. Change directory to /local/ibrix on the disk running the OS, and then run chmod -R 777 on the entire directory structure.
4. Run the following upgrade script:
   ./auto_ibrixupgrade
   The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes. The Fusion Manager is in active mode on the node whe...
...or existing view, use the following format:
   ibrix_snmpview -a -v VIEWNAME [-t {include|exclude}] -o OID_SUBTREE [-m MASK_BITS]
The subtree is added in the named view. For example, to add the 9000 software private MIB to the view named hp, enter:
   ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1

Configuring groups and users

A group defines the access control policy on managed objects for one or more users. All users must belong to a group. Groups and users exist only in SNMPv3. Groups are assigned a security level, which enforces use of authentication and privacy, and specific read and write views to identify which managed objects group members can read and write.

The command to create a group assigns its SNMPv3 security level, read and write views, and context name. A context is a collection of managed objects that can be accessed by an SNMP entity. A related option (-m) determines how the context is matched. The format follows:
   ibrix_snmpgroup -c -g GROUPNAME [-s {noAuthNoPriv|authNoPriv|authPriv}] [-r READVIEW] [-w WRITEVIEW]
For example, to create the group group2 to require authorization, no encryption, and read access to the hp view, enter:
   ibrix_snmpgroup -c -g group2 -s authNoPriv -r hp

The format to create a user and add that user to a group follows:
   ibrix_snmpuser -c -n USERNAME -g GROUPNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD]
Authentication...
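For example, a sketch following the format above, with an invented user name and password:

# Create SNMPv3 user "user1" in group2, using MD5 authentication
ibrix_snmpuser -c -n user1 -g group2 -j MD5 -k authpass123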
...were newly allocated by the file system at the time of the interruption will be lost. Running ibrix_fsck in corrective mode will recover the blocks.

NOTE: The upgrade60.sh utility cannot upgrade segments in an INACTIVE state. If a node is rebooted or shuts down with an unmounted file system, the file system segments owned by that node will be in an INACTIVE state. To move the segments to ACTIVE states, mount the file system with ibrix_mount. Then unmount the file system with ibrix_umount and resume running upgrade60.sh. You can verify segment states with the Linux lvscan command.

Migrating large files

The upgrade60.sh utility does not upgrade files larger than 3.8 TB. After the upgrade is complete and the file system is mounted, migrate the file to another segment in the file system using the following command:
   ibmigrate -f filesystem -m 1 -d destination_segment file
The following example migrates file.9 from its current segment to destination segment 2:
   ibmigrate -f ibfs -m 1 -d 2 /mnt/ibrix/test_dir/dir1/file.9
After the file is migrated, you can snap the file.

Synopsis

Run the upgrade utility:
   upgrade60.sh [-v] [-n] [file_system]
The -n option lists needed conversions but does not attempt them. The -v option provides more information.

Upgrading pre-6.1.1 file systems for data retention features

Data retention was automatically enabled for file systems created with IBRIX 6.1.1
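For example, to preview the conversions needed for a hypothetical file system named ibfs without changing anything:

# List needed conversions verbosely; -n performs no changes
upgrade60.sh -v -n ibfs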
...where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the Fusion Manager is installed in passive mode on that server.
5. Upgrade Linux 9000 clients. See "Upgrading Linux 9000 clients" (page 131).
6. If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade

Complete the following steps:
1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requested later in this procedure.
2. Upgrade your firmware as described in "Upgrading firmware" (page 136).
3. Mount file systems on Linux 9000 clients.

If you have a file system created on an earlier release, you might have to make changes for snapshots and data retention, as described in the following list:
• Snapshots: Files used for snapshots must either be created on IBRIX software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).
• Data retention: Files used for data retention (including WORM and auto-commit) must be created on IBRIX software 6.1.1 or later, or the pre-6.1.1 file system containing t...
Preferred Speed / Max Speed. The ext2 network shows Degraded, with Smart Link and VLAN settings Enabled/Enabled/Auto and the remaining features Disabled.
Port  Name        Status               Type     Speed
1     enc0:1:X1   Linked (Active)      10G CX4  Auto
2     enc0:1:X2   Not Linked (absent)           Auto
3     enc0:1:X3   Not Linked (absent)           Auto
4     enc0:1:X4   Not Linked (absent)           Auto
5     enc0:1:X5   Not Linked (absent)           Auto
6     enc0:1:X6   Not Linked (absent)           Auto
There are 16 identical profiles assigned to servers. As an example, a profile called bay1 is created and assigned to enclosure device bay 1:
->show profile bay1 (output=script2)
Bay   Server                       Status    Serial Number  UUID
bay1  enc0:1 ProLiant BL460c G6   Degraded  8920RFCC       XXXXXX8920RFCC
Connection Type  Port  Network Name  Status    PXE      MAC Address      Allocated Speed
Ethernet         1     man_lan       OK        UseBIOS  Factory-Default  1
Ethernet         2     ext1          Degraded  UseBIOS  Factory-Default  9
Ethernet         3     ext2          Degraded  UseBIOS  Factory-Default  9
Ethernet         4     man_lan       OK        UseBIOS  Factory-Default  1
As a convention, the domain name is created using the system name, but any domain name can be used. The domain is given an IP address on the management network for easy access:
Domain Name = kudos_vc_domain
Checkpoint Status = Valid
Domain IP Status = Enabled
IP Address = 172.16.2.1
Subnet Mask = 255.255.248.0
Gateway =
MAC Address Type = Factory-Default
WWN Address Type = Factory-Default
Synchronizing information on file serving nodes and the configuration database
To maintain access to a file system
require high I/O bandwidth, high IOPS throughput, and scalable configurations.
Some of the key features and benefits are as follows:
• Scalable configuration. You can add servers to scale performance and add storage devices to scale capacity.
• Single namespace. All directories and files are contained in the same namespace.
• Multiple environments. Operates in both the SAN and DAS environments.
• High availability. The high-availability software protects servers.
• Tuning capability. The system can be tuned for large or small block I/O.
• Flexible configuration. Segments can be migrated dynamically for rebalancing and data tiering.
High availability and redundancy
The segmented architecture is the basis for fault resilience: loss of access to one or more segments does not render the entire file system inaccessible. Individual segments can be taken offline temporarily for maintenance operations and then returned to the file system.
To ensure continuous data access, IBRIX software provides manual and automated failover protection at various points:
• Server. A failed node is powered down and a designated standby server assumes all of its segment management duties.
• Segment. Ownership of each segment on a failed node is transferred to a designated standby server.
• Network interface. The address of a failed network interface is transferred to a standby network interface until the original network interface is operational.
Adding/deleting commands or logs in the XML file.......................................146
Troubleshooting 9720 systems...........................................................146
Escalating issues......................................................................146
Useful utilities and processes.........................................................147
exds_stdiag............................................................................147
exds_netdiag...........................................................................148
exds_netperf...........................................................................148
Accessing the Onboard Administrator....................................................149
Accessing the OA through the network...................................................149
Accessing the OA Web-based administration interface....................................149
Accessing the OA through the serial port...............................................150
Accessing the OA through the service port..............................................150
Using hpacucli (Array Configuration Utility)...........................................150
POST error messages....................................................................150
IBRIX 9730 controller error messages...................................................150
IBRIX 9720 LUN layout..................................................................153
IBRIX 9720 component monitoring........................................................153
Identifying failed I/O modules on an X9700cx chassis...................................153
Identifying the failed component.......................................................154
X9700c controller......................................................................157
Viewing software version numbers.......................................................158
Troubleshooting specific issues........................................................158
Software services......................................................................158
Windows 9000 clients...................................................................159
Mode 1 or mode 6 bonding...............................................................159
Onboard Administrator
run the ibrix_fm -i command. If the output reports the status as "quorum is not configured," your cluster does not have an agile configuration. Be sure to use the upgrade procedure corresponding to your management console configuration:
• For standard upgrades, see page 195.
• For agile upgrades, see page 198.
Online and offline upgrades
Online and offline upgrade procedures are available for both the standard and agile upgrades:
• Online upgrades. This procedure upgrades the software while file systems remain mounted. Before upgrading a file serving node, you will need to fail the node over to its backup node, allowing file system access to continue. This procedure cannot be used for major upgrades, but is appropriate for minor and maintenance upgrades.
• Offline upgrades. This procedure requires that you first unmount file systems and stop services. Each file serving node may need to be rebooted if NFS or SMB causes the unmount operation to fail. You can then perform the upgrade. Clients will experience a short interruption to file system access while each file serving node is upgraded.
Standard upgrade for clusters with a dedicated Management Server machine or blade
Use these procedures if your cluster has a dedicated Management Server machine or blade hosting the management console software. The IBRIX software 5.4.x to 5.5 upgrade can be performed either online or offline. Future releases may require offline upgrades.
server blades on 9720 systems.
NOTE: This requires the use of the Quick Restore DVD. See "Recovering the 9720/9730 Storage" (page 166) for more information.
1. On the front of the blade chassis, in the next available server blade bay, remove the blank.
2. Install the server blade.
3. Install the software on the server blade. The Quick Restore DVD is used for this purpose. See "Recovering the 9720/9730 Storage" (page 166) for more information.
4. Set up failover. For more information, see the HP IBRIX 9000 Storage File System User Guide.
5. Enable high availability (automated failover) by running the following command on server 1:
ibrix_server -m
6. Discover storage on the server blade:
ibrix_pv
7. To enable health monitoring on the server blade, first unregister the vendor storage:
ibrix_vs -d -n VENDOR_STORAGE_NAME
Next, re-register the vendor storage. In the command, <sysName> is, for example, x710. The hostlist is a range inside square brackets, such as x710s[2-4]:
ibrix_vs -r -n <sysName> -t exds -I 172.16.1.1 -U exds -P password -h <hostlist>
8. If you made any other customizations to other servers, you may need to apply them to the newly installed server.
15 Troubleshooting
Collecting information for HP Support with Ibrix Collect
Ibrix Collect is a log collection utility that allows you to collect relevant information for diagnosing
steps:
1. Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
2. Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager, and repeat it for each node in the cluster:
ssh <server_name>
3. Note any custom tuning parameters, such as file system mount options. When the upgrade is complete, you can reapply the parameters.
4. Ensure that no active tasks are running. Stop any active Remote Replication, data tiering, or Rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again.
5. The 6.1 release requires that nodes hosting the agile management console be registered on the cluster network. Run the following command to verify that nodes hosting the agile Fusion Manager have IP addresses on the cluster network:
ibrix_fm -f
If a node is configured on the user network, see "Node is not registered with the cluster network" (page 133) for a workaround.
6. Stop all client I/O to the cluster or file systems. On a Linux client, use lsof <mountpoint> to show open files belonging to active processes.
7. On all nodes hosting the passive Fusion Manager, place the Fusion Manager into maintenance mode:
<ibrixhome>/bin/ibrix_fm -m nofmfailover
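The checks in this list lend themselves to a quick scripted pass before you begin. The following is a minimal sketch, not part of the product; the node list is a placeholder you would adjust for your cluster:

    #!/bin/bash
    # Hypothetical pre-upgrade sanity pass (run on the node hosting the active Fusion Manager)
    NODES="node1 node2 node3"        # placeholder: your file serving nodes
    for n in $NODES; do
        # ssh shared keys: each hop must succeed without a password prompt
        ssh -o BatchMode=yes "$n" true || echo "WARNING: passwordless ssh to $n failed"
    done
    ibrix_health -l                   # overall node health
    ibrix_task -l                     # must show no active tasks before upgrading
    ibrix_fm -f                       # FM nodes must be registered on the cluster network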
IBRIX installation on this node, the installer is in /root/ibrix.
2. Expand the distribution tarball, or mount the distribution DVD, in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
3. Change to the installer directory, if necessary, and execute the following command:
./ibrixupgrade
The upgrade automatically stops services and restarts them when the process is complete.
4. When the upgrade is complete, verify that the IBRIX software services are running on the node:
/etc/init.d/ibrix_server status
The output should be similar to the following example. If the IAD service is not running on your system, contact HP Support.
IBRIX Filesystem Drivers loaded
ibrcud is running (pid 23325)
IBRIX IAD Server (pid 23368) running
5. Execute the following commands to verify that the ibrix and ipfs services are running:
lsmod | grep ibrix
ibrix 2323332 0 (unused)
lsmod | grep ipfs
ipfs1 102592 0 (unused)
If either grep command returns empty, contact HP Support.
6. From the active management console node, verify that the new version of IBRIX software is installed on the file serving nodes:
<ibrixhome>/bin/ibrix_version -l -S
Completing the upgrade
1. Remount the IBRIX file systems:
<ibrixhome>/bin/ibrix_mount -f <fsname> -m <mountpoint>
2. From the node hosting the active management console
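If you have several file serving nodes, the per-node verification in steps 4 and 5 above can be scripted. A sketch only, assuming passwordless ssh and a placeholder node list:

    #!/bin/bash
    # Hypothetical post-upgrade verification across nodes (node list is a placeholder)
    for n in node1 node2 node3; do
        echo "== $n =="
        ssh "$n" '/etc/init.d/ibrix_server status'           # IAD and drivers must be running
        ssh "$n" 'lsmod | grep -c ibrix' || echo "ibrix module missing on $n"
        ssh "$n" 'lsmod | grep -c ipfs'  || echo "ipfs module missing on $n"
    done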
and describes details about the checks performed for each High Availability feature. By default, the report includes details only about checks that received a Failed or a Warning result. You can expand the report to include details about checks that received a Passed result.
Viewing a summary report
Use the ibrix_haconfig -l command to see a summary of all file serving nodes. To check specific file serving nodes, include the -h HOSTLIST argument. To check standbys, include the -b argument. To view results only for file serving nodes that failed a check, include the -f argument.
ibrix_haconfig -l [-h HOSTLIST] [-f] [-b]
For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com:
ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com
Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED
xs02.hp.com  FAILED            PASSED         FAILED          FAILED         FAILED          WARNED        WARNED
Viewing a detailed report
Execute the ibrix_haconfig -i command to view the detailed report:
ibrix_haconfig -i [-h HOSTLIST] [-f] [-b] [-s] [-v 1]
The -h HOSTLIST option lists the nodes to check. To also check standbys, include the -b option. To view results only for file serving nodes that failed a check, include the -f argument. The -s option expands the report to include information about
is enabled and data can be obtained from the nodes without prompting for a password, run the following command:
/usr/local/ibrix/stats/bin/stmanage testpull
Other conditions
• Data is not collected. If data is not being gathered in the common directory for the Statistics Manager (/usr/local/statstool/histstats by default), restart the Statistics tool processes on all nodes. See "Controlling Statistics tool processes" (page 97).
• Installation issues. Check /tmp/stats-install.log and try to fix the condition, or send /tmp/stats-install.log to HP Support.
• Missing reports for file serving nodes. If reports are missing on the Stats tool web page, check the following:
  ◦ Determine whether collection is enabled for the particular file serving node. If not, see "Enabling collection and synchronization" (page 92).
  ◦ Check for time synchronization. All servers in the cluster should have the same date, time, and time zone to allow proper collection and viewing of reports.
Log files
See /usr/local/ibrix/log/statstool/stats.log for detailed logging for the Statistics tool. The information includes detailed exceptions and traceback messages. The logs are rolled over at midnight every day, and only seven days of statistics logs are retained. The default /var/log/messages log file also includes logging for the Statistics tool, but the messages are short.
Uninstalling the Statistics tool
on your system:
• Hardware passwords. See the documentation for the specific hardware for more information.
• Root password. Use the passwd(8) command on each server.
• IBRIX software user password. This password is created during installation and is used to log in to the GUI. The default is ibrix. You can change the password using the Linux passwd command:
passwd ibrix
You will be prompted to enter the new password.
Configuring ports for a firewall
IMPORTANT: To avoid unintended consequences, HP recommends that you configure the firewall during scheduled maintenance times.
When configuring a firewall, you should be aware of the following:
• SELinux should be disabled.
• By default, NFS uses random port numbers for operations such as mounting and locking. These ports must be fixed so that they can be listed as exceptions in a firewall configuration file. For example, you will need to lock specific ports for rpc.statd, rpc.lockd, rpc.mountd, and rpc.quotad.
• It is best to allow all ICMP types on all networks; however, you can limit ICMP to types 0, 3, 8, and 11 if necessary.
Be sure to open the ports listed in the following table. (A sketch of matching firewall rules follows the table.)
Port                Description
22/tcp              SSH
9022/tcp            SSH for Onboard Administrator (9720/9730 blades only)
123/tcp, 123/udp    NTP
5353/udp            Multicast DNS, 224.0.0.251
12865/tcp           netperf tool
80/tcp, 443/tcp     Fusion Manager to file serving nodes
5432/tcp            Fusion Manager and
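On a stock RHEL-style node, the table above translates into iptables rules along the following lines. This is a minimal sketch only; it assumes iptables is in use, covers just the ports shown above, and would need to be merged with your site's existing policy:

    # Hypothetical iptables fragment for the documented ports (adjust to your policy)
    iptables -A INPUT -p tcp --dport 22   -j ACCEPT                  # SSH
    iptables -A INPUT -p tcp --dport 9022 -j ACCEPT                  # SSH to Onboard Administrator
    iptables -A INPUT -p tcp --dport 123  -j ACCEPT                  # NTP
    iptables -A INPUT -p udp --dport 123  -j ACCEPT                  # NTP
    iptables -A INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT   # multicast DNS
    iptables -A INPUT -p tcp --dport 12865 -j ACCEPT                 # netperf
    iptables -A INPUT -p tcp --dport 80   -j ACCEPT                  # Fusion Manager HTTP
    iptables -A INPUT -p tcp --dport 443  -j ACCEPT                  # Fusion Manager HTTPS
    iptables -A INPUT -p tcp --dport 5432 -j ACCEPT                  # Fusion Manager (per table)
    iptables -A INPUT -p icmp --icmp-type 0  -j ACCEPT               # echo reply
    iptables -A INPUT -p icmp --icmp-type 3  -j ACCEPT               # destination unreachable
    iptables -A INPUT -p icmp --icmp-type 8  -j ACCEPT               # echo request
    iptables -A INPUT -p icmp --icmp-type 11 -j ACCEPT               # time exceeded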
that are executed during data collection, you can modify the Ibrix Collect XML files that are stored in the directory /usr/local/ibrix/ibrixcollect.
The commands executed and the logs collected during data collection are maintained in the following files under the /usr/local/ibrix/ibrixcollect directory:
• A summary .xml file with the commands pertaining to the Fusion Manager node
• A summary .xml file with the commands pertaining to the file serving nodes
• common_summary.xml, with the commands and logs common to both the Fusion Manager and file serving nodes
NOTE: These .xml files should be modified carefully. Any missing tags during modification might cause Ibrix Collect to not work properly.
Troubleshooting 9720 systems
When troubleshooting 9720 systems, take the following steps:
1. Run the exds_stdiag storage diagnostic utility.
2. Evaluate the results.
3. To report a problem to HP Support, see "Escalating issues."
Escalating issues
The 9720 Storage escalate tool produces a report on the state of the system. When you report a problem to HP technical support, you will always be asked for an escalate report, so it saves time if you include the report up front. Run the exds_escalate command as shown in the following example:
[root@glory1 ~]# exds_escalate
The escalate tool needs the root password to perform some actions. Be prepared to enter the root password when prompted. There are a few useful options
identifies the controller that has an issue. For example, C1 indicates the issue is with controller 1 (the left controller).
Press the down button beside the seven-segment display. The display now shows a two-digit code. The following table describes the codes, where n is 1 or 2 depending on the affected controller.
Code        Explanation                                                Next steps
Hn 67       Controller n is halted because there is a connectivity     Continue to the next step.
            problem with an X9700cx I/O module.
Cn 02       Controller n is halted because there is a connectivity     Continue to the next step.
            problem with an X9700cx I/O module.
Other code  The fault is in the X9700c controller; it is not in the    Re-seat the controller as described later in this
            X9700cx or the SAS cables connecting the controller        document. If the fault does not clear, report to HP
            to the I/O modules.                                        Support to obtain a replacement controller.
1. Check the SAS cables connecting the halted X9700c controller and the X9700cx I/O modules. Disconnect and re-insert the SAS cables at both ends. In particular, ensure that the SAS cable is fully inserted into the I/O module and that the bottom port on the X9700cx I/O module is being used. If there are obvious signs of damage to a cable, replace the SAS cable.
2. Re-seat the halted X9700c controller:
a. Push the controller fully into the chassis until it engages.
b. Reattach the SAS cable that connects the X9700c to the SAS switch in the c-Class blade enclosure. This is plug
were not running before the upgrade started, you must start them manually after the upgrade completes.
• If the Statistics tool was not previously installed, the IBRIX software upgrade installs the tool, but the Statistics processes are not started. For information about starting the processes, see "Controlling Statistics tool processes" (page 97).
• Configurable parameters (such as the 24-hour age retain files setting) set in the /etc/ibrix/stats.conf file before the upgrade are not retained after the upgrade.
• After the upgrade, historical data and reports are moved from the /var/lib/ibrix/histstats folder to the /usr/local/statstool/histstats folder.
• The upgrade retains the Statistics tool database, but not the reports. You can regenerate reports for the data stored before the upgrade by specifying the date range. See "Generating reports" (page 94).
Using the Historical Reports GUI
You can use the GUI to view or generate reports for the entire cluster or for a specific file serving node. To open the GUI, select Historical Reports on the GUI dashboard.
NOTE: By default, installing the Statistics tool does not start the Statistics tool processes. The GUI displays a message if the processes are not running on the active Fusion Manager. No message appears if the processes are already running on the active Fusion Manager, or if the processes are not running on any of the passive management consoles. See "Controlling Statistics tool processes" (page 97) for
of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For more information, please contact your household waste disposal service.
Bulgarian recycling notice
Czech recycling notice
Disposal of equipment by users in private households in the European Union: This symbol means that you must not dispose of this product together with other household
segment is unavailable or a server is unreachable.
• Warnings. A potentially disruptive condition where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition. Some examples are reaching a very high server CPU utilization or nearing a quota limit.
• Information. An event that changes the cluster (such as creating a segment or mounting a file system) but occurs under normal or nonthreatening conditions.
Events are written to an events table in the configuration database as they are generated. To maintain the size of the file, HP recommends that you periodically remove the oldest events. See "Removing events from the events database table" (page 88).
You can set up event notifications through email (see "Setting up email notification of cluster events," page 55) or SNMP traps (see "Setting up SNMP notifications," page 57).
Viewing events
The GUI dashboard specifies the number of events that have occurred in the last 24 hours. Click Events in the GUI Navigator to view a report of the events. You can also view events that have been reported for specific file systems or servers.
On the CLI, use the ibrix_event command to view information about cluster events. To view events by alert type, use the following command:
ibrix_event -q -e ALERT|WARN|INFO
The ibrix_event -l command displays events in a short format; event descriptions are truncated to fit on one line.
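For routine monitoring, the query form above can be wrapped in a small script, for example to surface current alerts from cron. A sketch only; the mail recipient is a placeholder:

    #!/bin/bash
    # Hypothetical cron job: mail any current ALERT events to an operator (address is an example)
    ALERTS=$(ibrix_event -q -e ALERT)
    if [ -n "$ALERTS" ]; then
        echo "$ALERTS" | mail -s "IBRIX cluster alerts" storage-ops@example.com
    fi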
select the upgrade line from the grub boot menu.
• After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.
• If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information. To retry the copy of configuration, use the following command:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
• If the install of the new image succeeds but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old version (the alternate partition):
/usr/local/ibrix/setup/boot_info -r
• If the public network interface is down and inaccessible for any node, power-cycle that node.
NOTE: Each node stores its ibrixupgrade.log file in /tmp.
Manual upgrade
Check the following:
• If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.
• If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information. To retry the copy of configuration, use the following command:
/usr/local/ibrix
the SNMP Settings page. (The screen shows the 3G SAS blade switches and blade enclosure components, the system status and location, the system contact, and the SNMP alert destinations, including the read community string "public" and the configured alert addresses and communities.)
Configuring the Virtual Connect Manager
To configure the Virtual Connect Manager on an IBRIX 9720/9730 system, complete the following steps:
1. From the Onboard Administrator, select OA IP > Interconnect Bays > HP VC Flex-10 > Management Console.
2. In the HP Virtual Connect Manager, open the SNMP Configuration tab.
3. Configure the SNMP Trap Destination:
• Enter the Destination Name and IP Address (the CMS IP).
• Select SNMPv1 as the SNMP Trap Format.
• Specify public as the Community String.
4. Select all trap categories, VCM traps, and trap severities. (On the SNMP Configuration screen, trap severities can be dragged between the available and selected lists.)
software 6.x to 6.2. Online upgrades are supported only from the IBRIX 6.x release. Upgrades from earlier IBRIX releases must use the appropriate offline upgrade procedure.
When performing an online upgrade, note the following:
• File systems remain mounted and client I/O continues during the upgrade.
• The upgrade process takes approximately 45 minutes, regardless of the number of nodes.
• The total I/O interruption per node IP is four minutes, allowing for a failover time of two minutes and a failback time of two additional minutes.
• Client I/O having a timeout of more than two minutes is supported.
Preparing for the upgrade
To prepare for the upgrade, first ensure that high availability is enabled on each node in the cluster by running the following command:
ibrix_haconfig -l
If the command displays an "Overall HA Configuration Checker Results - PASSED" status, high availability is enabled on each node in the cluster. If the command returns "Overall HA Configuration Checker Results - FAILED," complete the following list items based on the result returned for each component:
1. Make sure you have completed all steps in the upgrade checklist (Table 4, page 122).
2. If Failed was displayed for the HA Configuration or Auto Failover columns, or both, perform the steps described in the section "Configuring High Availability on the cluster" in the administrator guide for your current release.
3. If Failed was displayed
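The PASSED/FAILED gate above can be checked mechanically before proceeding. A sketch only, assuming the overall-results line appears in the command output as shown:

    #!/bin/bash
    # Hypothetical gate: abort the upgrade run unless HA checks pass cluster-wide
    if ibrix_haconfig -l | grep -q "Overall HA Configuration Checker Results.*PASSED"; then
        echo "HA configuration verified; continue with the upgrade."
    else
        echo "HA checks failed; fix the HA configuration before upgrading." >&2
        exit 1
    fi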
instance.xml file contains the following property:
<property name="currentFmName" value="ib50-86"></property>
12. From the active Fusion Manager, verify that both management consoles are in the cluster:
ibrix_fm -f
For example:
[root@x109s3 ibrix]# ibrix_fm -f
NAME     IP ADDRESS
x109s1   172.16.3.100
x109s3   172.16.3.3
Command succeeded
13. Verify that the newly installed Fusion Manager is in passive mode:
ibrix_fm -i
For example:
[root@x109s3 ibrix]# ibrix_fm -i
FusionServer: x109s3 (passive, quorum is running)
Command succeeded
14. Enable HA on the server hosting the agile Fusion Manager:
ibrix_server -m
NOTE: If iLO was not previously configured on the server, the command will fail with the following error:
com.ibrix.ias.model.BusinessException: x467s2 is not associated with any power sources
Use the following command to define the iLO parameters in the IBRIX cluster database:
ibrix_powersrc -a -t ilo -h HOSTNAME -I IPADDR -u USERNAME -p PASSWORD
See the HP IBRIX 9000 Storage Installation Guide for more information about configuring iLO.
Testing failover and failback of the agile Fusion Manager
Complete the following steps:
(The GUI Navigator shows the server, for example ib128-16s1, with its Mountpoints, NFS, and CIFS nodes, its Power and Events status, and a Health Status of DEGRADED, with the Hardware node expanded to show the Blade Enclosure and Server entries.)
Monitoring blade enclosures
To view summary information about the blade enclosures in the chassis:
1. Expand the Hardware node.
2. Select the Blade Enclosure node under the Hardware node.
The following summary information is displayed for the blade enclosure:
• Status
• Name
• UUID
• Serial number
Detailed information about the hardware components in the blade enclosure is provided by expanding the Blade Enclosure node and clicking one of the sub-nodes (Bay, Fan, OA Module, Power Supply, Shared Interconnect, or Temperature Sensor).
When you select one of the sub-nodes under the Blade Enclosure node, additional information is provided. For example, when you select the Fan node, additional information about the fans for the blade enclosure is provided in the Fan panel: each fan's status (for example, OK), type ("Active Cool 200 Fan"), name, location (fan bay), and speed in RPM.
Lockup code  Description
Hn13         Bad core reset state for HLDPLB bit
Hn14         Bad core reset state for RSTGU bit
Hn15         Bad core reset state for RDY bit
Hn16         Bad core reset state for RSTDL bit
Hn17         Bad core reset state for RSTPYN bit
Hn18         Bad core reset state for SHUTDW bit
Hn19         Core link width is invalid
Hn20         PCI-X failure
Hn21         ICL failed
Hn30         Fatal ECC error
Hn31         OS detected a fatal error
Hn32         Unhandled interrupt
Hn34         PLL failed to lock
Hn35         Unexpected interrupt
Hn36         I2C hardware failed
Hn45         POST memory test fail (LOCKUP)
Hn46         POST memory tuning fail (LOCKUP)
Hn47         POST no memory found (LOCKUP)
Hn48         POST unsupported memory (LOCKUP)
Hn49         POST invalid memory SPD data (LOCKUP)
Hn50         POST PLB bus error (LOCKUP)
Hn60         SAS chip timed out
Hn61         SAS core received invalid frame type
Hn62         SAS core received invalid address reply
Hn63         SAS core interrupt appears stuck
Hn64         SAS core appears to have faulted (LOCKUP)
Hn65         SAS core not responsive (HANG)
Hn66         SAS core killed intentionally
Hn67         SAS expander appears to have failed
Hn68         SAS core reported invalid index
Hn70         EMU thermal shutdown imminent
Hn71         EMU fan failure, thermal shutdown
IBRIX 9720 LUN layout
The LUN layout is presented here for troubleshooting purposes.
For a capacity block with 1 TB HDDs:
• 2x 1 GB LUNs. These were used by the X9100 for membership partitions and remain in the 9720 for backwards compatibility.
immediately. The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode using the following command:
ibrix_fm -m passive
NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode.
Viewing information about Fusion Managers
To view mode information, use the following command:
ibrix_fm -i
NOTE: If the Fusion Manager was not installed in an agile configuration, the output will report "FusionServer: fusion manager name not set (active, quorum is not configured)".
When a Fusion Manager is installed, it is registered in the Fusion Manager configuration. To view a list of all registered management consoles, use the following command:
ibrix_fm -l
Configuring High Availability on the cluster
IBRIX High Availability provides monitoring for servers, NICs, and HBAs.
• Server HA. Servers are configured in backup pairs, with each server in the pair acting as a backup for the other server. The servers in the backup pair must see the same storage. When a server is failed over, the ownership of its segments, and its Fusion Manager services if the server is hosting the active FM, move to the backup server.
• NIC HA. When server HA is enabled, NIC HA provides additional triggers that cause a server to fail over to its backup server. For example, you can create a user VIF such as bond0:2 to service SMB requests on a server, and then designate the backup server as a standby for bond0:2.
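The Fusion Manager mode transitions described above can be exercised from the CLI when testing failover. A sketch of the sequence; run it during a maintenance window, since it deliberately moves the active role:

    # Hypothetical manual failover exercise (run from the node hosting the active Fusion Manager)
    ibrix_fm -i                  # confirm this node is active
    ibrix_fm -m nofmfailover     # force failover; a passive FM becomes active
    ibrix_fm -i                  # on the new active node: verify the role moved
    ibrix_fm -m passive          # return the failed-over FM to the quorum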
not move segments from their physical locations in the storage system. Segment ownership is recorded on the physical segment itself, and the ownership data is part of the metadata that the Fusion Manager distributes to file serving nodes and 9000 clients so that they can locate segments.
To migrate segments on the GUI, select the file system on the Filesystems panel, select Segments from the lower Navigator, and then click Ownership Migration on the Segments panel to open the Segment Ownership Migration Wizard.
The wizard's Welcome page explains the operation: the wizard allows you to migrate segment ownership between servers in your file systems to optimize load balancing and utilization by all available servers. Segment migration transfers segment ownership, but it does not move segments from their physical locations in networked storage systems. The 9000 software already attempts to maintain proper load balancing and utilization in two ways: (1) when servers are added to the cluster, the ownership of existing segments is distributed between available servers, and (2) when storage is added, ownership of the new segments is distributed among available servers. The purpose of this wizard is to allow you to manually migrate segments in order to account for other server workload factors.
The Change Ownership dialog box reports the status of the servers in the cluster and lists
task is running, the system prevents other tasks from running on the file system. Similarly, if another task is running on the file system, the evacuation task cannot be scheduled until the first task is complete.
• The file system must be quiescent (no active I/O) while a segment is being evacuated. Running this utility while the file system is active may result in data inconsistency or loss.
To evacuate a segment, complete the following steps:
1. Identify the segment residing on the physical volume to be removed. Select Storage from the Navigator on the GUI, and note the file system and segment number on the affected physical volume.
2. Locate other segments on the file system that can accommodate the data being evacuated from the affected segment. Select the file system on the GUI, and then select Segments from the lower Navigator. If segments with adequate space are not available, add segments to the file system.
3. Evacuate the segment. Select the file system on the GUI, select Segments from the lower Navigator, and then click Rebalance/Evacuate on the Segments panel. When the Segment Rebalance and Evacuation Wizard opens, select Evacuate as the mode.
The wizard's Select Mode page guides you in starting a segment data rebalancer or evacuation task. Rebalancing segment data involves redistributing files among segments in a filesystem to balance utilization and server
c. If server1 is not the active Fusion Manager, proceed to step e to fail over server1 to server2.
d. To see which node is now the active Fusion Manager, enter the following command:
ibrix_fm -i
e. Move to your new active Fusion Manager node, and then enter the following command to perform the failover:
ibrix_server -f -p -h server1
NOTE: The -p switch in the failover operation forces a reboot of the affected node and, in turn, the flash of the following components:
• BIOS
• NIC
• Power_Mgmt_Ctlr
• SERVER_HDD
• Smart_Ctlr
• Storage_Ctlr
f. Once the FSN boots up, verify that the software reports the FSN as Up, FailedOver by entering the following command:
ibrix_server -l
g. Confirm that the recommended flash was completed successfully by entering the following command:
hpsp_fmt -fr server -o /tmp/fwrecommend.out
Verify that the Proposed Action column requires no more actions and that the Active FW Version and Qualified FW Version columns display the same values.
h. Fail back your updated server by entering the following command:
ibrix_server -f -U -h server1
The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode by using the following command:
ibrix_fm -m passive
NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode.
Repeat steps a through h for the backup server (in this example, server2).
323. te the agent s system name and that system s physical location ibrix snmpagent u v 2 w private n agenthost domain com o DevLab B3 U6 The SNMPv3 format adds an optional engine id that overrides the default value of the agent s host name The format also provides the y and z options which determine whether a v3 agent can process v1 v2 read and write requests from the management station The format is ibrix snmpagent u v 3 e engineldl p PORT r READCOMMUNITY w WRITECOMMUNITY t SYSCONTACT n SYSNAME o SYSLOCATION y yes no z yes no c s on o Configuring trapsink settings A trapsink is the host destination where agents send traps which are asynchronous notifications sent by the agent to the management station A trapsink is specified either by name or IP address IBRIX software supports multiple trapsinks you can define any number of trapsinks of any SNMP version but you can define only one trapsink per host regardless of the version At a minimum trapsink configuration requires a destination host and SNMP version All other parameters are optional and many assume the default value if no value is specified The format for creating a v1 v2 trapsink is ibrix snmptrap c h HOSTNAME v 1 2 p PORT m COMMUNITY s on off If a port is not specified the command defaults to port 162 If a community is not specified the command defaults to the community name public The s
17 Subscription service................................................................177
18 Documentation feedback..............................................................178
A Cascading Upgrades...................................................................179
Upgrading the IBRIX software to the 6.1 release........................................179
Upgrading 9720 chassis firmware........................................................179
Online upgrades for IBRIX software 6.x to 6.1..........................................180
Preparing for the upgrade..............................................................180
Performing the upgrade.................................................................180
After the upgrade......................................................................181
Offline upgrades for IBRIX software 5.6.x or 6.0.x to 6.1..............................181
Preparing for the upgrade..............................................................181
Performing the upgrade.................................................................182
After the upgrade......................................................................183
Upgrading Linux 9000 clients...........................................................184
Installing a minor kernel update on Linux clients......................................184
Upgrading Windows 9000 clients.........................................................184
Upgrading pre-6.0 file systems for software snapshots..................................185
Upgrading pre-6.1.1 file systems for data retention....................................186
Troubleshooting upgrade issues.........................................................186
Automatic upgrade......................................................................186
Manual upgrade.........................................................................187
Offline upgrade fails because firmware is out of date..................................187
Node is not registered with the cluster network........................................187
File system unmount issues.............................................................188
Moving the Fusion Manager VIF to bond1.................................................188
Upgrading the IBRIX software to the 5.6 release........................................189
Automatic upgrade......................................................................190
Manual upgrade.........................................................................190
Preparing
interface, the communications protocol (UDP or TCP), and the number of server threads to use. See the online help for the client if necessary.
Managing segments
When a file system is created, the servers accessing the file system are assigned ownership of the storage segments used for the file system. Each server is responsible for managing the segments it owns. When the cluster is expanded, the 9000 software attempts to maintain proper load balancing and utilization in the following ways:
• When servers are added, ownership of the existing segments is redistributed among the available servers.
• When storage is added, ownership of the new segments is distributed among the available servers.
Occasionally, you may need to manage the segments manually:
• Migrate segments. This operation transfers ownership of segments to other servers. For example, if a server is overloaded or unavailable, you can transfer its segments to another server that can see the same storage.
• Rebalance segments. This operation redistributes files across segments and can be used if certain segments are filling up and affecting file system performance. See "Maintaining file systems" in the HP IBRIX 9000 Storage File System User Guide for more information.
• Evacuate segments. This operation moves the data in a segment to another segment. It is typically used before removing storage from the cluster.
the segments owned by each server. In the Segment Properties section of the dialog box, select the segment whose ownership you are transferring, and click Change Owner.
The Change Ownership page shows the current server status in a grid: for each server, its state (for example, Up), CPU utilization, network I/O (MB/s), and disk I/O (MB/s). Note that this is a snapshot of server performance; you can get more detailed historical data using the Statistics tool. Based on the server data, you may choose to change segment ownership to another server that can see the same storage segment. The Segment Properties grid lists each segment with its LUN, UUID, file system, tier, size, used storage, state, and owner.
The new owner of the segment must be able to see the same storage as the original owner. The Change Segment Owner dialog box lists the servers that can see the segment you selected. Select one of these servers to be the new owner. The available servers for selection are
the clients host group, ifs2 on host group A, ifs3 on host group C, and ifs4 on host group D, in any order. Then set Tuning 1 on the clients host group and Tuning 2 on host group B. The end result is that all clients in host group B will mount ifs1 and implement Tuning 2, the clients in host group A will mount ifs2 and implement Tuning 1, and the clients in host groups C and D, respectively, will mount ifs3 and ifs4 and implement Tuning 1.
The following diagram shows an example of these settings in a host group tree.
(Figure: a host group tree rooted at the default clients host group, which carries the ifs1 mount, with child groups A (mount ifs2), B (Tuning 2), C (mount ifs3), and D (mount ifs4).)
To create one level of host groups beneath the root, simply create the new host groups; you do not need to declare that the root node is the parent. To create lower levels of host groups, declare a parent element for host groups. Do not use a host name as a group name.
To create a host group tree using the CLI (see the sketch after this list for a worked sequence):
1. Create the first level of the tree:
ibrix_hostgroup -c -g GROUPNAME
2. Create all other levels by specifying a parent for the group:
ibrix_hostgroup -c -g GROUPNAME -p PARENT
Adding an 9000 client to a host group
You can add an 9000 client to a host group or move a client to a different host group. All clients belong to the default clients host group. To add or move a host to a host group, use the ibrix_hostgroup command as follows:
ibrix_hostgroup -m -g GROUP -h MEMBER
For example, to add
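As an illustration, the tree in the example above could be built with the commands below. A sketch only; the group names mirror the example, the shape shown is one plausible reading of the diagram, and the client host name is a placeholder:

    # Hypothetical build of the example tree (flat first level under the clients root)
    ibrix_hostgroup -c -g A
    ibrix_hostgroup -c -g B
    ibrix_hostgroup -c -g C
    ibrix_hostgroup -c -g D
    # A deeper level would name its parent explicitly, for example:
    ibrix_hostgroup -c -g D1 -p D
    # Move a client into a group (host name is a placeholder):
    ibrix_hostgroup -m -g C -h client12.example.com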
the serial number of an array and asks if it should be updated. If the serial number displayed is not the array to be updated, select N for no. The command will continue to display serial numbers. When it reaches the desired array, select Y to update the firmware.
NOTE: If you reply Y for the wrong array, let the command finish normally. This can do no harm, since I/O has been suspended as described above and the I/O modules should already be at the level included in the 9720 Storage.
c. After the array has been flashed, you can exit the update utility by entering q to quit.
d. Press the power buttons to power off the affected X9700c and X9700cx.
e. Reapply power to the capacity block. Power on the X9700cx first, then the associated X9700c. The firmware update occurs during reboot, so the reboot could take longer than usual, up to 25 minutes. Wait until the seven-segment display of all X9700c enclosures returns to the "on" state before proceeding. If the seven-segment display of an X9700c has not returned to "on" after 25 minutes, power-cycle the complete capacity block again.
f. Run the exds_stdiag command to verify the firmware version. Check that the firmware is the same on both drawers (boxes) of the X9700cx. Following is an example of exds_stdiag output:
ctlr P89A40C9SW705J ExDS9100cc in 01/SGA830000M slot 1 fw 0126.2008120502 boxes 3 disks 22 luns 5
box 1 ExDS9100c sn SGA830000M fw 1.56 fans OK,OK,OK,OK temp OK power OK,OK
bo
this health check on both file serving nodes and 9000 clients.
• Determines whether information maps on the tested hosts are consistent with the configuration database. If you include the -b option, the command also checks the health of standby servers (if configured).
Health check reports
The summary report provides an overall health check result for all tested file serving nodes and 9000 clients, followed by individual results. If you include the -b option, the standby servers for all tested file serving nodes are included when the overall result is determined. The result will be one of the following:
• Passed. All tested hosts and standby servers passed every health check.
• Failed. One or more tested hosts failed a health check. The health status of standby servers is not included when this result is calculated.
• Warning. A suboptimal condition that might require your attention was found on one or more tested hosts or standby servers.
The detailed report consists of the summary report and the following additional data:
• Summary of the test results
• Host information, such as operational state, performance data, and version data
• Nondefault host tunings
• Results of the health checks
By default, the Result Information field in a detailed report provides data only for health checks that received a Failed or a Warning result. Optionally, you can expand a detailed report to provide data about checks that
throughput of the parallel test is probably limited by the client's network interface.
The test is run as follows:
• Copy the contents of /opt/hp/mxso/diags/netperf-2.1-p13 to an x86_64 client host.
• Copy the test scripts to one client from which you will be running the test. The scripts required are exds_netperf, diags_lib.bash, and nodes_lib.bash from the /opt/hp/mxso/diags/bin directory.
• Run exds_netserver -s <server_list> to start a receiver for the test on each 9720 Storage server blade, as shown in the following example:
exds_netserver -s glory[1-8]
• Read the README.txt file for instructions on building exds_netperf, and build and install exds_netperf on every client you plan to use for the test.
• On the client host, run exds_netperf in serial mode against each 9720 Storage server in turn. For example, if there are two servers whose eth2 addresses are 16.123.123.1 and 16.123.123.2, use the following command:
exds_netperf --serial --server "16.123.123.1 16.123.123.2"
• On the client host, run exds_netperf in parallel mode, as shown in the following example. In this example, hosts blue and red are the tested clients; exds_netperf itself could be run on one of these hosts or on a third host:
exds_netperf --parallel --server "16.123.123.1 16.123.123.2" --clients "red blue"
Normally, the IP addresses you use are the IP addresses of the host interfaces (eth2, eth3, and so on).
Accessing the Onboard
Customers may use them as they see fit, but HP does not recommend their use for normal data storage due to performance limitations.
• 1x 100 GB LUN. This is intended for administrative use, such as backups. Bandwidth to these disks is shared with the 1 GB LUNs above and one of the data LUNs below.
• 8x 8 TB LUNs. These are intended as the main data storage of the product. Each is supported by ten disks in a RAID6 configuration; the first LUN shares its disks with the three LUNs described above.
For capacity blocks with 2 TB HDDs:
• The 1 GB and 100 GB LUNs are the same as above.
• 16x 8 TB LUNs. These are intended as the main data storage of the product. Each pair of LUNs is supported by a set of ten disks in a RAID6 configuration. The first pair of LUNs shares its disks with the three LUNs described above.
IBRIX 9720 component monitoring
The system actively monitors the following components in the system:
• Blade chassis: power supplies, fans, networking modules, SAS switches, Onboard Administrator modules
• Blades: local hard drives, access to all 9100cc controllers
• X9700c: power supplies, fans, hard drives, 9100cc controllers, and LUN status
• X9700cx: power supplies, fans, I/O modules, and hard drives
If any of these components fail, an event is generated. Depending on how you have events configured, each event will generate an e-mail or SNMP trap. Some components may generate multiple events if they fail.
information about High Availability and failover, server tuning, VLAN tagging, segment migration and evacuation, upgrades, and SNMP.
Contents
1 Product description..................................................................12
System components......................................................................12
HP IBRIX software features.............................................................12
High availability and redundancy.......................................................13
2 Getting started......................................................................14
Setting up the IBRIX 9720/9730 Storage.................................................14
Installation steps.....................................................................14
Additional configuration steps.........................................................14
Logging in to the system...............................................................15
Using the network......................................................................15
Using the TFT keyboard/monitor.........................................................15
Using the serial link on the Onboard Administrator.....................................16
Booting the system and individual server blades........................................16
Management interfaces..................................................................16
Using the GUI..........................................................................16
Customizing the GUI....................................................................20
Adding user accounts for GUI access....................................................20
Using the CLI..........................................................................21
Starting the array management software.................................................21
9000 client interfaces.................................................................21
Changing passwords.....................................................................22
Configuring ports for a firewall.......................................................22
Configuring NTP servers................................................................23
Configuring
If the evacuation fails, HP recommends that you run phase 1 of the ibrix_fsck command in corrective mode on the segment that failed the evacuation. For more information, see "Checking and repairing file systems" in the HP IBRIX 9000 Storage File System User Guide.
The segment evacuation process fails if a segment contains chunk files bigger than 3.64 TB; you need to move these chunk files manually. The evacuation process generates a log reporting the chunk files on the segment that were not moved. The log file is saved in the management console log directory (the default is /usr/local/ibrix/log) and is named Rebalance_<jobID>_<FS_ID>.info, for example, Rebalance_29_ibfs1.info.
Run the inum2name command to identify the symbolic name of the chunk file:
# inum2name --fsname=ibfs 500000017
ibfs/sliced_dir/file3.bin
After obtaining the name of the file, use a command such as cp to move the file manually. Then run the segment evacuation process again.
The analyzer log lists the chunks that were left on segments. Following is an example of the log:
2012-03-13 11:57:35.0332834 INFO (1090169152) segment 3: not migrated chunks: 462
2012-03-13 11:57:35.0332855 INFO (1090169152) segment 3: not migrated replicas: 0
2012-03-13 11:57:35.0332864 INFO (1090169152) segment 3: not migrated files: 0
2012-03-13 11:57:35.0332870 INFO (1090169152) segment 3: not migrated directories: 0
2012-03-13 11:57:35.0332875 INFO (1090169152) Segment 3
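The name resolution step can be scripted around the log. A minimal sketch only; the file system name and inode number come from the example above and would differ on your system, and the actual copy is left to the operator as the document describes:

    #!/bin/bash
    # Hypothetical helper: resolve the chunk files reported by the evacuation log
    FS=ibfs                          # file system name from the example above
    for inum in 500000017; do        # placeholder inode numbers taken from the log
        inum2name --fsname=$FS "$inum"   # prints e.g. ibfs/sliced_dir/file3.bin
    done
    # Each reported file can then be copied (cp), the original removed, and the
    # copy renamed back, after which the evacuation task is run again.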
G Regulatory compliance notices
Regulatory compliance identification numbers
For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number. The regulatory model number is not the marketing name or model number of the product.
Product-specific information:
HP Regulatory model number:
FCC and CISPR classification:
These products contain laser components. See "Class 1 laser statement" in the "Laser compliance notices" section.
Federal Communications Commission notice
Part 15 of the Federal Communications Commission (FCC) Rules and Regulations has established Radio Frequency (RF) emission limits to provide an interference-free radio frequency spectrum. Many electronic devices, including computers, generate RF energy incidental to their intended function and are, therefore, covered by these rules. These rules place computers and related peripheral devices into two classes, A and B, depending upon their intended installation. Class A devices are those that may reasonably be expected to be installed in a business or commercial environment. Class B devices are those that may reasonably be expected to be installed in a residential environment
Cabling diagrams
SAS switch cabling: Base cabinet
NOTE: Callouts 1 through 4 indicate additional X9700c components.
1. X9700c 4
2. X9700c 3
3. X9700c 2
4. X9700c 1
5. SAS switch ports 1 through 4 in interconnect bay 3 of the c-Class Blade Enclosure. Ports 2 through 4 are reserved for additional capacity blocks.
6. SAS switch ports 5 through 8 in interconnect bay 3 of the c-Class Blade Enclosure. Reserved for expansion cabinet use.
7. SAS switch ports 1 through 4 in interconnect bay 4 of the c-Class Blade Enclosure. Ports 2 through 4 are reserved for additional capacity blocks.
8. SAS switch ports 5 through 8 in interconnect bay 4 of the c-Class Blade Enclosure. Reserved for expansion cabinet use.
SAS switch cabling: Expansion cabinet
NOTE: Callouts 1 through 4 indicate additional X9700c components.
1. X9700c 8
2. X9700c 7
3. X9700c 6
4. X9700c 5
5. SAS switch ports 1 through 4 in interconnect bay 3 of the c-Class Blade Enclosure. Used by base cabinet.
6. SAS switch ports 5 through 8 in interconnect bay 3 of the c-Class Blade Enclosure.
7. SAS switch ports 1 through 4 in interconnect bay 4 of the c-Class Blade Enclosure.
8. SAS switch ports 5 through 8 in interconnect bay 4 of the c-Class Blade Enclosure. Used by base cabinet.
E The IBRIX 9720 spare parts list
336. tput HA should display on From the node hosting the active management console perform a manual backup of the upgraded configuration ibrixhome bin ibrix fm B Verify that all version indicators match for file serving nodes Run the following command from the active management console ibrixhome bin ibrix version 1 If there is a version mismatch run the ibrix ibrixupgrade f script again on the affected node and then recheck the versions The installation is successful when all version indicators match If you followed all instructions and the version indicators do not match contact HP Support Verify the health of the cluster lt ibrixhome gt bin ibrix health 1 The output should show Passed on For an agile configuration on all nodes hosting the passive management console return the management console to passive mode ibrixhome bin ibrix fm m passive If you received a new license from HP install it as described in the Licensing chapter in this document Troubleshooting upgrade issues If the upgrade does not complete successfully check the following items For additional assistance contact HP Support Automatic upgrade Check the following If the initial execution of usr local ibrix setup upgrade fails check usr local ibrix setup upgrade log for errors It is imperative that all servers are up and running the IBRIX software before you execute the upgrade script If the install of the
337. traffic when deployed Onboard Administrator is unresponsive On systems with a flat network excessive broadcast traffic can cause the OA to be unresponsive Note the following e OA should be connected to a network with a low level of broadcast traffic Failure to follow this guideline can first manifest as timeout errors during installation can later manifest as false alerts from monitoring and in the worst case can cause the OA to hang e n rare cases the OA can become hung when it is overwhelmed by broadcast traffic This condition manifests in various errors from monitoring installation and IBRIX failover To recover proper functionality manually reseat the OA module or power cycle the C7000 To diagnose this issue check the OA s syslog for messages such as the following Feb 1 16 41 56 Kernel Network packet flooding detected Disabling network interface for 2 seconds IBRIX RPC call to host failed In var log messages on a file serving node you may see messages such as ibr process status Err RPC call to host wodao6 failed error 651 func IDE FSYNC prepacked If you see these messages persistently contact HP Services as soon as possible The messages could indicate possible data loss and can cause I O errors for applications that access IBRIX file systems Degrade server blade Power PIC After a server blade or motherboard replacement Insight Manager display on the blade chassis may show an error message ind
338. ts storage resources and networks The segments owned by the server will not be accessible if the server cannot see its storage To fail back a node from the GUI select the node on the Servers panel and then click Failback on the Summary panel On the GUI select the node on the Servers panel and then click Failback on the Summary pane On the CLI run the following command where HOSTNAME is the failed over node ibrix server f U h HOSTNAME After failing back the node check the Summary panel or run the server 1 command to determine whether the failback completed fully If the failback is not complete contact HP Support NOTE A failback might not succeed if the time period between the failover and the failback is too short and the primary server has not fully recovered HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback Use the ibrix server 1 command to verify that the primary server is up and running The status should be Up Failedover before performing the failback up HBA monitoring You can configure High Availability to initiate automated failover upon detection of a failed HBA HBA monitoring can be set up for either dual port HBAs with built in standby switching or single port HBAs whether standalone or paired for standby switching via software The IBRIX software does not play a role in vendor or software mediated HBA failover traffic moves to th
339. tus of the management console etc init d ibrix fusionmanager status The status command confirms whether the correct services are running Output will be similar to the following Fusion Manager Daemon pid 18748 running Check usr 10ocal ibrix 10g fusionserver 1og for errors Upgrade the remaining management console node Move the ibrix directory used in the previous release to ibrix old Then expand the distribution tarball or mount the distribution DVD in a directory of your choice Expanding the tarball creates a subdirectory named ibrix that contains the installer program For example if you expand the tarball in xoot the installer is in root ibrix Change to the installer directory if necessary and run the upgrade ibrixupgrade f The installer upgrades both the management console software and the file serving node software on the node On the node that was just upgraded and has its management console in maintenance mode move the management console back to passive mode ibrixhome bin ibrix fm m passive The node now resumes its normal backup operation for the active management console Upgrading the IBRIX software to the 5 5 release 203 Upgrading remaining file serving nodes Complete the following steps on the remaining file serving nodes 1 Move the installer dir gt ibrix directory used in the previous release installation to ibrix old For example if you expanded the tarball in root during the previou
340. twork interfaces and HBAs Agile management consoles The agile Fusion Manager maintains the cluster configuration and provides graphical and command line user interfaces for managing and monitoring the cluster The agile Fusion Manager is installed on all file serving nodes when the cluster is installed The Fusion Manager is active on one node and is passive on the other nodes This is called an agile Fusion Manager configuration Agile Fusion Manager modes An agile Fusion Manager can be in one of the following modes e active In this mode the Fusion Manager controls console operations All cluster administration and configuration commands must be run from the active Fusion Manager e passive In this mode the Fusion Manager monitors the health of the active Fusion Manager If the active Fusion Manager fails the a passive Fusion Manager is selected to become the active console e nofmfailover In this mode the Fusion Manager does not participate in console operations Use this mode for operations such as manual failover of the active Fusion Manager IBRIX software upgrades and server blade replacements Changing the mode Use the following command to move a Fusion Manager to passive or nofmfailover mode ibrix fm m passive nofmfailover A h lt FMLIST gt IF the Fusion Manager was previously the active console IBRIX software will select a new active console A Fusion Manager currently in active mode can be moved to either
you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 185).

Review the file /etc/hosts on every IBRIX node (file serving nodes and management nodes) to ensure that the hosts file contains two lines similar to the following:

127.0.0.1 hostname localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

In this instance, hostname is the name of the IBRIX node as returned by the hostname command. If these two lines do not exist, or they do not contain all of the information, open the /etc/hosts file with a text editor such as vi and modify the file so it contains the two lines matching the format provided in this step. For example, if the hostname command returns ss01, then the lines should appear as follows:

127.0.0.1 ss01 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

12. After the upgrade, the Fusion Manager on each server in the IBRIX cluster must be restarted manually:
1. Restart all passive Fusion Managers:
a. Determine whether the Fusion Manager is in passive mode by entering the following command:

ibrix_fm -i

b. If the command returns passive (regardless of whether failover is disabled or not), enter the following command to restart the Fusion Manager:

service ibrix_fusionmanager restart

c. Repeat steps a and b for each Fusion Manager.
2. Restart the Active
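The /etc/hosts review can be scripted as a quick check; a minimal sketch to run on each node:

    # Both greps should print a matching line; empty output means an entry is missing
    grep "^127\.0\.0\.1[[:space:]].*$(hostname)" /etc/hosts
    grep "^::1[[:space:]].*localhost6" /etc/hosts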
together with other household waste. It is the user's responsibility to protect human health and the environment by handing over the used equipment to a designated collection point for the recycling of waste electrical and electronic equipment. More information can be obtained from your local waste disposal company.

Portuguese recycling notice

Disposal of used equipment by users in private households in the European Union. This symbol indicates that you must not dispose of your product together with other household waste. Instead, you should protect human health and the environment by taking your equipment to a collection point designated for the recycling of waste electrical and electronic equipment. For more information, contact your household waste disposal service.

Romanian recycling notice

Disposal of used equipment by household users in the European Union. This symbol means that the product must not be disposed of with other household waste. Instead, you must protect human health and the environment by handing over the used equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For further information, please contact your local household waste disposal service.

Slovak recycling notice

Disposal of discarded equipment by users in
ully, perform the following steps on all servers:

1. Run the following commands:

chkconfig ibrix_server off
chkconfig ibrix_ndmp off
chkconfig ibrix_fusionmanager off

2. Reboot all servers.
3. Run the following commands to move the services back to the on state. (The commands do not start the services.)

chkconfig ibrix_server on
chkconfig ibrix_ndmp on
chkconfig ibrix_fusionmanager on

4. Unmount the file systems and continue with the upgrade procedure.

13 Licensing

This chapter describes how to view your current license terms and how to obtain and install new IBRIX software product license keys.

Viewing license terms

The IBRIX software license file is stored in the installation directory. To view the license from the GUI, select Cluster Configuration in the Navigator, and then select License. To view the license from the CLI, use the following command:

ibrix_license -l

The output reports your current node count and capacity limit. In the output, Segment Server refers to file serving nodes.

Retrieving a license key

When you purchased this product, you received a License Entitlement Certificate. You will need information from this certificate to retrieve and enter your license keys. You can use any of the following methods to request a license key:

• Obtain a license key from http://webware.hp.com.
• Use AutoPass to retrieve and install permanent license keys. Se
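The chkconfig toggling lends itself to a small loop; a minimal sketch of steps 1 through 3:

    # Disable the IBRIX services so they do not start during the reboot
    for svc in ibrix_server ibrix_ndmp ibrix_fusionmanager; do
        chkconfig "$svc" off
    done
    reboot
    # After the reboot, re-enable the services (this does not start them)
    for svc in ibrix_server ibrix_ndmp ibrix_fusionmanager; do
        chkconfig "$svc" on
    done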
Run the following command to verify that automated failover is off. In the output, the HA column should display off:

<ibrixhome>/bin/ibrix_server -l

On the active management console node, stop the NFS and SMB services on all file serving nodes to prevent NFS and SMB clients from timing out:

<ibrixhome>/bin/ibrix_server -s -t cifs -c stop
<ibrixhome>/bin/ibrix_server -s -t nfs -c stop

Verify that all likewise services are down on all file serving nodes:

ps -ef | grep likewise

Use kill -9 to kill any likewise services that are still running.

If file systems are mounted from a Windows 9000 client, unmount the file systems using the Windows client GUI. Unmount all IBRIX file systems:

<ibrixhome>/bin/ibrix_umount -f <fsname>

Saving the node configuration

Complete the following steps on each node, starting with the node hosting the active management console:

1. Run /usr/local/ibrix/setup/save_cluster_config. This script creates a .tgz file named <hostname>_cluster_config.tgz, which contains a backup of the node configuration.
2. Save the <hostname>_cluster_config.tgz file, which is located in /tmp, to the external storage media.

Performing the upgrade

Complete the following steps on each node:

1. Obtain the latest Quick Restore image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
2. Burn the ISO image to a DVD.
3. Insert the Quic
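Gathered into one place, the quiesce-and-back-up portion might look like this sketch; the file system name fs1 and the media path are illustrative, and <ibrixhome> stands for the IBRIX installation directory as elsewhere in this guide:

    # Stop SMB and NFS cluster-wide, then confirm no likewise daemons remain
    <ibrixhome>/bin/ibrix_server -s -t cifs -c stop
    <ibrixhome>/bin/ibrix_server -s -t nfs -c stop
    ps -ef | grep "[l]ikewise"      # bracketed pattern keeps grep out of its own results
    # kill -9 any PIDs still listed, unmount, then save the node configuration
    <ibrixhome>/bin/ibrix_umount -f fs1
    /usr/local/ibrix/setup/save_cluster_config
    cp /tmp/$(hostname)_cluster_config.tgz /path/to/external/media/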
corruption occurrence might occur. To recover from an Express Query Manual Intervention Failure (MIF):

1. Check the health of the cluster as described in "Monitoring cluster operations" (page 68) and clear any pending issues related to the file system <FSNAME>.
2. Check the health of the file system as described in "Monitoring cluster operations" (page 68) and clear any pending issues related to the file system <FSNAME>.
3. Clear the Express Query MIF state by entering the following command:

ibrix_archiving -C <FSNAME>

4. Monitor the Express Query recovery by entering the following command:

ibrix_archiving -l

While Express Query is recovering from the MIF, it displays the RECOVERY state. Wait for the state to return to OK or MIF. If the state returns as OK, no additional steps are required; Express Query is updating the database with all the outstanding logged file system changes since the MIF occurrence.
5. If you have a MIF condition for one or several file systems, and the cluster and file system health checks are not OK, repeat the previous steps.

If the cluster and file system health checks have an OK status but Express Query is still in a MIF condition for one or several specific file systems, this unlikely situation occurs when some data has been corrupted and cannot be recovered. To solve this situation:
a. If there is a full backup of the file system involved, do a restore.
b. If t
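The monitoring in step 4 can be polled rather than re-run by hand; a minimal sketch for a hypothetical file system fs1, assuming the state strings appear verbatim in the command output:

    # Clear the MIF state, then wait for Express Query to leave RECOVERY
    ibrix_archiving -C fs1
    while ibrix_archiving -l | grep -q RECOVERY; do
        sleep 30
    done
    ibrix_archiving -l   # final state should be OK; MIF again means data could not be recovered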
Configuring the SNMP agent

The SNMP agent is created automatically when the Fusion Manager is installed. It is initially configured as an SNMPv2 agent and is off by default.

Some SNMP parameters and the SNMP default port are the same, regardless of the SNMP version. The default agent port is 161. SYSCONTACT, SYSNAME, and SYSLOCATION are optional MIB-II agent parameters that have no default values.

NOTE: The default SNMP agent port was changed from 5061 to 161 in the IBRIX 6.1 release. This port number cannot be changed.

The -c and -s options are also common to all SNMP versions. The -c option turns the encryption of community names and passwords on or off; there is no encryption by default. The -s option toggles the agent on and off: it turns the agent on by starting a listener on the SNMP port, and turns it off by shutting down the listener. The default is off.

The format for a v1 or v2 update command follows:

ibrix_snmpagent -u -v {1|2} [-p PORT] [-r READCOMMUNITY] [-w WRITECOMMUNITY] [-t SYSCONTACT] [-n SYSNAME] [-o SYSLOCATION] [-c {yes|no}] [-s {on|off}]

The update command for SNMPv1 and v2 uses optional community names. By convention, the default READCOMMUNITY name used for read-only access and assigned to the agent is public. No default WRITECOMMUNITY name is set for read-write access, although the name private is often used. The following command updates a v2 agent with the write community name private
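The example command itself is cut off in this excerpt; a plausible reconstruction, using only the options documented above:

    # Update the v2 agent with the write community name "private"
    ibrix_snmpagent -u -v 2 -w private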
Fusion Manager service:

service ibrix_fusionmanager restart

c. Repeat steps a and b for each Fusion Manager.
2. Restart the active Fusion Manager by issuing the following commands on the active FM server:
a. Enter the following command to set all instances of the Fusion Manager to nofmfailover mode:

ibrix_fm -m nofmfailover

b. To restart the Fusion Manager, enter the following command:

service ibrix_fusionmanager restart

c. Enter the following command to set all instances of the Fusion Manager to passive mode:

ibrix_fm -m passive -A

Manual offline upgrades for IBRIX software 6.x to 6.2

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:

1. Make sure you have completed all steps in the upgrade checklist (Table 4, page 122).
2. Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager:

ssh <server_name>

Repeat this command for each node in the cluster.
3. Verify that all file serving node (FSN) servers have separate file systems mounted on the following partitions by using the df command: /, /local, /stage, and /alt.
4. Verify that all FSN servers have a minimum of 4 GB of free available storage on the /local partition by using the df command.
5. Verify that no FSN server is reporting any partition as 100% full (at least 5% free spac
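Steps 3 through 5 reduce to a couple of df invocations; a minimal sketch:

    # Confirm the expected mount points exist and review free space (at least 4 GB on /local)
    df -h / /local /stage /alt
    # Flag any partition reported as 100% full
    df -hP | awk '$5 == "100%"'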
Fusion Manager uses a virtual interface (VIF) IP address to enable failover and prevent any interruptions to file serving nodes and IBRIX clients. The existing cluster NIC IP address becomes the permanent VIF IP address.

1. Identify an unused IP address to use as the cluster NIC IP address for the currently running management console.
2. Disable high availability on the server:

ibrix_server -m -U

3. Using ssh, connect to the management console on the user network if possible.
• Edit the /etc/sysconfig/network-scripts/ifcfg-bond0 file. Change the IP address to the new, unused address, and also ensure that ONBOOT=Yes.
• If you have preferred IBRIX clients over the user (bond1) network, edit the /etc/sysconfig/network-scripts/ifcfg-bond1 file. Change the IP address to another unused, reserved address.
Run one of the following commands:

/etc/init.d/network restart
service network restart

Verify that you can ping the new local IP address.
4. Configure the agile Fusion Manager:

ibrix_fm -c <cluster_VIF_addr> -d <cluster_VIF_device> -n <cluster_VIF_netmask> -v cluster -I <local_cluster_IP_addr>

In the command, <cluster_VIF_addr> is the old cluster IP address for the original management console, and <local_cluster_IP_addr> is the new IP address you acquired. For example:

[root@x109s1 ~]# ibrix_fm -c 172.16.3.1 -d bond0:1 -n 255.255.248.0 -v cluster -I 17
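The example above is truncated in this excerpt; a completed, hypothetical invocation with placeholder addresses (not values from a real system) would follow the same shape:

    # The old cluster IP 172.16.3.1 becomes the cluster VIF on bond0:1;
    # 172.16.3.100 is the new, unused address assigned to this console's local NIC
    ibrix_fm -c 172.16.3.1 -d bond0:1 -n 255.255.248.0 -v cluster -I 172.16.3.100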
the product must be taken to a collection point that handles electrical and electronic equipment. If you have questions, contact your local waste disposal company.

Finnish recycling notice

Disposal of household waste in the European Union. This symbol means that the device must not be disposed of with other household waste. Instead, you must protect human health and the environment by delivering the decommissioned device to a recycling point for electrical and electronic waste. For more information, contact your waste disposal company.

French recycling notice

Disposal of equipment by private users in the European Union. This symbol indicates that you must not dispose of your product with household waste. It is your responsibility to protect health and the environment by taking your equipment to a collection site that recycles electrical and electronic equipment. For more information, contact your household waste disposal service.

German recycling notice

Disposal of used devices by users in private households in the EU. This symbol indicates that this product must not be disposed of with household waste. To protect health and the environment, take your used devices instead to a designated recycling point for electrical and electronic devices
serving nodes:

-s  Summary statistics
-c  CPU statistics
-m  Memory statistics
-i  I/O statistics
-n  Network statistics
-t  NFS statistics
-h  The file serving nodes to be included in the report

Sample output follows:

----- Summary -----
HOST             Status  CPU  Disk(MB/s)  Net(MB/s)
lab12-10.hp.com  Up      0    22528       616

----- IO -----
HOST             Read(MB/s)  Read(IO/s)  Read(ms/op)  Write(MB/s)  Write(IO/s)  Write(ms/op)
lab12-10.hp.com  22528       2           5            0            0            0.00

----- Net -----
HOST             In(MB/s)  In(IO/s)  Out(MB/s)  Out(IO/s)
lab12-10.hp.com  261       3         355        2

----- Mem -----
HOST             MemTotal(MB)  MemFree(MB)  SwapTotal(MB)  SwapFree(MB)
lab12-10.hp.com  1034616       703672       2031608       2031360

----- CPU -----
HOST             User  System  Nice  Idle  IoWait  Irq  SoftIrq
lab12-10.hp.com  0     0       0     0     97      1    0

----- NFS v3 -----
HOST             Null  Getattr  Setattr  Lookup  Access  Readlink  Read  Write
lab12-10.hp.com  0     0        0        0       0       0         0     0

HOST             Create  Mkdir  Symlink  Mknod  Remove  Rmdir  Rename
lab12-10.hp.com  0       0      0        0      0       0      0

HOST             Link  Readdir  Readdirplus  Fsstat  Fsinfo  Pathconf  Commit
lab12-10.hp.com  0     0        0            0       0       0         0

9 Using the Statistics tool

The Statistics tool reports historical performance data for the cluster or for an individual file serving node. You can view data for the network, the operating system, and the file systems, including data for NFS, memory, and block devices. Statistical data is transmitted
ware 5.5.x. If your system is running an earlier release, first upgrade to the 5.5 release and then upgrade to 5.6.

• The upgrade procedure upgrades the operating system to Red Hat Enterprise Linux 5.5.

IMPORTANT: Do not start new remote replication jobs while a cluster upgrade is in progress. If replication jobs were running before the upgrade started, the jobs will continue to run without problems after the upgrade completes.

The upgrade to IBRIX software 5.6 is supported only as an offline upgrade. Because it requires an upgrade of the kernel, the local disk must be reformatted. Clients will experience a short interruption to administrative and file system access while the system is upgraded.

There are two upgrade procedures available, depending on the current installation. If you have an IBRIX software 5.5 system that was installed through the QR (Quick Restore) procedure, you can use the automatic upgrade procedure. If you used an upgrade procedure to install your IBRIX software 5.5 system, you must use the manual procedure.

To determine whether your system was installed using the QR procedure, run the df command. If you see separate file systems mounted on /local, /stage, and /alt, your system was quick-restored and you can use the automated upgrade procedure. If you do not see these mount points, proceed with the manual upgrade process.

• Automatic upgrades: this process uses separate partitioned spa
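The QR check is a one-liner; a minimal sketch:

    # A quick-restored system shows all three dedicated mount points
    df -hP | grep -E '/(local|stage|alt)$'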
switch server1 with server2 in the commands. Repeat the preceding steps for each node that requires a firmware upgrade.

If you are upgrading to 6.2, you must complete the steps provided in the "After the upgrade" section for your type of upgrade, as shown in the following table:

Type of upgrade              Complete the steps in this section
Online upgrades              "After the upgrade" (page 125)
Automated offline upgrades   "After the upgrade" (page 127)
Manual offline upgrades      "After the upgrade" (page 130)

Finding additional information on FMT

You can find additional information on FMT as follows:

• Online help for FMT: to access the online help for FMT, enter the hpsp_fmt command on the file system node console.
• HP FMT User Guide: to access the HP HPSP FMT User Guide, go to the HP StoreAll Storage Manuals page at http://www.hp.com/support/IBRIXManuals.

Adding performance modules on 9730 systems

See the HP IBRIX 9730 Storage Performance Module Installation Instructions for details about installing the module on an IBRIX 9730 cluster. See the HP IBRIX 9000 Storage Installation Guide for information about installing IBRIX software on the blades in the module. These documents are located on the IBRIX manuals page: browse to http://www.hp.com/support/manuals, and in the storage section, select NAS Systems, and then select HP 9000 Storage from the IBRIX Storage Systems section.

Adding new se
with slide rails outside the rack. Extend only one component at a time; a rack could become unstable if more than one component is extended for any reason.

WARNING! Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading the AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating. Consult the electrical authority having jurisdiction over your facility wiring and installation requirements.

Device warnings and precautions

WARNING! To reduce the risk of electric shock or damage to the equipment:

• Allow the product to cool before removing covers and touching internal components.
• Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
• Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
• Disconnect power from the device by unplugging the power cord from either the electrical outlet or the device.
• Do not use conductive tools that could bridge live parts.
• Remove all watches, rings, or loose jewelry when working in hot-plug areas of an energized device.
• Install the device in a controlled-access location where only qualified personnel have access to the device.
• Power off the equipment and disconnect power to all AC power cords before re
workload.

Evacuation of segment data allows the segment to be reallocated to another file system or retired for maintenance. Evacuation removes the data entirely and rebalances it among the remaining segments. Note that the ability to rebalance data evenly depends on the average file size.

Select Rebalance Mode:

• Rebalance All: rebalance as evenly as possible within all same-tier segments.
• Rebalance Advanced: manually specify source and destination segments.
• Evacuate Advanced: manually specify source segments for evacuation and destination segments.

On the Evacuate Advanced dialog box, locate the segment to be evacuated and click Source. Then locate the segments that will receive the data from the segment and click Destination. If the file system is tiered, be sure to select destination segments on the same tier as the source segment.

Select Mode: Evacuate Advanced

Evacuate Advanced Summary: The table below shows information about segments in the file system, including their size, state, and current utilization. In this mode, data on selected source segments will be evacuated entirely and redistributed to selected destination segments as evenly as possible. Note: to maintain tier hierarchy in a tiered file system, only select segments from within the same tier.

Selected Source Segments Used Space: 0 TB
Selected Destination Segments Available Space: 0 TB
Estimated Destination Segment Utilization: 0%

Storage segments for f
box 2 ExDS9100cx sn CN8827002Z fw 1.28 fans OK OK temp OK power OK OK FAILED OK
box 3 ExDS9100cx sn CN8827002Z fw 2.03 fans OK OK temp OK power OK OK OK OK

In the above example, the array serial number (box 1) is SGA830000M. The firmware level on box 2 (the left drawer of the X9700cx) is 1.28. The firmware level on box 3 (the right drawer) is 2.03. This is unsupported: because the firmware levels are not the same, the firmware must be updated as described in step 11.

13. Mount the file systems that were unmounted in step 6, using the GUI.

Re-seating an X9700c controller

Make sure you are re-seating the correct controller. You should observe both a flashing amber LED and the seven-segment display. An H1 or C1 code indicates that controller 1 (left) is halted; an H2 or C2 code indicates that controller 2 (right) should be re-seated.

NOTE: There is no need to disconnect the SAS cables during this procedure.

To re-seat the controller:

1. Squeeze the controller thumb latch and rotate the latch handle down.
2. Pull the controller out until it has clearly disengaged; there is no need to fully remove the controller.
3. While the controller is still disengaged, ensure that the SAS cables are fully inserted.
4. Push the controller fully into the chassis so it engages. The seven-segment display shows different codes as the controller boots. After a few minutes, the seven-segment display should s
[RBSU screen: ROM-Based Setup Utility serial port selection, listing the COM ports with their IRQ and I/O ranges (COM1: IRQ4, 3F8h-3FFh; COM2: IRQ3, 2F8h-2FFh; COM3: IRQ5, 3E8h-3EFh). Enter saves the selection; Esc cancels.]

3. Highlight the BIOS Serial Console & EMS option in the main menu, and then press the Enter key. Highlight the BIOS Serial Console Port option, and then press the Enter key. Select the COM1 port, and then press the Enter key.
4. Highlight the BIOS Serial Console Baud Rate option, and then press the Enter key. Select the 115200 serial baud rate.
5. Highlight the Server Availability option in the main menu, and then press the Enter key. Highlight the ASR Timeout option, and then press the Enter key. Select 30 Minutes, and then press the Enter key.
6. To exit RBSU, press Esc until the main menu is displayed. Then, at the main menu, press F10. The server automatically restarts.

Setting up nodes for crash capture

IMPORTANT: Complete the steps in "Prerequisites for setting up the crash capture" (page 53) before starting the steps in this section.

To set up nodes for crash capture, complete the following steps:

1. Enable crash capture. Run the following command:

ibrix_host_tune -S {-h HOSTLIST | -g GROUPLIST} -o trigger_crash_on_failover=1

2. Tune the Fusion Manag
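For instance, enabling crash capture on two hypothetical nodes (the comma-separated HOSTLIST format is an assumption):

    # Trigger a crash dump on failover for nodes ss01 and ss02
    ibrix_host_tune -S -h ss01,ss02 -o trigger_crash_on_failover=1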
exactly as it is listed by the ibrix_fm -l command.

Entitle a chassis:

ibrix_phonehome -e -C <OA IP Address of the Chassis> -b <Customer Entered Serial Number> -g <Customer Entered Product Number>

NOTE: The Phone Home > Storage selection on the GUI does not apply to 9720/9730 systems.

Discovering devices on HP SIM

HP Systems Insight Manager (SIM) uses the SNMP protocol to discover and identify IBRIX systems automatically. On HP SIM, open Options > Discovery > New. Select "Discover a group of systems", and then enter the discovery name and the Fusion Manager IP address on the New Discovery dialog box.

[New Discovery dialog: choose "Discover a group of systems", enter a name such as Fusion Manager, an optional schedule, and the ping inclusion range or host name, for example 10.5.59.156.]

Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the Fusion Manager IP might be discovered as "Unknown".

[SNMP Credentials tab: enter the read community string that matches the Phone Home configuration.]

Devices are discovered as described in the following table:

Device           Discovered as
Fusion Manager
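A filled-in form of the entitlement command might look like the following; all three values are illustrative placeholders, not real identifiers:

    # Entitle the chassis whose Onboard Administrator is reachable at 10.5.59.20
    ibrix_phonehome -e -C 10.5.59.20 -b SGH123XYZ0 -g AB123A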
safety information in "Warnings and precautions" (page 233) and "Regulatory compliance notices" (page 237).

The Firmware Management Tool (FMT) is a utility that scans the IBRIX system for outdated firmware and produces a comprehensive report providing the following information:

• Device found
• Active firmware found on the discovered device
• Qualified firmware for the discovered device
• Proposed action: whether an upgrade is recommended
• Severity: how urgently an upgrade is required
• Reboot required on flash
• Device information
• Parent device ID

Components for firmware upgrades

The HP IBRIX system includes several components with upgradable firmware. The following lists the components that can be upgraded:

• Server
o BIOS
o Power_Mgmt_Ctlr
o Smart_Array_Ctlr
o NIC
o PCIeNIC
o SERVER_HDD
• Storage
o Enclosure_IO
o Enclosure_HDD

IMPORTANT: (9730 systems only) Upgrading the firmware for storage disk enclosures (Enclosure_HDD) is an OFFLINE process. Ensure that all host and array I/O is stopped prior to the update and that the file system is unmounted. See the HP IBRIX 9000 Storage File System User Guide for information on how to unmount a file system.

• Chassis
o VC_Flex-10
o 6Gb_SAS_BL_SW

Enter the following command to show which components could be flagged for flash upgrade:

hpsp_fmt -l

The fol
ype. In addition, HP recommends you use 10GbE links. The 9720 Storage uses mode 1 (active/backup) for network bonds; no other bonding mode is supported. Properly configured, this provides a fully redundant network connection to each blade: a single failure of a NIC, Virtual Connect module, uplink, or site network switch will not fail the network device. However, it is important that the site network infrastructure is properly configured for a bonded interface to operate correctly, both in terms of redundancy and performance.

Capacity blocks

A capacity block comprises an X9700c chassis containing 12 disk drives and an X9700cx JBOD enclosure containing 70 disk drives. The X9700cx enclosure actually contains two JBODs, one in each pull-out drawer (left and right drawer). Each drawer contains 35 disk drives. The serial number is the serial number of the X9700c chassis.

Every server is connected to every array using a serial attached SCSI (SAS) fabric. The following elements exist:

• Each server has a P700m SAS host bus adapter (HBA), which has two SAS ports.
• Each SAS port is connected by the server blade enclosure backplane to a SAS switch. There are two SAS switches, such that each server is connected by a redundant SAS fabric.
• Each array has two redundant controllers. Each of the controllers is connected to each SAS switch.

Within an array, the disk drives are assigned to different boxes, where box 1 is the X9700c enclosure and boxes 2 and 3
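In Linux terms, mode 1 bonding is declared in the interface configuration; a generic sketch of the relevant file (addresses are illustrative and not taken from a shipped system):

    # /etc/sysconfig/network-scripts/ifcfg-bond0 - active/backup bond
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=172.16.1.10
    NETMASK=255.255.248.0
    BONDING_OPTS="mode=1 miimon=100"   # mode 1 = active-backup; miimon polls link state every 100 ms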
ystems remain mounted and client I/O continues during the upgrade. The upgrade process takes approximately 45 minutes, regardless of the number of nodes. The total I/O interruption per node IP is four minutes, allowing for a failover time of two minutes and a failback time of two additional minutes. Client I/O having a timeout of more than two minutes is supported.

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:

1. Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
2. Ensure that High Availability is enabled on each node in the cluster.
3. Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager:

ssh <server_name>

Repeat this command for each node in the cluster, and verify that you are not prompted for a password at any time.
4. Ensure that no active tasks are running. Stop any active Remote Replication, data tiering, or Rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again.
5. The 6.1 release requires that nodes hosting the agile management console be registered on the cluster network. Run the following command to verify that nodes hosting the agile Fusion Manager have IP addresses on the cluster network:

ibrix_fm -f

If a node
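Step 3 can be looped over the cluster; a minimal sketch with hypothetical node names (BatchMode makes a missing key fail immediately instead of prompting):

    # Each ssh should print the remote hostname without asking for a password
    for node in ss01 ss02 ss03 ss04; do
        ssh -o BatchMode=yes "$node" hostname || echo "shared keys NOT set up for $node"
    done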
Do not short-circuit the battery terminals or dispose of batteries in fire or water. Replace batteries only with HP spare parts intended for this product. Batteries, battery packs, and accumulators must not be disposed of with household waste. To have them recycled or properly disposed of, use the public collection systems or return them to HP, your authorized HP partner, or their agents. Contact an authorized reseller or an authorized service provider to find out how to replace and dispose of your batteries.

German battery notice

Notes on batteries and rechargeable batteries

CAUTION: This product may contain a battery or a rechargeable battery pack. Do not attempt to recharge batteries outside the device. Protect batteries and rechargeable batteries from moisture and from temperatures above 60°C. Do not misuse or disassemble batteries, and avoid mechanical damage of any kind. Avoid short circuits, and do not expose batteries to water or fire. Replace batteries only with the spare parts designated by HP. Batteries and rechargeable batteries must not be disposed of with normal household waste. To have them recycled or disposed of as hazardous waste, use the public collection points or return them to
