        HP StoreAll 9300/9320 Storage Administrator Guide
The following table lists examples of events included in each category.

Event Type  Trigger Point                                          Name
ALERT       User fails to log into GUI                             login failure
            File system is unmounted                               filesystem unmounted
            File serving node is down/restarted                    server status down
            File serving node terminated unexpectedly              server unreachable
WARN        User migrates segment using GUI                        segment migrated
INFO        User successfully logs in to GUI                       login success
            File system is created                                 filesystem cmd
            File serving node is deleted                           server deregistered
            NIC is added using GUI                                 nic added
            NIC is removed using GUI                               nic removed
            Physical storage is discovered and added using
            management console                                     physicalvolume added
            Physical storage is deleted using management console   physicalvolume deleted

You can be notified of cluster events by email or SNMP traps. To view the list of supported events, use the command ibrix_event -q.

NOTE: The StoreAll event system does not report events from the MSA array. Instead, configure event notification using the SMU on the array. For more information, see "Event notification for MSA array systems" (page 75).

Setting up email notification of cluster events

You can set up event notifications by event type or for one or more specific events. To set up automatic email notification of cluster events, associate the events with email recipients and then configure email settings to initiate the
NOTE: If the system discovered on HP SIM does not appear on the Entitlement tab, click Synchronize RSE.

The devices you entitled should be displayed as green in the ENT column on the Remote Support System List dialog box.

(The Remote Support System List dialog lists each entitled device with its action status, serial number, and IP address.)

If a device is red, verify that the customer-entered serial number and part number are correct and then rediscover the devices.

Testing the Insight Remote Support configuration

To determine whether the traps are working properly, send a generic test trap with the following command:

snmptrap -v1 -c public <CMS IP> .1.3.6.1.4.1.232 <Managed System IP> 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.0 s "IBRIX remote support testing"

For example, if the CMS IP address is 99.2.2.2 and the StoreAll node is 99.2.2.10, enter the following:

snmptrap -v1 -c public 99.2.2.2 .1.3.6.1.4.1.232 99.2.2.10 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.0 s "IBRIX remote support testing"

Updating the Phone Home configuration

The Phone Home configuration should be synchronized after you add or remove devices in the cluster. The operation enables Phone Home on newly added devices (servers, storage, and chassis) and removes detai
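The test-trap command above is long and easy to mistype. The following sketch is a hypothetical helper (not part of the StoreAll tools) that assembles the documented command for any CMS/node pair and prints it for review; pipe the output to `sh` to actually send the trap.

```shell
# Hypothetical helper: assemble the documented IBRIX test trap (dry run).
# The CMS and node IPs are the example values from the text above.
build_test_trap() {
  cms_ip="$1"
  node_ip="$2"
  printf 'snmptrap -v1 -c public %s .1.3.6.1.4.1.232 %s 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.0 s "IBRIX remote support testing"\n' "$cms_ip" "$node_ip"
}

# Print the command for review; pipe to `sh` to send it for real.
build_test_trap 99.2.2.2 99.2.2.10
```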
StoreAll version 5.6 to StoreAll version 6.1: see "Upgrading the StoreAll software to the 6.1 release" (page 154).

Common issue across all upgrades from StoreAll 5.x

If you are upgrading from a StoreAll 5.x release, ensure that the NFS exports option subtree_check is the default export option for every NFS export. The no_subtree_check option is not compatible with the StoreAll OS software.

To add the subtree_check option, perform the following steps:

1. Unexport the NFS exports:
   ibrix_exportfs -h <HOSTLIST> -p <CLIENT1:PATHNAME1>
2. Create NFS exports with the subtree_check option:
   ibrix_exportfs -f <FSNAME> -p <CLIENT1:PATHNAME1> -o "subtree_check"

NOTE: Multiple options can be specified by using the -o parameter, separating each option with a comma, for example: -o "rw,subtree_check".

Complete steps 1 and 2 for every NFS export. Verify that all NFS exports have the subtree_check option set:

ibrix_exportfs -l

Upgrading the StoreAll software to the 6.1 release

This section describes how to upgrade to the latest StoreAll software release. The Fusion Manager and all file serving nodes must be upgraded to the new release at the same time.

Upgrades to the StoreAll software 6.1 release are supported for systems currently running StoreAll software 5.6.x and 6.x.

If your system is currently running StoreAll software 5.4.x, first upgrade to 5.5.x, then upgrade to 5.6.x, and then upgrade to 6.1.
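Steps 1 and 2 must be repeated for every export, so a small loop helps. This is a dry-run sketch: the host list, file system name, and client:path pairs are placeholders for your cluster, and the loop only prints the two documented commands per export — remove the `echo`s to execute them.

```shell
# Dry-run sketch of steps 1-2 above, repeated per export.
# Arguments: HOSTLIST FSNAME export... (all placeholder example values).
reexport_with_subtree_check() {
  hostlist="$1"
  fsname="$2"
  shift 2
  for exp in "$@"; do
    # Step 1: unexport the existing NFS export.
    echo "ibrix_exportfs -h $hostlist -p $exp"
    # Step 2: re-create it with the subtree_check option.
    echo "ibrix_exportfs -f $fsname -p $exp -o \"rw,subtree_check\""
  done
}

reexport_with_subtree_check node1,node2 ifs1 client1:/ifs1/share1 client2:/ifs1/share2
# Afterwards, verify with: ibrix_exportfs -l
```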
Make sure you have completed all steps in the upgrade checklist (Table 1, page 10).

Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager:

ssh <server_name>

Repeat this command for each node in the cluster.

Verify that all file system node servers have separate file systems mounted on the following partitions by using the df command:

• /
• /local
• /stage
• /alt

Verify that all FSN servers have a minimum of 4 GB of free available storage on the /local partition by using the df command.

Verify that all FSN servers are not reporting any partition as 100% full (at least 5% free space) by using the df command.

Note any custom tuning parameters, such as file system mount options. When the upgrade is complete, you can reapply the parameters.

Stop all client I/O to the cluster or file systems. On the Linux client, use lsof </mountpoint> to show open files belonging to active processes.

Manual offline upgrades for StoreAll software 6.x to 6.3

1. On the active Fusion Manager, enter the following command to place the Fusion Manager into maintenance mode:
   <ibrixhome>/bin/ibrix_fm -m nofmfailover -P -A
2. On the active Fusion Manager node, disable automated failover on all file serving nodes:
   <ibrixhome>/bin/ibrix_server -m -U
3. Run the following command to verify that automated failover is off. In the output, the HA column should display off.
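The three df checks above can be combined into one pass. This is a minimal sketch assuming a POSIX df (-P output format); the mount points other than / are placeholders that only exist on an FSN server, and the helper treats "at least 5% free" as usage of 95% or less.

```shell
# Sketch of the pre-upgrade space checks (assumes POSIX `df -P` output).
# Returns 0 if the mount exists, has at least MIN_MB free, and is <= 95% used.
check_partition() {
  mnt="$1"
  min_mb="$2"
  df -Pk "$mnt" >/dev/null 2>&1 || return 1
  free_kb=$(df -Pk "$mnt" | awk 'NR==2 {print $4}')
  used_pct=$(df -Pk "$mnt" | awk 'NR==2 {gsub("%","",$5); print $5}')
  [ "$free_kb" -ge $((min_mb * 1024)) ] && [ "$used_pct" -le 95 ]
}

# /local needs 4 GB free; the other partitions just need headroom.
for p in / /local /stage /alt; do
  check_partition "$p" 1 && echo "$p OK" || echo "$p CHECK FAILED"
done
```

Run it on each FSN server before starting the upgrade.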
Volume — Displays volume information for each server:

• Status
• Type
• Name
• UUID
• Properties

Storage Controller — Displayed for a storage cluster:

• Status
• Type
• UUID
• Serial Number
• Model
• Firmware Version
• Message
• Diagnostic Message

Battery — Displayed for each storage controller:

• Status
• Type
• UUID
• Properties

IO Cache Module — Displayed for a storage controller:

• Status
• Type
• UUID
• Properties

Monitoring 9300/9320 hardware 91

Table 2 Obtaining detailed information about a server (continued)

Temperature Sensor — Displays information for each temperature sensor:

• Status
• Type
• Name
• UUID
• Locations
• Properties

Monitoring storage and storage components

Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components.

(The Vendor Storage panel lists each vendor storage by name and type, with event status for the last 24 hours.)

The Summary panel shows details for a selected vendor storage, as shown in the following image.

92 Monitoring cluster operations
3. Enter the information for the node being restored on the Network Configuration dialog box and click OK.

   (The Network Configuration dialog prompts for the bond's IP Address, Netmask, Default Gateway, and VLAN Tag ID.)

4. Confirm that the information displayed in the Configuration Summary dialog box is correct and click Commit.

Performing the recovery 145

   (Sample Configuration Summary: Hostname ib48-242; Timezone US/Eastern; Networks Configured: 1; Bond Name bond0 (eth4, eth5); IP Addr 15.226.48.242; Gateway 15.226.48.1.)

5. On the X9000 Installation — Network Setup Complete dialog box, select Join this IBRIX server to an existing cluster and click OK.

146 Recovering a file serving node

   The dialog offers the following choices:

   • Continue with cluster setup at this console (Local web UI session)
   • Continue with cluster setup remotely (Remote web UI session)
   • Join this IBRIX server to an existing cluster
   • Exit Now (You just wanted to set up networking)

   The network setup is now complete on this server. The next step is to create a new IBRIX cluster. However, if you are already in the middle of a cluster setup wizard on a different machine, this server is now ready to be added to it by using the configuration information below in the Add dialog.

   The wizard scans the network for existing clusters. On the Join Cluster dialog box, select the management
Be sure to upgrade the cluster nodes before upgrading Linux StoreAll clients. Complete the following steps on each client:

1. Download the latest HP StoreAll client 6.1 package.
2. Expand the tar file.
3. Run the upgrade script:
   ./ibrixupgrade -f
   The upgrade software automatically stops the necessary services and restarts them when the upgrade is complete.

158 Cascading Upgrades

4. Execute the following command to verify the client is running StoreAll software:
   /etc/init.d/ibrix_client status
   IBRIX Filesystem Drivers loaded
   IBRIX IAD Server (pid 3208) running...

The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support.

Installing a minor kernel update on Linux clients

The StoreAll client software is upgraded automatically when you install a compatible Linux minor kernel update.

If you are planning to install a minor kernel update, first run the following command to verify that the update is compatible with the StoreAll client software:

/usr/local/ibrix/bin/verify_client_update <kernel_update_version>

The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp:

# /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp
Kernel update 2.6.9-89.35.1.ELsmp is compatible

If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The StoreAll client software is then automatically updated with the new kernel.
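The kernel-update workflow above has a fixed order (verify, install, reboot, re-check status). This dry-run sketch just prints the documented commands in that order for a given kernel version, so it can be reviewed or pasted step by step; the kernel version is the example from the text.

```shell
# Dry-run plan for a minor kernel update on a StoreAll Linux client.
# Prints the documented commands in order; nothing is executed.
plan_kernel_update() {
  kver="$1"
  echo "/usr/local/ibrix/bin/verify_client_update $kver"
  echo "# if compatible: install the vendor kernel RPM and reboot"
  echo "/etc/init.d/ibrix_client status"
}

plan_kernel_update 2.6.9-89.35.1.ELsmp
```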
ibrix_host_tune -l [-h HOSTLIST] [-n OPTIONS]

• To set the communications protocol on nodes and host groups, use the following command. To set the protocol on all StoreAll clients, include the -g clients option:
  ibrix_host_tune -p {UDP|TCP} [-h HOSTLIST] [-g GROUPLIST]
• To set server threads on file serving nodes, host groups, and StoreAll clients:
  ibrix_host_tune -t THREADCOUNT [-h HOSTLIST] [-g GROUPLIST]
• To set admin threads on file serving nodes, host groups, and StoreAll clients, use this command. To set admin threads on all StoreAll clients, include the -g clients option:
  ibrix_host_tune -a THREADCOUNT [-h HOSTLIST] [-g GROUPLIST]

Tuning StoreAll clients locally

Linux clients: Use the ibrix_lwhost command to tune host parameters. For example, to set the communications protocol:

ibrix_lwhost --protocol -p {tcp|udp}

To list host tuning parameters that have been changed from their defaults:

ibrix_lwhost --list

See the ibrix_lwhost command description in the HP StoreAll Storage CLI Reference Guide for other available options.

Windows clients: Click the Tune Host tab on the Windows StoreAll client GUI. Tunable parameters include the NIC to prefer (the default is the cluster interface), the communications protocol (UDP or TCP), and the number of server threads to use. See the online help for the client if necessary.

Managing segments

When a file system is created, the servers accessing the file system are assigned ownership of
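Because ibrix_host_tune -p only accepts UDP or TCP, it is worth validating the value before running the command cluster-wide. This is a hypothetical guard wrapper (not a StoreAll tool) that checks the protocol and prints the documented command as a dry run.

```shell
# Hypothetical guard around the documented protocol-tuning command (dry run).
# Rejects anything other than the two documented protocol values.
set_protocol() {
  proto="$1"
  hosts="$2"
  case "$proto" in
    UDP|TCP) echo "ibrix_host_tune -p $proto -h $hosts" ;;
    *) echo "protocol must be UDP or TCP" >&2; return 1 ;;
  esac
}

set_protocol TCP node1,node2
```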
The following versions of the software are supported:

• HP SIM 6.3 and IRSA 5.6
• HP SIM 7.1 and IRSA 5.7

IMPORTANT: Keep in mind the following:

• For each file serving node, add the physical user network interfaces (by entering the ibrix_nic command or selecting the Server > NICs tab in the GUI) so the interfaces can communicate with HP SIM.
• Ensure that all user network interfaces on each file serving node can communicate with the CMS.

IMPORTANT: Insight Remote Support Standard (IRSS) is not supported with StoreAll software 6.1 and later.

For product descriptions and information about downloading the software, see the HP Insight Remote Support Software web page:

http://www.hp.com/go/insightremotesupport

For information about HP SIM:

http://www.hp.com/products/systeminsightmanager

For IRSA documentation:

http://www.hp.com/go/insightremoteadvanced/docs

36 Getting started

IMPORTANT: You must compile and manually register the StoreAll MIB file by using HP Systems Insight Manager:

1. Download ibrixMib.txt from /usr/local/ibrix/doc.
2. Rename the file to ibrixMib.mib.
3. In HP Systems Insight Manager, complete the following steps:
   a. Unregister the existing MIB by entering the following command:
      <BASE>\mibs>mxmib -d ibrixMib.mib
   b. Copy the ibrixMib.mib file to the <BASE>\mibs directory, and then enter the following commands:
      <BASE>\mibs>mcompile ibrixMib.mib
      <BASE>\mi
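Steps 1 and 2 above (fetch and rename the MIB) can be scripted on the node before moving to the HP SIM host. This sketch works in a temporary directory so it is safe to run anywhere; if /usr/local/ibrix/doc/ibrixMib.txt is absent (that is, you are not on a StoreAll node), it substitutes an empty placeholder file so the rename step can still be demonstrated.

```shell
# Steps 1-2 of the MIB registration, staged in a temp directory.
workdir=$(mktemp -d)

# Step 1: copy the MIB from its documented location; fall back to a
# placeholder file when not running on a StoreAll node.
cp /usr/local/ibrix/doc/ibrixMib.txt "$workdir"/ 2>/dev/null \
  || touch "$workdir/ibrixMib.txt"

# Step 2: rename ibrixMib.txt to ibrixMib.mib.
mv "$workdir/ibrixMib.txt" "$workdir/ibrixMib.mib"
ls "$workdir"

# Step 3 runs on the HP SIM host:
#   <BASE>\mibs>mxmib -d ibrixMib.mib
#   <BASE>\mibs>mcompile ibrixMib.mib
```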
To migrate ownership of segments from the CLI, use the following commands.

Migrate ownership of specific segments:

ibrix_fs -m -f FSNAME -s LVLIST -h HOSTNAME [-M] [-F] [-N]

To force the migration, include -M. To skip the source host update during the migration, include -F. To skip server health checks, include -N.

The following command migrates ownership of segments ilv2 and ilv3 in file system ifs1 to server2:

ibrix_fs -m -f ifs1 -s ilv2,ilv3 -h server2

Migrate ownership of all segments owned by specific servers:

ibrix_fs -m -f FSNAME -H HOSTNAME1,HOSTNAME2 [-M] [-F] [-N]

Managing segments 117

For example, to migrate ownership of all segments in file system ifs1 from server1 to server2:

ibrix_fs -m -f ifs1 -H server1,server2

Evacuating segments and removing storage from the cluster

Before removing storage used for a StoreAll software file system, you will need to evacuate the segments (or logical volumes) storing file system data. This procedure moves the data to other segments in the file system and is transparent to users or applications accessing the file system. When evacuating a segment, you should be aware of the following restrictions:

• While the evacuation task is running, the system prevents other tasks from running on the file system. Similarly, if another task is running on the file system, the evacuation task cannot be scheduled until the first task is complete.
• The file system must be quiescent (no active I/O) while a
Viewing network interface information ................................................ 126
Licensing ............................................................................ 128
Viewing license terms ................................................................ 128
Retrieving a license key ............................................................. 128
Using AutoPass to retrieve and install permanent license keys ........................ 128
Upgrading firmware ................................................................... 129
Components for upgrading firmware .................................................... 129
Steps for upgrading the firmware ..................................................... 130
Finding additional information on FMT ................................................ 133
Downloading MSA2000 G2/G3 firmware for 9320 systems .................................. 133
Troubleshooting ...................................................................... 134
Collecting information for HP Support with the IbrixCollect .......................... 134
Collecting logs ...................................................................... 134
Downloading the archive file ......................................................... 135
Deleting the archive file ............................................................ 135
Configuring Ibrix Collect ............................................................ 136
Obtaining custom logging from ibrix_collect add-on scripts ........................... 137
Creating an add-on script ............................................................ 137
Running an add-on script ............................................................. 138
Viewing the output from an add-on script ............................................. 138
Viewing data collection information .................................................. 139
Adding/deleting commands or logs in the XML file ..................................... 140
(page 72).

Viewing events

The GUI dashboard specifies the number of events that have occurred in the last 24 hours. Click Events in the GUI Navigator to view a report of the events. You can also view events that have been reported for specific file systems or servers.

On the CLI, use the ibrix_event command to view information about cluster events.

To view events by alert type, use the following command:

ibrix_event -q [-e ALERT|WARN|INFO]

The ibrix_event -l command displays events in a short format; event descriptions are truncated to fit on one line. The -n option specifies the number of events to display. The default is 100.

94 Monitoring cluster operations

# ibrix_event -l -n 3
EVENT ID  TIMESTAMP       LEVEL  TEXT
1983      Feb 14 15:08:15 INFO   File system ifs1 created
1982      Feb 14 15:08:15 INFO   Nic eth0[99.224.24.03] on host ix24-03.ad.hp.com up
1981      Feb 14 15:08:15 INFO   Ibrix kernel file system is up on ix24-03.ad.hp.com

The ibrix_event -i command displays events in long format, including the complete event description.

# ibrix_event -i -n 2
Event:
  EVENT ID: 1981
  TIMESTAMP: Feb 14 15:08:15
  LEVEL: INFO
  TEXT: Ibrix kernel file system is up on ix24-03.ad.hp.com
  FILESYSTEM:
  HOST: ix24-03.ad.hp.com
  USER NAME:
  OPERATION:
  SEGMENT NUMBER:
  PV NUMBER:
  NIC:
  HBA:
  RELATED EVENT: 0
Event:
  EVENT ID: 1980
  TIMESTAMP: Feb 14 15:08:14
  LEVEL: ALERT
  TEXT: category: CHASSIS, name: 9730_ch1, overallStatus: DEGRADED, component: OA
(The dashboard shows segment server status, capacity gauges, and a Recent Events list, for example: "Jun 16 09:59:37 User ibrix logged in from host 16.213.41.14" and "Jun 15 18:21:35 Running on Instant On license".)

System Status

The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events:

• Alerts: Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
• Warnings: Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition. Examples are a very high server CPU utilization level or a quota limit close to the maximum.
• Information: Normal events that change the cluster. Examples are mounting a file system or creating a segment.

Cluster Overview

The Cluster Overview provides the following information:

Capacity: The amount
7. From the management console, verify that the new version of StoreAll software FS/IAS has been installed on the file serving node:
   <ibrixhome>/bin/ibrix_version -l -S
8. If the upgrade was successful, fail back the file serving node:
   <ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME
9. Repeat steps 1 through 8 for each remaining file serving node in the cluster.

After all file serving nodes have been upgraded and failed back, complete the upgrade.

Upgrading the StoreAll software to the 5.5 release 175

Completing the upgrade

1. From the node hosting the active management console, turn automated failover back on:
   <ibrixhome>/bin/ibrix_server -m
2. Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, the HA column should display on.
3. Verify that all version indicators match for file serving nodes. Run the following command from the active management console:
   <ibrixhome>/bin/ibrix_version -l
   If there is a version mismatch, run the ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
4. Propagate a new segment map for the cluster:
   <ibrixhome>/bin/ibrix_dbck -I -f FSNAME
5. Verify the health of the cluster:
   <ibrixhome>/bin/ibrix_health -l
   The output should specify Passed.
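The version-match check in step 3 can be automated with a small filter. This is a sketch under an assumed output shape: it treats each line of ibrix_version -l output as "hostname version" and reports whether every node shows the same version — adjust the awk column to match the actual output of your release.

```shell
# Sketch: succeed only if every "host version" line carries the same version.
# Assumes the version is the second whitespace-separated field per line.
versions_match() {
  [ "$(awk '{print $2}' | sort -u | wc -l)" -eq 1 ]
}

# Example with fabricated output; on a cluster, pipe ibrix_version output in.
printf 'node1 6.1.196\nnode2 6.1.196\n' | versions_match && echo "versions match"
```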
In this mode, the Fusion Manager controls console operations. All cluster administration and configuration commands must be run from the active Fusion Manager.

• passive: In this mode, the Fusion Manager monitors the health of the active Fusion Manager. If the active Fusion Manager fails, a passive Fusion Manager is selected to become the active console.
• nofmfailover: In this mode, the Fusion Manager does not participate in console operations. Use this mode for operations such as manual failover of the active Fusion Manager, StoreAll software upgrades, and server blade replacements.

Changing the mode

Use the following command to move a Fusion Manager to passive or nofmfailover mode:

ibrix_fm -m {passive|nofmfailover} [-P] [-A | -h <FMLIST>]

If the Fusion Manager was previously the active console, StoreAll software will select a new active console. A Fusion Manager currently in active mode can be moved to either passive or nofmfailover mode. A Fusion Manager in nofmfailover mode can be moved only to passive mode.

With the exception of the local node running the active Fusion Manager, the -A option moves all instances of the Fusion Manager to the specified mode. The -h option moves the Fusion Manager instances in <FMLIST> to the specified mode.

Viewing information about Fusion Managers

To view mode information, use the following command:

ibrix_fm -i

NOTE: If the Fusion Manager was not installed in an agile configuration
and specify an authorization password. For authorization and no encryption, enter:

ibrix_snmpuser -c -n user3 -g group2 -k auth_passwd -s authNoPriv

Deleting elements of the SNMP configuration

All SNMP commands use the same syntax for delete operations, using -d to indicate the object to delete. The following command deletes a list of hosts that were trapsinks:

ibrix_snmptrap -d -h lab15-12.domain.com,lab15-13.domain.com,lab15-14.domain.com

There are two restrictions on SNMP object deletions:

• A view cannot be deleted if it is referenced by a group.
• A group cannot be deleted if it is referenced by a user.

Listing SNMP configuration information

All SNMP commands employ the same syntax for list operations, using the -l flag. For example:

ibrix_snmpgroup -l

This command lists the defined group settings for all SNMP groups. Specifying an optional group name lists the defined settings for that group only.

Event notification for MSA array systems

The StoreAll event system does not report events for MSA array systems. Instead, configure event notification for the MSA using the SMU Configuration Wizard. In the SMU Configuration View panel, right-click the system and select either Configuration > Configuration Wizard or Wizards > Configuration Wizard. Configure up to four email addresses and three SNMP trap hosts to receive notifications of system events.

In the Email Configuration section, set the options:

• Notification Level
automated failover is configured on the servers, disable it. Run the following command on the Fusion Manager:

ibrix_server -m -U

2. Identify the bond0:1 VIF:
   ibrix_nic -a -n bond0:1 -h node1,node2,node3,node4
3. Assign an IP address to the bond0:1 VIFs on each node. In the command, -I specifies the IP address, -M specifies the netmask, and -B specifies the broadcast address:
   ibrix_nic -c -n bond0:1 -h node1 -I 16.123.200.201 -M 255.255.255.0 -B 16.123.200.255
   ibrix_nic -c -n bond0:1 -h node2 -I 16.123.200.202 -M 255.255.255.0 -B 16.123.200.255
   ibrix_nic -c -n bond0:1 -h node3 -I 16.123.200.203 -M 255.255.255.0 -B 16.123.200.255
   ibrix_nic -c -n bond0:1 -h node4 -I 16.123.200.204 -M 255.255.255.0 -B 16.123.200.255

Configuring backup servers

The servers in the cluster are configured in backup pairs. If this step was not done when your cluster was installed, assign backup servers for the bond0:1 interface. In the following example, node1 is the backup for node2, node2 is the backup for node1, node3 is the backup for node4, and node4 is the backup for node3.

1. Add the VIF:
   ibrix_nic -a -n bond0:2 -h node1,node2,node3,node4
2. Set up a backup server for each VIF:
   ibrix_nic -b -H node1/bond0:1,node2/bond0:2
   ibrix_nic -b -H node2/bond0:1,node1/bond0:2
   ibrix_nic -b -H node3/bond0:1,node4/bond0:2
   ibrix_nic -b -H node4/bond0:1,node3/bond0:2

Configuring NIC failover

NIC monitoring should
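The four per-node ibrix_nic -c lines in step 3 differ only in hostname and final IP octet, so they can be generated rather than typed. This dry-run sketch reproduces the example values from the text (16.123.200.201-204 on bond0:1); it only prints the commands — remove the `echo` to execute them on the Fusion Manager.

```shell
# Dry-run generator for the per-node VIF IP assignments in step 3.
# IPs, netmask, and broadcast address are the example values from the text.
gen_vif_ip_cmds() {
  i=0
  for node in "$@"; do
    i=$((i + 1))
    echo "ibrix_nic -c -n bond0:1 -h $node -I 16.123.200.20$i -M 255.255.255.0 -B 16.123.200.255"
  done
}

gen_vif_ip_cmds node1 node2 node3 node4
```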
• Programmable power source
• Standby server or standby segments
• Cluster and user network interface monitors
• Standby network interface for each user network interface
• HBA port monitoring
• Status of automated failover (on or off)

66 Configuring failover

For each High Availability feature, the summary report returns status for each tested file serving node and optionally for their standbys:

• Passed: The feature has been configured.
• Warning: The feature has not been configured, but the significance of the finding is not clear. For example, the absence of discovered HBAs can indicate either that the HBA monitoring feature was not configured or that HBAs are not physically present on the tested servers.
• Failed: The feature has not been configured.

The detailed report includes an overall result status for all tested file serving nodes and describes details about the checks performed on each High Availability feature. By default, the report includes details only about checks that received a Failed or a Warning result. You can expand the report to include details about checks that received a Passed result.

Viewing a summary report

Use the ibrix_haconfig -l command to see a summary of all file serving nodes. To check specific file serving nodes, include the -h HOSTLIST argument. To check standbys, include the -b argument. To view results only for file serving nodes that failed a check, include the -f argument.

ibrix_haconfig
hot-adding or replacing a controller while under heavy I/O could cause a momentary pause, performance decrease, or loss of access to the device while the new controller is starting up. When the controller completes the startup process, full functionality is restored.

CAUTION: Before replacing a hot-pluggable component, ensure that steps have been taken to prevent loss of data.

Device warnings and precautions 195

E Regulatory compliance notices

Regulatory compliance identification numbers

For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number. The regulatory model number is not the marketing name or model number of the product.

Product specific information:

HP
Regulatory model number:
FCC and CISPR classification:

These products contain laser components. See Class 1 laser statement in the Laser compliance notices section.

Federal Communications Commission notice

Part 15 of the Federal Communications Commission (FCC) Rules and Regulations has established Radio Frequency (RF) emission limits to provide an interference-free radio frequency spectrum. Many electronic devices, including computers, generate RF energy incidental to their
ibrixFS file system number segments matches on Iad and Fusion Manager   PASSED
ibrixFS file system mounted state matches on Iad and Fusion Manager    PASSED
Superblock owner for segment 4 of filesystem ibrixFS on bv18-04 matches
on Iad and Fusion Manager                                              PASSED
Superblock owner for segment 3 of filesystem ibrixFS on bv18-04 matches
on Iad and Fusion Manager                                              PASSED
Superblock owner for segment 2 of filesystem ibrixFS on bv18-04 matches
on Iad and Fusion Manager                                              PASSED
Superblock owner for segment 1 of filesystem ibrixFS on bv18-04 matches
on Iad and Fusion Manager                                              PASSED

Viewing logs

Logs are provided for the Fusion Manager, file serving nodes, and StoreAll clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP.

Viewing operating statistics for file serving nodes

Periodically, the file serving nodes report the following statistics to the Fusion Manager:

• Summary: General operational statistics including CPU usage, disk throughput, network throughput, and operational state. For information about the operational states, see "Monitoring the status of file serving nodes" (page 93).
• IO: Aggregate statistics about reads and writes.
• Network: Aggregate statistics about network inputs and outputs.
• Memory: Statistics about available total, free, and swap memory.
• CPU: Statistics about processor and CPU activity.
• NFS: Statistics about NFS
ifs3 on host group C, and ifs4 on host group D, in any order. Then, set Tuning 1 on the clients host group and Tuning 2 on host group B. The end result is that all clients in host group B will mount ifs1 and implement Tuning 2. The clients in host group A will mount ifs2 and implement Tuning 1. The clients in host groups C and D, respectively, will mount ifs3 and ifs4 and implement Tuning 1. The following diagram shows an example of these settings in a host group tree.

How host groups work 81

(Diagram: the default clients host group at the root mounts ifs1; host groups A and B sit beneath it, with A mounting ifs2 and B assigned Tuning 2; host groups C and D mount ifs3 and ifs4, respectively.)

To create one level of host groups beneath the root, simply create the new host groups. You do not need to declare that the root node is the parent. To create lower levels of host groups, declare a parent element for host groups. Do not use a host name as a group name.

To create a host group tree using the CLI:

1. Create the first level of the tree:
   ibrix_hostgroup -c -g GROUPNAME
2. Create all other levels by specifying a parent for the group:
   ibrix_hostgroup -c -g GROUPNAME -p PARENT

Adding a StoreAll client to a host group

You can add a StoreAll client to a host group or move a client to a different host group. All clients belong to the default clients host group.

To add or move a host to a host group, use the ibrix_hostgroup command as follows:

ibrix_hostgroup -m -g GROUP -h MEMBER

For example, to add the specified host to the finance group:
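The two-command tree-building procedure above can be sketched as a dry run. The group names mirror the example, and the assumption that C and D sit beneath A is illustrative only; the helper prints the first-level form when no parent is given and the -p form otherwise — remove the `echo`s to execute for real.

```shell
# Dry-run sketch of building a host group tree with the two commands above.
# Usage: create_group GROUPNAME [PARENT]
create_group() {
  if [ -n "$2" ]; then
    echo "ibrix_hostgroup -c -g $1 -p $2"   # lower level: declare a parent
  else
    echo "ibrix_hostgroup -c -g $1"         # first level beneath the root
  fi
}

create_group A
create_group B
create_group C A   # assumption: C and D are children of A, as in the diagram
create_group D A
```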
node. To open the GUI, select Historical Reports on the GUI dashboard.

NOTE: By default, installing the Statistics tool does not start the Statistics tool processes. The GUI displays a message if the processes are not running on the active Fusion Manager. (No message appears if the processes are already running on the active Fusion Manager, or if the processes are not running on any of the passive management consoles.) See "Controlling Statistics tool processes" (page 105) for information about starting the processes.

The statistics home page provides three views, or formats, for listing the reports. Following is the Simple View, which sorts the reports according to type (hourly, daily, weekly, detail).

Upgrading the Statistics tool from StoreAll software 6.0 101

(The Simple View lists hourly reports such as "2012-10-22 10:00-11:00", daily reports such as "2012-10-21 to 2012-10-22", and weekly reports such as "2012-10-14 to 2012-10-21".)
(page 137).
2. Run the add-on script. See "Running an add-on script" (page 138).
3. View the output from the add-on script. See "Viewing the output from an add-on script" (page 138).

Creating an add-on script

To create an add-on script:

1. Add-on script names must follow a defined format. The file name of the script must strictly follow this format:
   <releaseNumber>_<add-on_script_name>.sh
   When you provide the release number in the file name, remove the period between the first and second digit of the version number.
   For example, assume you obtained the version number from the ibrix_version -l command. The output of the command displays the version number:
   [root@host2 ~]# ibrix_version -l
   Fusion Manager version: 6.3.33
   host2  6.3.33 (internal rev 132818 in SVN)  6.3.33  6.3.33  GNU/Linux 2.6.18-194.el5 x86_64
   In this example, the version displayed is 6.3.33. Use the first two digits of the version number (6.3, for example), without the dot, as a prefix to the add-on script file name, so that an add-on script named AddOnTest.sh would be 63_AddOnTest.sh.

IMPORTANT: The version provided in the name must match the version of StoreAll on which you plan to run the script. For example, any add-on script that you want to run on StoreAll version 6.3 must have 63_ in its file name; otherwise, the script will not run. For example, if you prefix the name with another version, such as 62_, and you attempt
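The naming rule above (first two digits of the version, period removed, used as a prefix) can be sketched as a small illustrative helper. This is not a product utility; the function name is hypothetical:

```python
# Illustrative sketch of the add-on script naming rule: take the first
# two digits of the StoreAll version (e.g. "6.3.33" -> "63") and use
# them, with an underscore, as the script file-name prefix.

def addon_script_name(version: str, script_name: str) -> str:
    major, minor = version.split(".")[:2]
    # Drop the period between the first and second digit of the version.
    return f"{major}{minor}_{script_name}"

print(addon_script_name("6.3.33", "AddOnTest.sh"))  # 63_AddOnTest.sh
```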
The Time View lists the reports in chronological order, and the Table View lists the reports by cluster or server. Click a report to view it.

[Screenshot: a Cluster Activity Report covering a one-day period, relating disk throughput, wait time, and network activity using normalized values. The report index offers single summary, stacked summary, and split read/write summary graphs for the cluster and servers.]

Generating reports

To generate a new report, click Request New Report on the StoreAll Management Console Historical Reports GUI.

102 Using the Statistics tool

[Screenshot: the Report Generation dialog box, with a Whole Cluster report selected and Report Granularity set to hourly.]

Specify the start and end times for the report to be generated. The times are specified in the format YYYY-MM-DD HH:MM, where the letters stand for year, month, day, hour, and minute. Hours and minutes may be left off.

After clicking Submit, it may take a few moments to assemble the new reports. If you are trying to generate up to th
Active:
<ibrixhome>/bin/ibrix_fm -i
Check /usr/local/ibrix/log/fusionserver.log for errors.

If the upgrade was successful, fail back the file serving node. Run the following command on the node with the active agile management console:
<ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME

From the node on which you failed back the active management console in step 8, change the status of the management console from maintenance to passive:
<ibrixhome>/bin/ibrix_fm -m passive

If the node with the passive management console is also a file serving node, manually fail over the node from the active management console:
<ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME

Wait a few minutes for the node to reboot, and then run the following command to verify that the failover was successful. The output should report Up, FailedOver:
<ibrixhome>/bin/ibrix_server -l

On the node with the passive agile management console, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.

On the node hosting the passive agile management console, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you e
Fusion Manager cannot be moved from nofmfailover mode to active mode.

j. Repeat steps a through h for the backup server component (in this example, replace server1 with server2 in the commands).
k. Repeat steps a through j for each node that requires a firmware upgrade.

132 Upgrading firmware

6. If you are upgrading to 6.3, you must complete the steps provided in the "After the upgrade" section for your type of upgrade, as shown in the following table:

   Type of upgrade            | Complete the steps in this section
   Online upgrades            | "After the upgrade" (page 13)
   Automated offline upgrades | "After the upgrade" (page 15)
   Manual offline upgrades    | "After the upgrade" (page 17)

Finding additional information on FMT

You can find additional information on FMT as follows:
- Online help for FMT. To access the online help for FMT, enter the hpsp_fmt command on the file system node console.
- HP HPSP_FMT User Guide. To access the HP HPSP_FMT User Guide, go to the HP StoreAll Storage Manuals page: http://www.hp.com/support/StoreAllManuals

Downloading MSA2000 G2/G3 firmware for 9320 systems

To obtain the firmware, complete the following steps:
1. Go to the following HP web site: http://www.hp.com/go/support
2. Select Download drivers and software, enter your model number in the search box, and begin your search.
3. Select your array type.
4. Select your operating system.
5. Find the firmware vers
IP address on the New Discovery dialog box.

40 Getting started

[Screenshot: the New Discovery dialog box, with options to discover a group of systems or a single system, the discovery name (Fusion Manager), an automatic discovery schedule, and the ping inclusion ranges (system hosts names and/or hosts files).]

Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the Fusion Manager IP might be discovered as "Unknown".

[Screenshot: the SNMP tab of the Credentials dialog box, showing the read community string field.]

Configuring HP Insight Remote Support on StoreAll systems 41

Devices are discovered as described in the following table:

Device             | Discovered as
Fusion Manager IP  | System Type: Fusion Manager; System Subtype: 9000; Product Model: HP 9000 Solution
File serving nodes | System Type: Storage Device; System Subtype: 9000 Storage, HP ProLiant; Product Model: HP 9320 NetStor FSN ProLiant DL380 G7, HP 9320 NetStor FSN ProLiant DL380 G6, HP 9300 NetStor FSN ProLiant DL380 G7, HP 9300 NetStor FSN ProLiant DL380 G6

The following example shows discovered devices on HP SIM 7.1.

[Screenshot: the HP SIM summary view of the discovered devices, showing their health status.]
Identifying standby-paired HBA ports to the configuration database allows the Fusion Manager to apply the following logic when they fail:

- If one port in a pair fails, do nothing. Traffic will automatically switch to the surviving port, as configured by the HBA vendor or the software.
- If both ports in a pair fail, fail over the server's segments to the standby server.

Use the following command to identify two HBA ports as a standby pair:
ibrix_hba -b -P WWPN1:WWPN2 -h HOSTNAME

Enter each WWPN as decimal-delimited pairs of hexadecimal digits. The following command identifies port 20.00.12.34.56.78.9a.bc as the standby for port 42.00.12.34.56.78.9a.bc for the HBA on file serving node s1.hp.com:

ibrix_hba -b -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com

Turning HBA monitoring on or off

If your cluster uses single-port HBAs, turn on monitoring for all of the ports to set up automated failover in the event of HBA failure. Use the following command:
ibrix_hba -m -h HOSTNAME -p PORT

Configuring High Availability on the cluster 65

For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com:
ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc

To turn off HBA monitoring for an HBA port, include the -U option:
ibrix_hba -m -U -h HOSTNAME -p PORT

Deleting standby port pairings

Deleting port pairing information from the configuration database does not remove the standby pairing of the ports.
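The two failure rules above amount to a simple decision: act only when both ports of a standby pair are down. A minimal illustrative sketch of that decision (not product code; the function name is hypothetical):

```python
# Decide what should happen for a standby-paired set of HBA ports,
# per the rules above: one failed port needs no action (traffic moves
# to the surviving port), while two failed ports mean the server's
# segments fail over to the standby server.

def pair_action(port1_up: bool, port2_up: bool) -> str:
    if port1_up and port2_up:
        return "healthy"
    if port1_up or port2_up:
        return "no action: traffic switches to the surviving port"
    return "fail over segments to the standby server"

print(pair_action(True, False))
print(pair_action(False, False))
```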
JRE 1.5 or later installed and with a desktop manager running (for example, a Linux-based system running X Windows). The ssh client must also be installed.

128 Licensing

1. On the Linux-based system, run the following command to connect to the Fusion Manager:
   ssh -X root@<management_console_IP>
2. When prompted, enter the password for the Fusion Manager.
3. Launch the AutoPass GUI:
   /usr/local/ibrix/bin/fusion_license_manager
4. In the AutoPass GUI, go to Tools, select Configure Proxy, and configure your proxy settings.
5. Click Retrieve/Install License Key, and then retrieve and install your license key.

If the Fusion Manager machine does not have an Internet connection, retrieve the license from a machine that does have a connection, deliver the file with the license to the Fusion Manager machine, and then use the AutoPass GUI to import the license.

13 Upgrading firmware

The Firmware Management Tool (FMT) is a utility that scans the StoreAll system for outdated firmware and produces a comprehensive report containing the following information:

- Device found
- Active firmware found on the discovered device
- Qualified firmware for the discovered device
- Proposed action. Users are told whether an upgrade is recommended.
- Severity. How urgently an upgrade is required.
- Reboot required on flash
- Device information
- Parent device ID

Components for firmware upgrades

The HP StoreAll system includes s
Manual offline upgrades for StoreAll software 6.x to 6.3 ... 15
  Preparing for the upgrade ... 15
  Performing the upgrade manually ... 16
  After the upgrade ... 17
Upgrading Linux StoreAll clients ... 18
  Installing a minor kernel update on Linux clients ... 18
Upgrading Windows StoreAll clients ... 19
Upgrading pre-6.3 Express Query enabled file systems ... 19
  Required steps before the StoreAll upgrade ... 19
  Required steps after the StoreAll upgrade ... 20
Troubleshooting upgrade issues ... 21
  Automatic upgrade fails ... 21
  ibrixupgrade hangs ... 22
  Offline upgrade fails because iLO firmware is out of date ... 22
  Node is not registered with the cluster network ... 22
  File system unmount issues ... 23
  File system in MIF state after StoreAll software 6.3 upgrade ... 23
2 Product description ... 25
  9300 Storage Gateway ... 25
  9320 Storage system ... 25
  HP StoreAll software features ... 25
  High availability and redundancy ... 26
3 Getting started ... 27
  Installation steps ... 27
  Additional configuration steps ... 27
  Using the StoreAll Management Console ... 29
  Customizing the GUI ... 31
  Adding user accounts for Management Console access ... 32
  Using the CLI ... 32
  Starting the
NDMP process management

All NDMP actions are usually controlled from the DMA. However, if the DMA cannot resolve a problem or you suspect that the DMA may have incorrect information about the NDMP environment, take the following actions from the GUI or CLI:

- Cancel one or more NDMP sessions on a file serving node. Canceling a session stops all spawned session processes and frees their resources if necessary.
- Reset the NDMP server on one or more file serving nodes. This step stops all spawned session processes, stops the ndmpd and session monitor daemons, frees all resources held by NDMP, and restarts the daemons.

Viewing or canceling NDMP sessions

To view information about active NDMP sessions, select Cluster Configuration from the Navigator, and then select NDMP Backup > Active Sessions. For each session, the Active NDMP Sessions panel lists the host used for the session, the identifier generated by the backup application, the status of the session (backing up data, restoring data, or idle), the start time, and the IP address used by the DMA.

To cancel a session, select that session and click Cancel Session. Canceling a session kills all spawned session processes and frees their resources if necessary.

[Screenshot: the Active NDMP Sessions panel. Example rows: hostname imvm2, identifier 15543, type IDLE, start time Wed May 26 22:46:39 2010, DMA IP address 192.168.10.2; hostname imvm2, identifier 16769, type DATA_BACKUP, start time Thu May 27 01:33:19 2010.]
- Flexible configuration. Segments can be migrated dynamically for rebalancing and data tiering.

High availability and redundancy

The segmented architecture is the basis for fault resilience; loss of access to one or more segments does not render the entire file system inaccessible. Individual segments can be taken offline temporarily for maintenance operations and then returned to the file system.

26 Product description

To ensure continuous data access, StoreAll software provides manual and automated failover protection at various points:

- Server. A failed node is powered down and a designated standby server assumes all of its segment management duties.
- Segment. Ownership of each segment on a failed node is transferred to a designated standby server.
- Network interface. The IP address of a failed network interface is transferred to a standby network interface until the original network interface is operational again.
- Storage connection. For servers with HBA-protected Fibre Channel access, failure of the HBA triggers failover of the node to a designated standby server.

3 Getting started

IMPORTANT: Follow these guidelines when using your system:

- Do not modify any parameters of the operating system or kernel, or update any part of the 9320 Storage unless instructed to do so by HP; otherwise, the system could fail to operate properly.
- File serving nodes are tuned for file serving operations. With the exception of suppo
REST API (Object API) shares by entering the following command:
ibrix_httpshare -d -f <FSNAME>
In this instance, <FSNAME> is the file system.

- Delete a specific REST API (Object API) share by entering the following command:
  ibrix_httpshare -d <SHARENAME> -c <PROFILENAME> -t <VHOSTNAME>
  In this instance:
  - <SHARENAME> is the share name.
  - <PROFILENAME> is the profile name.
  - <VHOSTNAME> is the virtual host name.

Disable Express Query by entering the following command:
ibrix_fs -T -D -f <FSNAME>

Shut down the Archiving daemons for Express Query by entering the following command:
ibrix_archiving -S -F

Delete the internal database files for this file system by entering the following command:
rm -rf <FS_MOUNTPOINT>/.archiving/database
In this instance, <FS_MOUNTPOINT> is the file system mount point.

Required steps after the StoreAll upgrade

These steps are required after the StoreAll upgrade:

1. Restart the Archiving daemons for Express Query.
2. Re-enable Express Query on the file systems you disabled it on before by entering the following command:
   ibrix_fs -T -E -f <FSNAME>
   In this instance, <FSNAME> is the file system.
   Express Query will begin resynchronizing (repopulating) a new database for this file system.
3. Re-enable auditing if you had it running before (the default) by entering the following command:
   ibrix_fs -A -f <FSNAME> -oa audit_mode=on
   In this instan
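The two delete variants above differ only in their arguments. A small illustrative sketch that assembles the command lines (a hypothetical helper, not a product tool; the example share, profile, and vhost names are invented):

```python
# Illustrative helpers that assemble the two ibrix_httpshare delete
# invocations described above. Argument values are hypothetical.

def delete_all_shares_cmd(fsname: str) -> str:
    # Delete every REST API (Object API) share on a file system.
    return f"ibrix_httpshare -d -f {fsname}"

def delete_share_cmd(share: str, profile: str, vhost: str) -> str:
    # Delete one specific share by name, profile, and virtual host.
    return f"ibrix_httpshare -d {share} -c {profile} -t {vhost}"

print(delete_all_shares_cmd("ifs1"))
print(delete_share_cmd("share1", "profile1", "vhost1"))
```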
  SNMP agent, 72
  configure SNMP notification, 72
  configure SNMP trapsinks, 73
  define MIB views, 74
  delete SNMP configuration elements, 75
  enable or disable email notification, 71
  list email notification settings, 72
  list SNMP configuration, 75
  monitor, 94
  MSA array systems, 75
  remove, 95
  types, 70
  view, 94
exporting, NFS, 154

F
failover
  automated, 50
  configure automated failover manually, 62
  crash capture, 68
  fail back a node, 64
  manual, 64
  NIC, 49
  server, 55
  troubleshooting, 141
Federal Communications Commission notice, 196
file serving node, recover, 144
file serving nodes
  fail back, 64
  failover manually, 64
  health checks, 96
  maintain consistency with configuration database, 142
  migrate segments, 115
  monitor status, 93
  operational states, 94
  power management, 109
  prefer a user network interface, 124
  remove from cluster, 120
  rolling reboot, 109
  run health check, 142
  start or stop processes, 110
  statistics, 98
  troubleshooting, 140
  tune, 110

211

  view process status, 110
file system
  migrate segments, 115
firewall configuration, 34
firmware, upgrade, 129
Fusion Manager
  agile, 53
  back up configuration, 77
  failover, 53

G
grounding, methods, 192
GUI
  add users, 32
  change password, 33
  customize, 31
  Details panel, 31
  Navigator, 31
  open, 29
  view events, 94

H
hardware, power off, 108
hazardous conditions, symbols on equipment, 193
HBAs
  display information, 66
  monitor for high availability, 64
hea
The format is:
ibrix_snmpagent -u -v 3 [-e engineId] [-p PORT] [-r READCOMMUNITY] [-w WRITECOMMUNITY] [-t SYSCONTACT] [-n SYSNAME] [-o SYSLOCATION] [-y {yes|no}] [-z {yes|no}] [-c {yes|no}] [-s {on|off}]

Configuring trapsink settings

A trapsink is the host destination where agents send traps, which are asynchronous notifications sent by the agent to the management station. A trapsink is specified either by name or IP address. StoreAll software supports multiple trapsinks; you can define any number of trapsinks of any SNMP version, but you can define only one trapsink per host, regardless of the version.

At a minimum, trapsink configuration requires a destination host and SNMP version. All other parameters are optional, and many assume the default value if no value is specified.

The format for creating a v1/v2 trapsink is:
ibrix_snmptrap -c -h HOSTNAME -v {1|2} [-p PORT] [-m COMMUNITY] [-s {on|off}]

If a port is not specified, the command defaults to port 162. If a community is not specified, the command defaults to the community name public. The -s option toggles agent trap transmission on and off; the default is on. For example, to create a v2 trapsink with a new community name, enter:

ibrix_snmptrap -c -h lab13-116 -v 2 -m private

For a v3 trapsink, additional options define security settings. USERNAME is a v3 user defined on the trapsink host and is required. The security level associated with the trap message depends on which passwords ar
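The defaults described above (port 162, community public, trap transmission on) can be modeled with a short illustrative sketch that fills in the omitted parameters. This is not product code, only a model of the documented defaults:

```python
# Illustrative model of the trapsink defaults described above: when a
# port or community is not given, port 162 and the community name
# "public" apply, and trap transmission defaults to on.

def trapsink_command(host: str, version: int, port: int = 162,
                     community: str = "public",
                     send_traps: bool = True) -> str:
    return (f"ibrix_snmptrap -c -h {host} -v {version} "
            f"-p {port} -m {community} -s {'on' if send_traps else 'off'}")

# Only the host and SNMP version are required; everything else falls
# back to the documented defaults.
print(trapsink_command("lab13-116", 2))
```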
The standby pairing is either built in by the HBA vendor or implemented by software.

To delete standby-paired HBA ports from the configuration database, enter the following command:
ibrix_hba -b -U -P WWPN1:WWPN2 -h HOSTNAME

For example, to delete the pairing of ports 20.00.12.34.56.78.9a.bc and 42.00.12.34.56.78.9a.bc on node s1.hp.com:

ibrix_hba -b -U -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com

Deleting HBAs from the configuration database

Before switching an HBA to a different machine, delete the HBA from the configuration database:
ibrix_hba -d -h HOSTNAME -w WWNN

Displaying HBA information

Use the following command to view information about the HBAs in the cluster. To view information for all hosts, omit the -h HOSTLIST argument:
ibrix_hba -l [-h HOSTLIST]

The output includes the following fields:

Field           | Description
Host            | Server on which the HBA is installed
Node WWN        | This HBA's WWNN
Port WWN        | This HBA's WWPN
Port State      | Operational state of the port
Backup Port WWN | WWPN of the standby port for this port (standby-paired HBAs only)
Monitoring      | Whether HBA monitoring is enabled for this port

Checking the High Availability configuration

Use the ibrix_haconfig command to determine whether High Availability features have been configured for specific file serving nodes. The command checks for the following features and provides either a summary or a detailed report of the results.
Viewing software version numbers ... 140
Troubleshooting specific issues ... 140
  Software services ... 140
  Failover ... 141
  Synchronizing information on file serving nodes and the configuration database ... 142
  Troubleshooting an Express Query Manual Intervention Failure (MIF) ... 142
15 Recovering a file serving node ... 144
  Obtaining the latest StoreAll software release ... 144
  Performing the recovery ... 144
  Completing the restore on a file serving node ... 149

6 Contents

  The ibrix_auth command fails after a restore ... 150
16 Support and other resources ... 151
  Contacting HP ... 151
  Related information ... 151
17 Documentation feedback ... 153
A Cascading upgrades ... 154
  Upgrading the StoreAll software to the 6.1 release ... 154
  Online upgrades for StoreAll software 6.x to 6.1 ... 155
    Preparing for the upgrade ... 155
    Performing the upgrade ... 155
    After the upgrade ... 156
  Offline upgrades for StoreAll software 5.6.x or 6.x to 6.1 ... 156
    Preparing for the upgrade ... 156
    Performing the upgrade ... 15
a manual backup of the upgraded configuration:
<ibrixhome>/bin/ibrix_fm -B

Verify that all version indicators match for file serving nodes. Run the following command from the active management console:
<ibrixhome>/bin/ibrix_version -l

If there is a version mismatch, run the /ibrix/ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.

Verify the health of the cluster:
<ibrixhome>/bin/ibrix_health -l
The output should show Passed / on.

Troubleshooting upgrade issues

Automatic upgrade fails

Check the upgrade.log file to determine the source of the failure. (The log file is located in the installer directory.) If it is not possible to perform the automatic upgrade, continue with the manual upgrade procedure.

178 Cascading Upgrades

ibrixupgrade hangs

The installation can hang because the RPM database is corrupted. This is caused by inconsistencies in the Red Hat Package Manager.

Rebuild the RPM database using the following commands, and then attempt the installation again. Note that rm is followed by a space and then two underscores, and rpm is followed by a space and then two dashes:
cd /var/lib/rpm
rm __db*
rpm --rebuilddb

On the management console, ibrixupgrade may also hang if the NFS mount points are stale. In this case, c
array management software ... 32
StoreAll client interfaces ... 33
StoreAll software interfaces ... 33
Changing passwords ... 33
Configuring the firewall ... 34
Configuring HP Insight Remote Support on StoreAll systems ... 35
  Configuring the StoreAll cluster for Insight Remote Support ... 37
  Configuring Insight Remote Support for HP SIM 7.1 and IRS ... 39

Contents 3

  Configuring Insight Remote Support for HP SIM 6.3 and IRS ... 42
  Testing the Insight Remote Support configuration ... 45
  Updating the Phone Home configuration ... 45
  Disabling Phone Home ... 46
  Troubleshooting Insight Remote Support ... 46
4 Configuring virtual interfaces for client access ... 48
  Network and VIF guidelines ... 48
  Creating a bonded VIF ... 49
  Configuring backup servers ... 49
  Configuring NIC failover ... 49
  Configuring automated failover ... 50
  Example configurations ... 50
  Specifying VIFs in the client configuration ... 50
  Configuring link state monitoring for iSCSI network interfaces ... 51
5 Configuring failover ... 53
  Agile management consoles ... 53
  Viewing information about Fusion Managers ...
assistance with product installation, contact your HP authorized reseller.

192 Warnings and precautions

Equipment symbols

If the following symbols are located on equipment, hazardous conditions could exist.

WARNING: Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. The enclosed area contains no operator-serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure.

WARNING: Any RJ-45 receptacle marked with these symbols indicates a network interface connection. To reduce the risk of electrical shock, fire, or damage to the equipment, do not plug telephone or telecommunications connectors into this receptacle.

WARNING: Any surface or area of the equipment marked with these symbols indicates the presence of a hot surface or hot component. Contact with this surface could result in injury.

WARNING: Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.

WARNING: Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.

Rack warnings and precautions

Ensure that precautions have been taken to provide for rack stability and safety. It is important to follow these precautions, providing for rack stability and safety, and to protect both personnel and
ανθρώπινη υγεία και το περιβάλλον παραδίδοντας τον άχρηστο εξοπλισμό σας σε εξουσιοδοτημένο σημείο συλλογής για την ανακύκλωση άχρηστου ηλεκτρικού και ηλεκτρονικού εξοπλισμού. Για περισσότερες πληροφορίες, επικοινωνήστε με την υπηρεσία απόρριψης απορριμμάτων της περιοχής σας.

Hungarian recycling notice

A hulladék anyagok megsemmisítése az Európai Unió háztartásaiban

Ez a szimbólum azt jelzi, hogy a készüléket nem szabad a háztartási hulladékkal együtt kidobni. Ehelyett a leselejtezett berendezéseknek az elektromos vagy elektronikus hulladék átvételére kijelölt helyen történő beszolgáltatásával megóvja az emberi egészséget és a környezetet. További információt a helyi köztisztasági vállalattól kaphat.

Recycling notices 203

Italian recycling notice

Smaltimento di apparecchiature usate da parte di utenti privati nell'Unione Europea

Questo simbolo avvisa di non smaltire il prodotto con i normali rifiuti domestici. Rispettare la salute umana e l'ambiente conferendo l'apparecchiatura dismessa a un centro di raccolta designato per il riciclo di apparecchiature elettroniche ed elettriche. Per ulteriori informazioni, rivolgersi al servizio per lo smaltimento dei rifiuti domestici.

Lithuanian recycling notice

Europos Sąjungos namų ūkio vartotojų įrangos atliekų šalinimas

Šis simbolis nurodo, kad gaminio negalima išmesti kartu su kitomis buitinėmis atliekomis. Kad apsaugotumėte žmonių sveikatą ir apl
be performed manually.

Automatic upgrades

The automated upgrade procedure is run as an offline upgrade. When each file serving node is upgraded, all file systems are unmounted from the node and services are stopped. Clients will experience a short interruption to file system access while the node is upgraded.

All file serving nodes and management consoles must be up when you perform the upgrade. If a node or management console is not up, the upgrade script will fail and you will need to use a manual upgrade procedure instead. To determine the status of your cluster nodes, check the dashboard on the GUI.

To upgrade all nodes in the cluster automatically, complete the following steps:

Upgrading the StoreAll software to the 5.5 release 167

1. Check the dashboard on the management console GUI to verify that all nodes are up.
2. Verify that you have an even number of FSNs configured in a couplet-pair high availability architecture by running the following command:
   ibrix_server -l
3. On the current active management console, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.
4. On the current active management console, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirect
  Associating events and trapsinks ... 74
  Defining MIB views ... 74
  Deleting elements of the SNMP configuration ... 75
  Listing SNMP configuration information ... 75
  Event notification for MSA array systems ... 75
7 Configuring system backups ... 77
  Backing up the Fusion Manager configuration ... 77
  Using NDMP backup applications ... 77
    Configuring NDMP parameters on the cluster ... 78

4 Contents

    NDMP process management ... 79
    Viewing or canceling NDMP sessions ... 79
    Starting, stopping, or restarting an NDMP Server ... 79
    Viewing or rescanning tape and media changer devices ... 80
    NDMP events ... 80
8 Creating host groups for StoreAll clients ... 81
  How host groups work ... 81
  Creating a host group tree ... 81
  Adding a StoreAll client to a host group ... 82
  Adding a domain rule to a host group ... 82
  Viewing host groups ... 83
  Deleting host groups ... 83
  Other host group operations ... 83
9 Monitoring cluster operations ... 84
  Monitoring 9300/9320 hardware ... 84
    Monitoring servers ... 84
    Monitoring hardware components ... 88
  Obtaining server details ...
can use the Navigator to drill down in the cluster configuration to add, view, or change cluster objects such as file systems or storage, and to initiate or view tasks such as snapshots or replication. When you select an object, a details page shows a summary for that object. The lower Navigator allows you to view details for the selected object, or to initiate a task. In the following example, we selected Filesystems in the upper Navigator and Mountpoints in the lower Navigator to see details about the mounts for file system ifs1.

[Screenshot: the Filesystems details page, showing the mounted file system ifs1 and, in the lower Navigator, the Mountpoints panel listing hosts evmc4 and evmc48 with RW access and Mounted state.]

NOTE: When you perform an operation on the GUI, a spinning finger is displayed until the operation is complete. However, if you use Windows Remote Desktop to access the GUI, the spinning finger is not displayed.

Customizing the GUI

For most tables in the GUI, you can specify the columns that you want to display and the s
check the dashboard on the GUI or use the ibrix_health command.

Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager:
ssh <server_name>
Repeat this command for each node in the cluster.

Note any custom tuning parameters, such as file system mount options. When the upgrade is complete, you can reapply the parameters.

Ensure that no active tasks are running. Stop any active remote replication, data tiering, or rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again.

The 6.1 release requires that nodes hosting the agile Fusion Manager be registered on the cluster network. Run the following command to verify that nodes hosting the agile Fusion Manager have IP addresses on the cluster network:
ibrix_fm -f
If a node is configured on the user network, see "Node is not registered with the cluster network" (page 22) for a workaround.

Stop all client I/O to the cluster or file systems. On the Linux client, use lsof <mountpoint> to show open files belonging to active processes.

156 Cascading Upgrades

10. Unmount file systems on Linux StoreAll clients:
ibrix_umount -f MOUNTPOINT
11. On all nodes hosting the passive Fusion Manager, place the Fusion Manager into maintenance mode:
<ibrixhome>/bin/ibrix_fm -m maintenance -A
12. On the acti
column should display Up:
<ibrixhome>/bin/ibrix_server -l

Unmount file systems on Linux StoreAll clients:
ibrix_umount -f MOUNTPOINT

Stop the SMB, NFS, and NDMP services on all nodes. Run the following commands on the node hosting the active Fusion Manager:
ibrix_server -s -t cifs -c stop
ibrix_server -s -t nfs -c stop
ibrix_server -s -t ndmp -c stop

If you are using SMB, verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill -9 to stop any likewise services that are still running.

If you are using NFS, verify that all NFS processes are stopped:
ps -ef | grep nfs
If necessary, use the following command to stop NFS services:
/etc/init.d/nfs stop
Use kill -9 to stop any NFS processes that are still running.

If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
lsof <mountpoint>
Use kill -9 to stop any processes that still have open file handles on the file systems.

Unmount each file system manually:
ibrix_umount -f FSNAME
Wait up to 15 minutes for the file systems to unmount.
Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See "File system unmount issues" (page 23).

Performing the upgrade manually

This upgrade method is supported only for upgrades from StoreAll software 6.x to the 6.3 release. Complete the following steps:

1. StoreAll OS v
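The open-handle cleanup described above can be scripted. The following sketch (not a StoreAll tool) extracts the unique PIDs from captured `lsof <mountpoint>` output so they can be reviewed before any kill -9; the sample text is illustrative, not real output from a StoreAll node.

```shell
# Sketch only: pull unique PIDs from captured `lsof <mountpoint>` output.
# Assumes the standard lsof column layout (COMMAND PID USER ...).
pids_from_lsof() {
    # Skip the header line, take the second column (PID), de-duplicate.
    awk 'NR > 1 { print $2 }' | sort -un
}

sample_lsof_output='COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
smbd    2101 root 12u REG  253,0     4096 7712 /mnt/ibfs7/share/a.doc
smbd    2101 root 13u REG  253,0     8192 7713 /mnt/ibfs7/share/b.doc
nfsd    2244 root  5r REG  253,0      512 7801 /mnt/ibfs7/exports/c.log'

# Review the PIDs before resorting to kill -9, as the guide advises.
printf '%s\n' "$sample_lsof_output" | pids_from_lsof
```

In practice you would pipe the real `lsof <mountpoint>` output through the helper instead of the sample text.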
command to modify certain ifconfig options for a network:
ibrix_nic -c -n IFNAME -h HOSTNAME [-I IPADDR] [-M NETMASK] [-B BCASTADDR] [-T MTU]
For example, to set netmask 255.255.0.0 and broadcast address 10.0.0.4 for interface eth3 on file serving node s4.hp.com:
ibrix_nic -c -n eth3 -h s4.hp.com -M 255.255.0.0 -B 10.0.0.4

Preferring network interfaces

After creating a user network interface for file serving nodes or StoreAll clients, you will need to prefer the interface for those nodes and clients. (It is not necessary to prefer a network interface for NFS or SMB clients, because they can select the correct user network interface at mount time.)

A network interface preference is executed immediately on file serving nodes. For StoreAll clients, the preference intention is stored on the Fusion Manager. When StoreAll software services start on a client, the client queries the Fusion Manager for the network interface that has been preferred for it and then begins to use that interface. If the services are already running on StoreAll clients

Maintaining networks 123

when you prefer a network interface, you can force clients to query the Fusion Manager by executing the command ibrix_lwhost --a on the client or by rebooting the client.

Preferring a network interface for a file serving node or Linux StoreAll client

The first command prefers a network interface for a file serving node; the second command prefers a network interface for a clien
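A typo in the -M or -B value above would be applied to the interface as given, so it can be worth validating the dotted-quad format first. This helper is hypothetical (not part of StoreAll) and only checks the four-octet shape, not whether the value is a sensible netmask.

```shell
# Hypothetical helper: sanity-check a dotted-quad value (e.g. the -M netmask
# or -B broadcast address) before passing it to ibrix_nic -c.
valid_dotted_quad() {
    local ip="$1" octet
    local IFS='.'
    set -- $ip                       # split on dots
    [ "$#" -eq 4 ] || return 1       # must have exactly four parts
    for octet in "$@"; do
        case "$octet" in
            ''|*[!0-9]*) return 1 ;; # empty or non-numeric part
        esac
        [ "$octet" -le 255 ] || return 1
    done
}

valid_dotted_quad 255.255.0.0   && echo "netmask ok"
valid_dotted_quad 10.0.0.4      && echo "broadcast ok"
valid_dotted_quad 255.255.300.0 || echo "rejected"
```

The values that pass the check would then be handed to ibrix_nic -c as shown in the example above.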
conversions without actually performing the upgrade. When using the utility, you should be aware of the following:

• The file system must be unmounted.
• Segments marked as BAD are not upgraded.
• The upgrade takes place in parallel across all file serving nodes owning segments in the file system, with at least one thread running on each node. For a system with multiple controllers, the utility will run a thread for each controller if possible.

Upgrading the StoreAll software to the 6.1 release 159

• Files up to 3.8 TB in size can be upgraded. To enable snapshots on larger files, they must be migrated after the upgrade is complete; see "Migrating large files" (page 160).
• In general, the upgrade takes approximately three hours per TB of data. The configuration of the system can affect this number.

Running the utility

Typically, the utility is run as follows to upgrade a file system:
upgrade60.sh <file system>
For example, the following command performs a full upgrade on file system fs1:
upgrade60.sh fs1

Progress and status reports

The utility writes log files to the directory /usr/local/ibrix/log/upgrade60 on each node containing segments from the file system being upgraded. Each node contains the log files for its segments.

Log files are named <host>_<segment>_<date>.upgrade.log. For example, the following log file is for segment ilv2 on host ib4-2:

ib4-2_ilv2_2012-03-27_11-01.upgrade.log

Restarting th
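The "three hours per TB" figure above lends itself to a quick planning estimate, and the log-file naming convention can be composed the same way. Both helpers below are illustrative sketches, not StoreAll utilities, and the three-hour rate is only the approximation quoted in this section.

```shell
# Back-of-the-envelope sketch: estimate upgrade60.sh run time from the
# "approximately three hours per TB" figure, and build the expected
# log-file name for a segment.
HOURS_PER_TB=3

estimate_hours() {          # $1 = data size in whole TB
    echo $(( $1 * HOURS_PER_TB ))
}

upgrade_log_name() {        # $1 = host, $2 = segment, $3 = date stamp
    echo "${1}_${2}_${3}.upgrade.log"
}

estimate_hours 4                                   # a 4 TB file system
upgrade_log_name ib4-2 ilv2 2012-03-27_11-01       # the example log above
```

Remember that, as the section notes, system configuration can move the real figure well away from this estimate.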
database.

3. Enable Express Query for the file system by entering the following command:
ibrix_fs -T -E -f <FSNAME>

NOTE: The moment Express Query is enabled, database repopulation starts for the file system specified by <FSNAME>.

4. If there are any backups of the custom metadata made with the MDExport tool, re-import them with MDImport as described in the CLI Reference Guide.

NOTE: If no such backup exists, the custom metadata must be manually created again.

5. Wait for the resynchronizer to complete by entering the following command:
ibrix_archiving -l
Repeat this command until it displays the OK status for the file system.

6. If none of the above worked, contact HP.

Troubleshooting an Express Query Manual Intervention Failure (MIF) 143

15 Recovering a file serving node

Use the following procedure to recover a failed file serving node. You will need to create a QuickRestore DVD or USB key, as described later, and then install it on the affected node. This step installs the operating system and StoreAll software on the node and launches a configuration wizard.

CAUTION: The Quick Restore DVD or USB key restores the file serving node to its original factory state. This is a destructive process that completely erases all of the data on local hard drives.

Obtaining the latest StoreAll software release

StoreAll OS version 6.3 is only available through the registered release process. To obtain the ISO image, cont
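The "repeat until OK" instruction in step 5 above is a simple polling loop. The sketch below shows the control flow with a stub standing in for `ibrix_archiving -l` (the stub and its Resync/OK strings are invented for illustration; the real command's output format may differ).

```shell
# Sketch of the step-5 polling loop, with a stub in place of
# `ibrix_archiving -l` so the control flow can run anywhere.
attempts=0
while :; do
    attempts=$((attempts + 1))
    # Stub: report "Resync" twice, then "OK" (a real loop would run
    # `ibrix_archiving -l` here and sleep between checks).
    status=$( [ "$attempts" -ge 3 ] && echo OK || echo Resync )
    if [ "$status" = "OK" ]; then
        break
    fi
done
echo "resynchronizer finished after $attempts checks"
```

A production loop would also cap the number of attempts and escalate (step 6: contact HP) if OK never appears.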
Romanian recycling notice....................................................205
Slovak recycling notice......................................................205
Spanish recycling notice.....................................................205
Swedish recycling notice.....................................................205
Battery replacement notices..................................................206
  Dutch battery notice.......................................................206
  French battery notice......................................................206
  German battery notice......................................................207
  Italian battery notice.....................................................207
  Japanese battery notice....................................................208
  Spanish battery notice.....................................................208
Glossary.....................................................................209
Index........................................................................211

Contents 9

1 Upgrading the StoreAll software to the 6.3 release

This chapter describes how to upgrade to the 6.3 StoreAll software release.

IMPORTANT: Print the following table and check off each step as you complete it.

NOTE: Upgrades from version 6.0.x: CIFS share permissions are granted on a global basis in v6.0.x. When upgrading from v6.0.x, confirm that the correct share permissions are in place.

Table 1 Prerequisites che
execute ibrix_dbck -o to resynchronize the server's information with the configuration database. For information on ibrix_health, see "Monitoring cluster health" (page 95).

NOTE: The ibrix_dbck command should be used only under the direction of HP Support.

To run a health check on a file serving node, use the following command:
ibrix_health -i -h HOSTLIST

If the last line of the output reports Passed, the file system information on the file serving node and Fusion Manager is consistent.

To repair file serving node information, use the following command:
ibrix_dbck -o -f FSNAME -h HOSTLIST
To repair information on all file serving nodes, omit the -h HOSTLIST argument.

Troubleshooting an Express Query Manual Intervention Failure (MIF)

An Express Query Manual Intervention Failure (MIF) is a critical error that occurred during Express Query execution. These are failures Express Query cannot recover from automatically. After a MIF occurs, the specific file system is logically removed from Express Query, and manual intervention is required to perform the recovery. Although these errors inhibit the normal functionality of Express Query, they are typically due to another, unrelated event in the cluster or the file system. Therefore, most of the work to recover from an Express Query MIF is to check the health of the cluster and the file system and take corrective actions to fix the issues caused by these events. Once the cluster a
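The "last line reports Passed" check above is easy to automate when sweeping many nodes. This sketch operates on captured command output; the sample report text is invented for illustration and only the final Passed line matters.

```shell
# Sketch: did the last line of captured `ibrix_health -i` output say Passed?
health_passed() {
    [ "$(tail -n 1)" = "Passed" ]
}

sample_health_report='Checking file serving node s1.hp.com
Segment ownership: consistent
Passed'

if printf '%s\n' "$sample_health_report" | health_passed; then
    echo "node is consistent"
fi
```

In a loop over HOSTLIST, you would pipe each node's real `ibrix_health -i -h <host>` output through the helper instead of the sample text.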
[root@host2 archive]# tar -xvf addOnCollection.tgz
In this instance, addOnCollection.tgz is the tar file containing the output of the add-on script.
The tar command displays the following:
host2_addOnCollection_2012-12-20_12-38-36.tgz

4. Individual node files in the tar format are provided as <hostname>_<collection_name>_<time_date_stamp>.tgz.
Extract the <hostname>_<collection_name>_<time_date_stamp>.tgz tar file by entering the following command:
[root@host2 archive]# tar -xvf host2_addOnCollection_2012-12-20_12-38-36.tgz
In this instance, host2_addOnCollection_2012-12-20_12-38-36.tgz is the individual node file (<hostname>_<collection_name>_<time_date_stamp>.tgz).

5. A directory with the host name is extracted. The output of the add-on script is found in the /<hostname>/logs/add_on_script/local/ibrixcollect/ibrix_collect_additional_data directory.
Find the directory containing the host name by entering the ls -l command, as shown in the following example:
[root@host2 archive]# ls -l
The following is the output of the command:
total 5636
-rw-r--r-- 1 root root 2021895 Dec 20 12:41 addOnCollection.tgz
drwxr-xr-x 6 root root    4096 Dec 20 12:41 host2
-rw-r--r-- 1 root root 2156388 Dec 20 12:41 host2_addOnCollection_2012-12-20_12-38-36.tgz
In this example, host2 is the directory with the host name.

6. Go to the /<hostname>/logs/add_on_script/local/ibrixcollect/ibrix_collec
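The two-level extraction in steps 3 and 4 above can be rehearsed with a throwaway archive. The names below only mirror the guide's <hostname>_<collection_name> pattern; they are made up, and a temp directory stands in for the real archive location.

```shell
# Sketch of the two-level extraction: an outer collection archive that
# contains a per-node .tgz, built and unpacked in a temp directory.
workdir=$(mktemp -d)
cd "$workdir"

# Build an inner per-node archive, then wrap it in an outer archive,
# mimicking addOnCollection.tgz containing host2_<...>.tgz.
mkdir -p host2/logs
echo "demo log" > host2/logs/example.log
tar -czf host2_demo.tgz host2
rm -rf host2
tar -czf addOnCollection.tgz host2_demo.tgz
rm host2_demo.tgz

# Step 1: extract the outer archive; step 2: extract the per-node archive.
tar -xvf addOnCollection.tgz
tar -xvf host2_demo.tgz
cat host2/logs/example.log      # the collected file is back
# (remove "$workdir" when done)
```

GNU tar auto-detects the gzip compression on extraction, which is why -xvf suffices without -z.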
host_tune -q > /local/ibrix_host_tune_q.txt

11. Ensure that the ibrix local user account exists and that it has the same UID number on all the servers in the cluster. If they do not have the same UID number, create the account and change the UIDs as needed to make them the same on all the servers. Similarly, ensure that the ibrix-user local user group exists and has the same GID number on all servers. Enter the following commands on each node:
grep ibrix /etc/passwd
grep ibrix-user /etc/group

12. Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the health of each server by either using the dashboard on the Management Console or entering the ibrix_health -i -h <hostname> command for each node in the cluster. At the top of the output, look for "PASSED".

13. If you have one or more Express Query enabled file systems, each one needs to be manually upgraded as described in "Upgrading pre-6.3 Express Query enabled file systems" (page 19).

IMPORTANT: Run the steps in "Required steps before the StoreAll Upgrade" (page 19) before the upgrade. This section provides steps for saving your custom metadata and audit log. After you upgrade the StoreAll software, run the steps in "Required steps after the StoreAll Upgrade" (page 20). These post-upgrade steps are required for you to preserve your custom metadata and audit log data.

Online upgrades for StoreAll software

Online upgrades are
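The UID comparison in step 11 above amounts to extracting field 3 of each node's /etc/passwd entry and checking the values agree. This sketch compares sample entries gathered from two nodes; the entries and the 501 value are invented for illustration.

```shell
# Hypothetical check (not a StoreAll command): compare the ibrix account UID
# from /etc/passwd lines gathered on each server. Sample entries only.
passwd_node1='ibrix:x:501:501::/home/ibrix:/bin/bash'
passwd_node2='ibrix:x:501:501::/home/ibrix:/bin/bash'

uid_of() { printf '%s' "$1" | cut -d: -f3; }    # field 3 of a passwd entry

if [ "$(uid_of "$passwd_node1")" = "$(uid_of "$passwd_node2")" ]; then
    echo "UIDs match"
else
    echo "UIDs differ - fix before upgrading"
fi
```

The same field-3/field-4 extraction applies to the group GID check against /etc/group (GID is field 3 there as well).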
ibrix_fm -i
b. Perform a manual FM failover on the local node by entering the following command from the active Fusion Manager:
ibrix_fm -m nofmfailover server1
The FM failover will take approximately one minute.
c. If server1 is not the active Fusion Manager, proceed to step e to fail over server1 to server2.
d. To see which node is now the active Fusion Manager, enter the following command:
ibrix_fm -i
e. Move to your new active Fusion Manager node, and then enter the following command to perform the failover:
ibrix_server -f -p -h server1

NOTE: The -p switch in the failover operation lets you reboot the affected node and in turn flash the following components:
• BIOS
• NIC
• PCIe_NIC
• Power_Mgmt_Ctlr
• SERVER_HDD
• Smart_Array_Ctlr

f. Once the FSN boots up, verify that the software reports the FSN as Up, FailedOver by entering the following command:
ibrix_server -l
g. Confirm that the recommended flash was completed successfully by entering the following command:
hpsp_fmt -fr server1 -o /tmp/fwrecommend.out
Verify that the Proposed Action column requires no more actions, and that the Active FW Version and Qualified FW Version columns display the same values.
h. Fail back your updated server by entering the following command:
ibrix_server -f -U -h server1
i. The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode by using the following command:
ibrix_fm -m passive

NOTE: A
[GUI screenshot: The Tape Devices panel (with Active Sessions, Session History, Tape Devices, and License items in the Navigator) lists HP Ultrium 3 tape drives on hosts lmvm2 and lmvm1, each with SCSI serial numbers and device paths such as /dev/nst1, /dev/sg2, /dev/nst2, /dev/sg3, and /dev/nst3.]

If you add a tape or media changer device to the SAN, click Rescan Device to update the list. If you remove a device and want to delete it from the list, reboot all of the servers to which the device is attached.

To view tape and media changer devices from the CLI, use the following command:
ibrix_tape -l
To rescan for devices, use the following command:
ibrix_tape -r

NDMP events

An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the GUI and can be viewed with the ibrix_event command.

INFO events: Identify when major NDMP operations start and finish, and also report progress. For example:
7012: Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2011
7013: Total Bytes = 38274665923, Average throughput = 236600391 bytes/sec

WARN events: Indicate an issue with NDMP access, the environment, or NDMP operations. Be sure to review these events and take any necessary corrective actions. Following are some examples:
0000: Unauthorized NDMP Client 16.39.40.201 trying to connect
4002: User "joe" md5 mode login
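The throughput figure in the 7013 INFO event above is simply total bytes divided by elapsed seconds, so dividing the byte count by the reported throughput recovers the backup duration. The sketch below does that integer arithmetic with the sample event's figures.

```shell
# Arithmetic behind the 7013 event: average throughput = bytes / seconds,
# so duration = bytes / (bytes per second), in whole seconds.
duration_secs() {   # $1 = total bytes, $2 = average throughput in bytes/sec
    echo $(( $1 / $2 ))
}

duration_secs 38274665923 236600391   # prints 161 (about 161 s for the sample backup)
```

This kind of cross-check is useful when comparing backup windows across runs from the event log alone.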
[GUI screenshot: Vendor storage details showing Monitoring Host ib85s3, its IP addresses, Username manage, and the Proxy IP field.]

The Management Console provides a wide range of information regarding vendor storage.

Drill down into the following components in the lower Navigator tree to obtain additional details:
• Servers. The Servers panel lists the host names for the attached storage.
• LUNs. The LUNs panel provides information about the LUNs in a storage cluster. See "Managing LUNs in a storage cluster" (page 93) for more information.

Managing LUNs in a storage cluster

The LUNs panel provides information about the LUNs in a storage cluster. The following information is provided in the LUNs panel:
• LUN ID
• Physical Volume Name
• Physical Volume UUID

In the following image, the LUNs panel displays the LUNs for a storage cluster.

[GUI screenshot: The LUNs panel lists volume names (fs2_seg1 and vd01_v001 through vd08_v001) with their LUN IDs, physical volume names d1 through d9, and physical volume UUIDs.]

Mo
in modo improprio la batteria.
• Non cortocircuitare i contatti esterni o gettare la batteria in acqua o nel fuoco.
• Sostituire la batteria solo con i ricambi HP previsti a questo scopo.

Le batterie e gli accumulatori non devono essere smaltiti insieme ai rifiuti domestici. Per procedere al riciclaggio o al corretto smaltimento, utilizzare il sistema di raccolta pubblico dei rifiuti o restituirli a HP, ai Partner Ufficiali HP o ai relativi rappresentanti.

Per ulteriori informazioni sulla sostituzione e sullo smaltimento delle batterie, contattare un Partner Ufficiale o un Centro di assistenza autorizzato.

Battery replacement notices 207

Japanese battery notice

[Japanese-language battery notice; its content parallels the other battery notices in this section.]

Spanish battery notice

Declaración sobre las baterías

ADVERTENCIA: Este dispositivo podría contener una batería.
• No intente recargar las baterías si las extrae.
• Evite el contacto de las baterías con agua y no las exponga a temperaturas superiores a los 60 °C (140 °F).
• No utilice incorrectamente, ni desmonte, aplaste o pinche las baterías.
• No cortocircuite los contactos externos ni la arroj
is now the active Fusion Manager, enter the following command:
ibrix_fm -i
The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode using the following command:
ibrix_fm -m passive

NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode.

Configuring High Availability on the cluster

StoreAll High Availability provides monitoring for servers, NICs, and HBAs.

Server HA: Servers are configured in backup pairs, with each server in the pair acting as a backup for the other server. The servers in the backup pair must see the same storage. When a server is failed over, the ownership of its segments and its Fusion Manager services (if the server is hosting the active FM) move to the backup server.

NIC HA: When server HA is enabled, NIC HA provides additional triggers that cause a server to fail over to its backup server. For example, you can create a user VIF such as bond0:2 to service SMB requests on a server and then designate the backup server as a standby NIC for bond0:2. If an issue occurs with bond0:2 on a server, the server, including its segment ownership and FM services, will fail over to the backup server, and that server will now handle SMB requests going through bond0:2.

You can also fail over just the NIC to its standby NIC on the backup server.

HBA monitoring: This method protects server access to storage through an HBA. Most servers ship with an HBA that has two co
label with the form bond0.<VLAN_id>. For example, if the first bond created by StoreAll has a VLAN tag of 30, it will be labeled bond0.30.

It is also possible to add a VIF on top of an interface that has an associated VLAN tag. In this case, the device label of the interface takes the form bond0.<VLAN_id>:<VIF_label>. For example, if a VIF with a label of 2 is added for the bond0.30 interface, the new interface device label will be bond0.30:2.

The following commands show configuring a bonded VIF and backup nodes for a unified network topology using the 10.10.x.y subnet. VLAN tagging is configured for hosts ib142-129 and ib142-131 on the 51 subnet.

Add the bond0.51 interface with the VLAN tag:
# ibrix_nic -a -n bond0.51 -h ib142-129
# ibrix_nic -a -n bond0.51 -h ib142-131

Assign an IP address to the bond0.51 VIFs on each node:
# ibrix_nic -c -n bond0.51 -h ib142-129 -I 192.168.51.101 -M 255.255.255.0
# ibrix_nic -c -n bond0.51 -h ib142-131 -I 192.168.51.102 -M 255.255.255.0

Add the bond0.51:2 VIF on top of the interface:
# ibrix_nic -a -n bond0.51:2 -h ib142-131
# ibrix_nic -a -n bond0.51:2 -h ib142-129

Configure backup nodes:
# ibrix_nic -b -H ib142-129/bond0.51,ib142-131/bond0.51:2
# ibrix_nic -b -H ib142-131/bond0.51,ib142-129/bond0.51:2

Create the user FM VIF:
ibrix_fm -c 192.168.51.125 -d bond0.51:1 -n 255.255.255.0 -v user

For more information about VLAN tagging, see the HP StoreAll Storage Ne
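The device-label convention described above composes mechanically from the VLAN tag and the VIF label. These two throwaway helpers (not StoreAll commands) just show that composition for the examples in this section.

```shell
# Sketch: derive the device labels described above, assuming a bond0 base -
# bond0.<VLAN_id> for a tagged bond, bond0.<VLAN_id>:<VIF_label> for a VIF
# layered on top of it.
vlan_label() { echo "bond0.$1"; }      # $1 = VLAN tag
vif_label()  { echo "bond0.$1:$2"; }   # $1 = VLAN tag, $2 = VIF label

vlan_label 30      # prints bond0.30, the section's first example
vif_label 30 2     # prints bond0.30:2, VIF 2 on the tagged bond
vif_label 51 2     # prints bond0.51:2, as configured for the 51 subnet
```

Generating the labels this way keeps the -n arguments to ibrix_nic consistent when scripting many hosts.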
log for details.

If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information.

To retry the copy of configuration, use the following command:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s

Offline upgrade fails because iLO firmware is out of date

If the iLO2 firmware is out of date on a node, the auto_ibrixupgrade script will fail. The /usr/local/ibrix/setup/logs/auto_ibrixupgrade.log reports the failure and describes how to update the firmware.

After updating the firmware, run the following command on the node to complete the StoreAll software upgrade:
[root@ibrix ibrix]# ./ibrixupgrade -f

Node is not registered with the cluster network

Nodes hosting the agile Fusion Manager must be registered with the cluster network. If the ibrix_fm command reports that the IP address for a node is on the user network, you will need to reassign the IP address to the cluster network. For example, the following commands report that node ib51-101, which is hosting the active Fusion Manager, has an IP address on the user network (192.168.51.101) instead of the cluster network:

[root@ib51-101 ibrix]# ibrix_fm -i
FusionServer: ib51-101 (active, quorum is running)

[root@ib51-101 ibrix]# ibrix_fm -f
NAME       IP ADDRESS
ib51-101   192.168.51.101
ib51-102   10.10.51.102

If th
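Spotting misregistered nodes in `ibrix_fm -f` output, as in the example above, can be scripted as a prefix check on the IP ADDRESS column. The sketch operates on captured text; the 10.10. cluster prefix and the sample listing are taken from this section's example.

```shell
# Sketch: flag nodes in captured `ibrix_fm -f` output whose registered
# address does not start with the cluster-network prefix.
flag_user_network_nodes() {   # $1 = cluster-network prefix, e.g. "10.10."
    awk -v prefix="$1" 'NR > 1 && index($2, prefix) != 1 {
        print $1 " is registered on " $2 " (not the cluster network)"
    }'
}

sample_fm_output='NAME      IP ADDRESS
ib51-101  192.168.51.101
ib51-102  10.10.51.102'

printf '%s\n' "$sample_fm_output" | flag_user_network_nodes "10.10."
```

Any node the helper prints would need its address reassigned to the cluster network as described above.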
management console node. Move the ibrix directory used in the previous release to ibrix.old. Then expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

Change to the installer directory if necessary and run the upgrade:
./ibrixupgrade -f
The installer upgrades both the management console software and the file serving node software on the node.

On the node that was just upgraded and has its management console in maintenance mode, move the management console back to passive mode:
<ibrixhome>/bin/ibrix_fm -m passive
The node now resumes its normal backup operation for the active management console.

Upgrading remaining file serving nodes

Complete the following steps on the remaining file serving nodes:

1. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.
2. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
3. Change to t
of

On the node hosting the active management console, place the management console into maintenance mode. This step fails over the active management console role to the node currently hosting the passive agile management console:
<ibrixhome>/bin/ibrix_fm -m maintenance -A

Wait approximately 60 seconds for the failover to complete, and then run the following command on the node that was the target for the failover:
<ibrixhome>/bin/ibrix_fm -i
The command should report that the agile management console is now Active on this node.

From the node on which you failed over the active management console in step 4, change the status of the management console from maintenance to passive:
<ibrixhome>/bin/ibrix_fm -m passive

On the node hosting the active management console, manually fail over the node now hosting the passive management console:
<ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME

Wait a few minutes for the node to reboot, and then run the following command to verify that the failover was successful. The output should report Up, FailedOver:
<ibrixhome>/bin/ibrix_server -l

On the node hosting the active management console, place the management console into maintenance mode:
<ibrixhome>/bin/ibrix_fm -m maintenance -A
This step fails back the active management console role to the node currently hosting the passive agile management console (the node that originally was active).

Wait approximately 90
one or more users. All users must belong to a group. Groups and users exist only in SNMPv3. Groups are assigned a security level, which enforces use of authentication and privacy, and specific read and write views to identify which managed objects group members can read and write.

The command to create a group assigns its SNMPv3 security level, read and write views, and context name. A context is a collection of managed objects that can be accessed by an SNMP entity. A related option (-m) determines how the context is matched. The format follows:

Configuring cluster event notification

ibrix_snmpgroup -c -g GROUPNAME [-s {noAuthNoPriv|authNoPriv|authPriv}] [-r READVIEW] [-w WRITEVIEW]

For example, to create the group group2 to require authorization, no encryption, and read access to the hp view, enter:
ibrix_snmpgroup -c -g group2 -s authNoPriv -r hp

The format to create a user and add that user to a group follows:
ibrix_snmpuser -c -n USERNAME -g GROUPNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD]

Authentication and privacy settings are optional. An authentication password is required if the group has a security level of either authNoPriv or authPriv. The privacy password is required if the group has a security level of authPriv. If unspecified, MD5 is used as the authentication algorithm and DES as the privacy algorithm, with no passwords assigned.

For example, to create user3, add that user to group2
other. When HA is enabled, auto failover will occur if either server becomes unavailable. iLO IP addresses are required to be able to automatically power a server up or down.

[GUI dialog: Server HA Analysis - Selected Server: ib69s1 (current designated backup: ib69s2); Selected Server System Type: G5 X9320 6G. NOTE: Servers ib69s1 and ib69s2 are verified as a couplet pair seeing the same storage. Server HA Pairing: Server ib69s1 (iLO IP 192.168.69.101) paired with Server ib69s2 (iLO IP 192.168.69.102), with a checkbox to enable HA Monitoring and Auto Failover for both servers.]

Use the NIC HA Setup dialog box to configure NICs that will be used for data services such as SMB or NFS. You can also designate NIC HA pairs on the server and its backup and enable monitoring of these NICs.

[GUI dialog: NIC HA Setup - High availability for a physical or virtual NIC, typically servicing file share data, works by assigning a standby NIC on the backup server in a server pair. When server HA is enabled, a monitored NIC will cause auto failover if the NIC becomes unavailable. For each server (ib69s1 and ib69s2) the dialog lists active user NICs, with Add NIC and Remove NIC buttons; in this example no active physical or virtual user NICs are found, and the dialog prompts to add one via the Add NIC button.]
report)
weekly    Weekly report            14 days (age.report.weekly)
other     User-generated report     7 days (age.report.other)

For example, for daily reports, the default of 7 days saves seven reports. To save only three daily reports, set the age.report.daily parameter to 3 days:
age.report.daily 3d

NOTE: You do not need to restart processes after changing the configuration. The updated configuration is collected automatically.

Fusion Manager failover and the Statistics tool configuration

In a High Availability environment, the Statistics tool fails over automatically when the Fusion Manager fails over. You do not need to take any steps to perform the failover. The statistics configuration changes automatically as the Fusion Manager configuration changes.

The following actions occur after a successful failover:
• If Statstool processes were running before the failover, they are restarted. If the processes were not running, they are not restarted.
• The Statstool passive management console is installed on the StoreAll Fusion Manager in maintenance mode.
• Setrsync is run automatically on all cluster nodes from the current active Fusion Manager.
• Loadfm is run automatically to present all file system data in the cluster to the active Fusion Manager.
• The stored cluster-level database generated before the Fusion Manager failover is moved to the current active Fusion Manager, allowing you to request reports for the specified range if pre-generated reports are not avai
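The retention values above use an "<n>d" (days) suffix, so a script auditing the configuration needs to strip that suffix to compare numbers. This tiny helper is hypothetical; the parameter syntax is as shown in the example and may vary by release.

```shell
# Hypothetical helper: pull the day count out of an "<n>d" retention value
# such as the ones shown in the retention table above.
retention_days() { printf '%s' "$1" | sed 's/d$//'; }

retention_days 3d     # prints 3  (the "save only three daily reports" example)
retention_days 14d    # prints 14 (the weekly default)
```

Comparing the numeric values makes it easy to confirm that, say, daily retention was actually lowered from the 7-day default.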
-s POWERSOURCE -h HOSTNAME

Delete a power source

To conserve storage, delete power sources that are no longer in use. If you are deleting multiple power sources, use commas to separate them:
ibrix_powersrc -d -h POWERSRCLIST

Delete NIC monitoring

To delete NIC monitoring, use the following command:
ibrix_nic -m -h MONHOST -D DESTHOST/IFNAME

Delete NIC standbys

To delete a standby for a NIC, use the following command:
ibrix_nic -b -U HOSTNAME1/IFNAME1

For example, to delete the standby that was assigned to interface eth2 on file serving node s1.hp.com:
ibrix_nic -b -U s1.hp.com/eth2

Configuring High Availability on the cluster 63

Turn off automated failover:
ibrix_server -m -U [-h SERVERNAME]
To specify a single file serving node, include the -h SERVERNAME option.

Failing a server over manually

The server to be failed over must belong to a backup pair. The server can be powered down or remain up during the procedure. You can perform a manual failover at any time, regardless of whether automated failover is in effect. Manual failover does not require the use of a programmable power supply. However, if you have identified a power supply for the server, you can power it down before the failover.

Use the GUI or the CLI to fail over a file serving node:
• On the GUI, select the node on the Servers panel and then click Failover on the Summary panel.
• On the CLI, run ibrix_server -f, specifying the node to be failed over as the
supported only from the StoreAll 6.x release. Upgrades from earlier StoreAll releases must use the appropriate offline upgrade procedure.

When performing an online upgrade, note the following:

• File systems remain mounted and client I/O continues during the upgrade.
• The upgrade process takes approximately 45 minutes, regardless of the number of nodes.
• The total I/O interruption per node IP is four minutes, allowing for a failover time of two minutes and a failback time of two additional minutes.
• Client I/O having a timeout of more than two minutes is supported.

Preparing for the upgrade

To prepare for the upgrade, complete the following steps. Ensure that high availability is enabled on each node in the cluster by running the following command:

ibrix_haconfig -l

If the command displays an Overall HA Configuration Checker Results - PASSED status, high availability is enabled on each node in the cluster. If the command returns Overall

12   Upgrading the StoreAll software to the 6.3 release

HA Configuration Checker Results - FAILED, complete the following list items based on the result returned for each component:

1. Make sure you have completed all steps in the upgrade checklist (Table 1, page 10).
2. If Failed was displayed for the HA Configuration or Auto Failover columns or both, perform the steps described in the section "Configuring High Availability on the cluster" in the administrator guide for your current release.
to run the add-on script on StoreAll 6.3, the script will not run.

Place the add-on script in the following directory:

/usr/local/ibrix/ibrixcollect/ibrix_collect_add_on_scripts

Collecting information for HP Support with the IbrixCollect 137

The following example shows several add-on scripts stored in the ibrix_collect_add_on_scripts directory:

[root@host2 ~]# ls -l /usr/local/ibrix/ibrixcollect/ibrix_collect_add_on_scripts
total 8
-rwxr-xr-x 1 root root 93 Dec  7 13:39 60_addOn.sh
-rwxrwxrwx 1 root root 48 Dec 20 09:22 63_AddOnTest.sh

Write an add-on shell script that contains a custom command log that needs to be collected in the final StoreAll collection. Only StoreAll and operating system commands are supported in the scripts. These scripts should have appropriate permission to be executed.

IMPORTANT: Make sure the scripts that you are creating do not collect information or logs that are already collected as part of the ibrix_collect command.

Make sure that add-on scripts that collect the custom logs redirect the collected custom logs to the directory local/ibrixcollect/ibrix_collect_additional_data. Only files copied to this location will be included in the generated IbrixCollect tar file. Output of the add-on scripts is available only when the IbrixCollect process is completed and the tar files containing the output are extracted. See "Running an add-on script" (page 138) and then "Viewing the output from
tuning settings.

Use the ibrix_host_tune command to list or change host tuning settings:

• To list default values and valid ranges for all permitted host tunings:

ibrix_host_tune -L

• To tune host parameters on nodes or host groups:

ibrix_host_tune -S {-h HOSTLIST|-g GROUPLIST} -o OPTIONLIST

Contact HP Support to obtain the values for OPTIONLIST. List the options as option=value pairs, separated by commas. To set host tunings on all clients, include the -g clients option.

• To reset host parameters to their default values on nodes or host groups:

ibrix_host_tune -U {-h HOSTLIST|-g GROUPLIST} [-n OPTIONS]

To reset all options on all file serving nodes, host groups, and StoreAll clients, omit the -h HOSTLIST and -n OPTIONS options. To reset host tunings on all clients, include the -g clients option.

Tuning file serving nodes and StoreAll clients 113

The values that are restored depend on the values specified for the -h HOSTLIST command:

• File serving nodes: The default file serving node host tunings are restored.
• StoreAll clients: The host tunings that are in effect for the default clients host group are restored.
• Host groups: The host tunings that are in effect for the parent of the specified host groups are restored.

• To list host tuning settings on file serving nodes, StoreAll clients, and host groups, use the following command. Omit the -h argument to see tunings for all hosts. Omit the -n argument to see all tunings.
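The -S/-o syntax described above can be sketched as a dry run. This is a minimal illustration, not an HP-supplied procedure: the option names, values, and hostnames below are placeholders, and real OPTIONLIST values must come from HP Support before tuning a live cluster.

```shell
# Hedged sketch: compose an ibrix_host_tune invocation from option=value pairs.
# All names here are hypothetical placeholders for illustration only.
OPTIONLIST="opt_a=16,opt_b=512"     # hypothetical option=value pairs from HP Support
HOSTLIST="s1.hp.com,s2.hp.com"      # hypothetical file serving nodes
tune_cmd="ibrix_host_tune -S -h $HOSTLIST -o $OPTIONLIST"
echo "$tune_cmd"                    # dry run: print the command instead of executing it
```

Printing the composed command before running it makes it easy to review the comma-separated pair list against the values HP Support provided.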
up the Fusion Manager configuration

The Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at <ibrixhome>/tmp/fmbackup.zip on that node.

The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available. The passive Fusion Manager then copies the file to <ibrixhome>/tmp/fmbackup.zip on the node on which it is hosted. If a Fusion Manager is in maintenance mode, it will also be notified when a new backup file is created, and will retrieve it from the active Fusion Manager.

You can create an additional copy of the backup file at any time. Run the following command, which creates a fmbackup.zip file in the $IBRIXHOME/log directory:

$IBRIXHOME/bin/db_backup.sh

Once each day, a cron job rotates the $IBRIXHOME/log directory into the $IBRIXHOME/log/daily subdirectory. The cron job also creates a new backup of the Fusion Manager configuration in both $IBRIXHOME/tmp and $IBRIXHOME/log.

To force a backup, use the following command:

ibrix_fm -B

IMPORTANT: You will need the backup file to recover from server failures or to undo unwanted configuration changes. Whenever the cluster configuration changes, be sure to save a copy of fmbackup.zip in a safe, remote location such as a node on another cluster.

Using NDMP backup applications

The NDMP backup feature
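The advice above about keeping an off-cluster copy of fmbackup.zip can be sketched as a small dry run. This is an illustration only: the $IBRIXHOME default of /usr/local/ibrix and the destination host backup-node.example.com are assumptions, not values from this guide.

```shell
# Hedged sketch: stage an off-cluster copy of the Fusion Manager backup.
# $IBRIXHOME default and the destination host are assumptions for illustration.
IBRIXHOME=${IBRIXHOME:-/usr/local/ibrix}
stamp=$(date +%Y%m%d)
src="$IBRIXHOME/log/fmbackup.zip"
# On a StoreAll node you would first force a fresh backup:
#   ibrix_fm -B && "$IBRIXHOME/bin/db_backup.sh"
copy_cmd="scp $src backup-node.example.com:/safe/fmbackup-$stamp.zip"
echo "$copy_cmd"   # dry run: print the copy command rather than executing it
```

Date-stamping the remote copy keeps multiple configuration generations available, so an unwanted change can be rolled back to a known-good snapshot.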
you restored:

ibrix_server -f [-p] -M -N -h SERVERNAME

If you disabled NIC monitoring before using the QuickRestore, re-enable the monitor:

ibrix_nic -m -h MONITORHOST -A DESTHOST/IFNAME

For example:

ibrix_nic -m -h titan16 -A titan15/eth2

Configure Insight Remote Support on the node. See "Configuring HP Insight Remote Support on StoreAll systems" (page 35).

Run ibrix_health -l from the StoreAll management console to verify that no errors are being reported.

Restoring services

When you perform a Quick Restore of a file serving node, the NFS, SMB, FTP, and HTTP export information is not automatically restored to the node. After operations are failed back to the node, the I/O from client systems to the node fails for the NFS, SMB, FTP, and HTTP shares. To avoid this situation, manually restore the NFS, SMB, FTP, and HTTP exports on the node before failing it back.

Restore SMB services. Complete the following steps:

1. If the restored node was previously configured to perform domain authorization, run the following command:

ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P AUTH_PROXY_PASSWORD] -h HOSTNAME

For example:

ibrix_auth -n ibq1.mycompany.com -A Administrator@ibq1.mycompany.com -P password -h ib5-9

If the command fails, check the following:

• Verify that DNS services are running on the node where you ran the ibrix_auth command.
• Verify that you entered a valid domain name with the fu
72. 168 10 1  mi  Session History imvm3 13299 DATA_RESTORE Thu May 27 01 34 59 2010 192 168 10 1        Tape Devices     lt  gt  Support Tickets  EI License                   To see similar information for completed sessions  select NDMP Backup  gt  Session History   View active sessions from the CLI    ibrix_ndmpsession  1   View completed sessions    ibrix_ndmpsession  l  s   t YYYY MM DD    The  t option restricts the history to sessions occurring on or before the specified date   Cancel sessions on a specific file serving node    ibrix_ndmpsession  c SESSION1 SESSION2 SESSION3      h HOST    Starting  stopping  or restarting an NDMP Server    When a file serving node is booted  the NDMP Server is started automatically  If necessary  you  can use the following command to start  stop  or restart the NDMP Server on one or more file  serving nodes     ibrix_server  s  t ndmp  c   start   stop   restart    h SERVERNAMES     Using NDMP backup applications 79    Viewing or rescanning tape and media changer devices    NDMP    To view the tape and media changer devices currently configured for backups  select Cluster  Configuration from the Navigator  and then select NDMP Backup  gt  Tape Devices              Tape and Media Changer Devices  S S zeg i Hostname Device Type Device ID Device Node  Jg Email         Events mom  MediaChanger HP  VLS 0294MVyPQ00 idevisg12  H ge SNMP mm TapeDrive HP  Uitrium_3 SCSiO29AMVVPQ01 idevinstO   amp  Events mun  TapeDrive HP  Uttrium_3 SCSLO29AMVVPQ01
73. 24C4  primary 807F5C010 36B5072B poid  807F5C010 36B5072B   2012 03 13 11 57 35 0332894  lt INFO gt  1090169152 segment 3 chunk  inode    3099AC007 8E2125A1  poid 3099AC007 8E2125A1  primary 60A1D8024 42966361 poid  60A1D8024  42966361  2012 03 13 11 57 35 0332901  lt INFO gt  1090169152 segment 3 chunk  inode  3015A4031 C34A99FA  poid 3015A4031 C34A99FA  primary 40830415E 7793564B poid  40830415E 7793564B  2012 03 13 11 57 35 0332908  lt INFO gt  1090169152 segment 3 chunk  inode  3015A401B C34A97F8  poid 3015A401B C34A97F8  primary 4083040D9 77935458 poid  4083040D9 77935458  2012 03 13 11 57 35 0332915  lt INFO gt  1090169152 segment 3 chunk  inode  3015A4021 C34A994C  poid 3015A4021 C34A994C  primary 4083040FF 7793558E poid  4083040FF 7793558E                Use the inum2name utility to translate the primary inode ID into the file name     Removing a node from a cluster    In the following procedure  the cluster contains four nodes  FSN1  FSN2  FSN3  and FSN4  FSN4  is the node being removed  The user NIC for FSN4 is bondo   1  The file system name is ibfs1   which is mounted on  ibfs1 and shared as ibfs1 through NFS and SMB   FSN3 and FSN4 are  the failover pair  and bondo   2 is configured as the stand by interface     l     Stop High Availability    ibrix server  m  U   Verify that the Active Fusion Manager is on a server other than FSN4  Run the following  command from FSN4    ibrix_fm  i   If the Active Fusion Manager is on FSN4  move the Fusion Manager to nofmfailov
4 Boeblingen, Germany

Canadian notice (Avis Canadien) 197

Japanese notices

Japanese VCCI-A notice
[Japanese VCCI Class A statement; not reproducible from this extraction]

Japanese VCCI-B notice
[Japanese VCCI Class B statement; not reproducible from this extraction]

Japanese VCCI marking

Japanese power cord statement
[Japanese text] Please use the attached power cord. The attached power cord is not allowed to be used with other products.

Korean notices

Class A equipment
[Korean Class A statement; not reproducible from this extraction]

Class B equipment
[Korean Class B statement; not reproducible from this extraction]

198 Regulatory compliance notices

Taiwanese notices

BSMI Class A notice
[Traditional Chinese Class A statement; not reproducible from this extraction]

Taiwan battery recycle statement
[Traditional Chinese statement; not reproducible from this extraction]

Turkish recycling notice
Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur

Vietnamese Information Technology and Communications compliance marking

Laser compliance notices

English laser notice

This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation.

WARNING: Use of controls or adjustments or performance of
75. 5  identify a user network interface  123  monitor status  93  prefer a user network interface  124  start or stop processes  110  troubleshooting  140  tune  110  tune locally  114  user interface  33  view process status  110  StoreAll software  shut down  107  start  108  upgrade  10  154  StoreAll software 5 5 upgrade  167  StoreAll software 5 6 upgrade  163  Subscriber s Choice  HP  152  subtree_check  154  symbols  on equipment  193  system recovery  144  system startup  108    T  technical support   HP  151   service locator website  152    U  upgrade6  0 sh utility  159  upgrades  6 0 file systems  161  firmware  129  Linux StoreAll clients  18  158  pre 6 0 file systems  159  161  pre 6 3 Express Query  19  StoreAll 5 5 software  167  StoreAll software  10  154  StoreAll software 5 6 release  163  Windows StoreAll clients  19  159  user network interface  add  122  configuration rules  126  defined  122  identify for StoreAll clients  123  modify  123  prefer  123  unprefer  125    V  virtual interfaces  48  bonded  create  49  client access  50  configure standby servers  49  guidelines  48    213    W  warning  rack stability  152  warnings  loading rack  193  websites  HP  152  HP Subscriber s Choice for Business  152  product manuals  151  spare parts  152  Windows StoreAll clients  upgrade  19  159    214 Index    
76. 5 14 23 22 MDT 2012       Viewing a detailed health report    To view a detailed health report  use the ibrix health  i command   ibrix_ health  i  h HOSTLIST   f    s    v     The     option displays results only for hosts that failed the check  The  s option includes information  about the file system and its segments  The  v option includes details about checks that received  a Passed or Warning result     The following example shows a detailed health report for file serving node bv18   04      root bv18 04     ibrix health  i  h bv18 04  Overall Health Checker Results   PASSED    Host Result Type State Network Last Update  bv18 04 PASSED Server Up 10 10 18 4 Thu Oct 25 13 59 40 MDT 2012  Report    Overall Result    Result Type State Module Up time Last Update Network Thread Protocol    PASSED Server Up Loaded 1699630 0 Thu Oct 25 13 59 40 MDT 2012 10 10 18 4 64 true  CPU Information          Cpu  System  User  Util  Nice  Load 1 3 15 min  Network Bps  Disk  Bps   0  0  0  0 0 09  0 05  0 01 1295 1024  Memory Information    Mem Total Mem Free Buffers  KB  Cached  KB  Swap Total  KB  Swap Free  KB     8045992 4190584 243312 2858364 14352376 14352376  Version OS Information       Fs Version IAD Version OS OS Version Kernel Version Architecture Processor  6 3 72 6 3 72 GNU Linux Red Hat Enterprise Linux Server release 5 5  Tikanga  2 6 18 194   el5 x86 64 x86_64       Host Type Network Protocol Connection State  bv18 03 Server 10 10 18 3 true S_SET S READY S_SENDHB  bv18 04 S
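The Overall Health Checker Results line shown in the reports above lends itself to simple scripted checking. This is a hedged sketch, not part of the product: the canned sample string only mimics the shape of the guide's output, and on a real node you would capture the report with ibrix_health itself.

```shell
# Hedged sketch: extract the overall result from ibrix_health-style output.
# 'report' is a canned sample shaped like the guide's example; on a StoreAll
# node you would capture it with something like: report=$(ibrix_health -l)
report="Overall Health Checker Results - PASSED"
result=$(printf '%s\n' "$report" | grep -o 'PASSED\|FAILED' | head -n1)
echo "$result"
```

A wrapper like this is convenient in cron jobs: a non-PASSED result can trigger an alert before users notice a degraded node.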
77. 5 2 myMSA  OK ib69s5 P 7    3 OOcOffd7e    TierFourT 2 0 TB 2 myMSA  OK ib69s4 EI FT   4 OOcOffd7e    2 018 2 myMSA  OK ib69s5 al F   5 OOcOffd7e    TierOneSixer 1 6 TB 2 myMSA  OK ib69s4 al    EZE            The Summary dialog box lists the source and destination segments for the evacuation  Click  Back to make any changes  or click Finish to start the evacuation     The Active Tasks panel reports the status of the evacuation task  When the task is complete   it will be added to the Inactive Tasks panel     When the evacuation is complete  run the following command to retire the segment from the  file system     ibrix_fs  B  f FSNAME  n BADSEGNUMLIST    The segment number associated with the storage is not reused  The underlying LUN or volume  can be reused in another file system or physically removed from the storage solution when  this step is complete     If quotas were disabled on the file system  unmount the file system and then re enable quotas  using the following command     ibrix_ fs  q  E  f FSNAME    Then remount the file system     To evacuate a segment using the CLI  use the ibrix evacuate command  as described in the  HP StoreAll Storage CLI Reference Guide     Troubleshooting segment evacuation    If segment evacuation fails  HP recommends that you run phase 1 of the ibrix fsck  command in corrective mode on the segment that failed the evacuation  For more information   see    Checking and repairing file systems    in the HP StoreAll Storage File System User G
6.1. See "Upgrading the StoreAll software to the 5.5 release" (page 167).

If your system is currently at StoreAll software 5.5.x, upgrade to 5.6.x and then upgrade to 6.1. See "Upgrading the StoreAll software to the 5.6 release" (page 163).

IMPORTANT: If you are upgrading from a StoreAll 5.x release:

• Ensure that the NFS exports option subtree_check is the default export option for every NFS export. See "Common issue across all upgrades from StoreAll 5.x" (page 154) for more information.
• Any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Before upgrading to 6.1.4, download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.

154 Cascading Upgrades

NOTE:
• Verify that the root partition contains adequate free space for the upgrade. Approximately 4 GB is required.
• Be sure to enable password-less access among the cluster nodes before starting the upgrade.
• Do not change the active/passive Fusion Manager configuration during the upgrade.
• Linux StoreAll clients must be upgraded to the 6.x release.

Online upgrades for StoreAll software 6.x to 6.1

Online upgrades are supported only from the StoreAll 6.x release. Upgrades from earlier StoreAll releases must use the appropriate offline upgrade procedure.

When performing an online upgrade, note the following:

File systems remain mounted and client I/O conti
After the upgrade 158
Upgrading Linux StoreAll clients 158
Installing a minor kernel update on Linux clients 159
Upgrading Windows StoreAll clients 159
Upgrading pre-6.0 file systems for software snapshots 159
Upgrading pre-6.1.1 file systems for data retention features 161
Troubleshooting upgrade issues 161
Automatic upgrade 161
Manual upgrade 162
Offline upgrade fails because iLO firmware is out of date 162
Node is not registered with the cluster network 162
File system unmount issues 163
Upgrading the StoreAll software to the 5.6 release 163
Automatic upgrades 164
Manual upgrades 164
Preparing for the upgrade 164
Saving the node configuration 165
Performing the upgrade 165
After the upgrade 165
Completing the upgrade 165
Troubleshooting upgrade issues 166
Automatic upgrade 166
Manual upgrade 167
Upgrading the StoreAll software to the 5.5 release 167
... 167
... 168
Standard upgrade for clusters with a dedicated Management Server machine or blade 168
Standard online upgrade 169
Standard offline upgrade 170
Agile upgrade for clusters with
80. 8Ge 3Gb SAS Host Bus Adapter 4  empty 5  empty 6  HP SCO8Ge 3Gb SAS Host Bus Adapter 1  HP SCO8Ge 3Gb SAS Host Bus Adapter 2  SAS 10Gb ss    NC522SFP dual 10Gb NIC 4  empty 5  empty 6    182 Component diagrams for 9300 systems       C System component and cabling diagrams for 9320  systems    System component diagrams    Front view of 9300c array controller or 9300cx 3 5  12 drive enclosure                    d Es O  n       8 rO Fo FF  9  M O  H Q  M Dm                         18  17559  ltem Description  1 12 Disk drive bay numbers  13 Enclosure ID LED  14 Disk drive Online Activity LED  15 Disk drive Fault UID LED  16 Unit Identification  UID  LED  17 Fault ID LED  18 Heartbeat ID LED       System component diagrams 183    Rear view of 9300c array controller       ltem Description    1 Power supplies       Power switches       Host ports       CLI port       Network port       Service port  used by service personnel only        N   oO  oan  AJOJN    Expansion port  connects to drive enclosure           Rear view of 9300cx 3 5  12 drive enclosure                      Item Description   1 Power supplies   2 Power switches   3 SAS In port  connects to the controller enclosure   4 Service port  used by service personnel only    5 SAS Out port  connects to another drive enclosure     184 System component and cabling diagrams for 9320 systems    Front view of file serving node                                                                      Item Description      Quick releas
All software is now available, and you can now access your file systems.

Powering file serving nodes on or off

When file serving nodes are connected to properly configured power sources, the nodes can be powered on or off or can be reset remotely. To prevent interruption of service, set up standbys for the nodes (see "Configuring High Availability on the cluster" (page 54)), and then manually fail them over before powering them off (see "Failing a server over manually" (page 64)). Remotely powering off a file serving node does not trigger failover.

To power on, power off, or reset a file serving node, use the following command:

ibrix_server -P {on|reset|off} [-f] -h HOSTNAME

Performing a rolling reboot

The rolling reboot procedure allows you to reboot all file serving nodes in the cluster while the cluster remains online. Before beginning the procedure, ensure that each file serving node has a backup node and that StoreAll HA is enabled. See "Configuring virtual interfaces for client access" (page 48) and "Configuring High Availability on the cluster" (page 54) for more information about creating standby backup pairs, where each server in a pair is the standby for the other.

Use one of the following schemes for the reboot:

• Reboot the file serving nodes one at a time.
• Divide the file serving nodes into two groups, with the nodes in the first group having backups in the second group, and the nodes in the second group having backu
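The two-group rolling reboot scheme can be sketched as a dry-run plan. This is only an illustration of the ordering, not an HP procedure: the hostnames are hypothetical, and reboot-and-wait stands in for whatever mechanism you use to reboot a node and wait for it to rejoin the cluster.

```shell
# Hedged dry-run sketch of a rolling reboot: fail each node over, reboot it,
# then fail it back. Hostnames and the reboot-and-wait helper are hypothetical.
group1="fsn1 fsn2"   # assumed to have their backups in group2
group2="fsn3 fsn4"   # assumed to have their backups in group1
plan=""
for node in $group1 $group2; do
  plan="${plan}ibrix_server -f -h $node && reboot-and-wait $node && ibrix_server -f -U -h $node
"
done
printf '%s' "$plan"   # review the plan before acting on any node
```

Emitting the plan first lets you confirm that every node's backup partner is in the other group before any reboot starts.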
FS client and server activity.

The GUI displays most of these statistics on the dashboard. See "Using the StoreAll Management Console" (page 29) for more information.

To view the statistics from the CLI, use the following command:

ibrix_stats -l [-s] [-c] [-m] [-i] [-n] [-f] [-h HOSTLIST]

Use the options to view only certain statistics or to view statistics for specific file serving nodes:

-s  Summary statistics
-c  CPU statistics
-m  Memory statistics
-i  I/O statistics
-n  Network statistics
-f  NFS statistics
-h  The file serving nodes to be included in the report

Sample output follows:

----------Summary----------
HOST             Status  CPU  Disk(MB/s)  Net(MB/s)
lab12-10.hp.com  Up      0    22528       616

----------IO----------
HOST             Read(MB/s)  Read(IO/s)  Read(ms/op)  Write(MB/s)  Write(IO/s)  Write(ms/op)
lab12-10.hp.com  22528       2           5            0            0            0.00

----------Net----------
HOST             In(MB/s)  In(IO/s)  Out(MB/s)  Out(IO/s)
lab12-10.hp.com  261       3         355        2

----------Mem----------
HOST             MemTotal(MB)  MemFree(MB)  SwapTotal(MB)  SwapFree(MB)
lab12-10.hp.com  1034616       703672       2031608        2031360

----------CPU----------
HOST             User  System  Nice  Idle  IoWait  Irq  SoftIrq
lab12-10.hp.com  0     0       0     0     97      1    0

Monitoring cluster operations

HOST             Null  Getattr  Setattr  Lookup  Access  Readlink  Read  Write
lab12-10.hp.com  0     0        0        0       0       0         0     0

HOST             Create  Mkdir  Symlink  Mknod  Remove  Rmdir  Rename
lab12-10.hp.com  0       0      0        0      0       0      0
HOSTNAME.

If appropriate, include the -p option to power down the node before segments are migrated:

ibrix_server -f [-p] -h HOSTNAME

Check the Summary panel or run the following command to determine whether the failover was successful:

ibrix_server -l

The STATE field indicates the status of the failover. If the field persistently shows Down-InFailover or Up-InFailover, the failover did not complete; contact HP Support for assistance. For information about the values that can appear in the STATE field, see "What happens during a failover" (page 55).

Failing back a server

After an automated or manual failover of a server, you must manually fail back the server, which restores ownership of the failed-over segments and network interfaces to the server. Before failing back the server, confirm that it can see all of its storage resources and networks. The segments owned by the server will not be accessible if the server cannot see its storage.

• On the GUI, select the node on the Servers panel and then click Failback on the Summary panel.
• On the CLI, run the following command, where HOSTNAME is the failed-over node:

ibrix_server -f -U -h HOSTNAME

After failing back the node, check the Summary panel or run the ibrix_server -l command to determine whether the failback completed fully. If the failback is not complete, contact HP Sup
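The STATE-field check described above can be scripted. This is a hedged sketch only: the canned listing merely mimics the hostname/STATE shape of ibrix_server -l output, and the real output on a cluster has additional columns.

```shell
# Hedged sketch: flag nodes whose STATE still shows an InFailover value.
# 'listing' is a canned two-column sample (hostname, STATE); on a cluster you
# would capture the real output with something like: listing=$(ibrix_server -l)
listing="fsn1  Up
fsn2  Up-InFailover"
pending=$(printf '%s\n' "$listing" | awk '$2 ~ /InFailover/ {print $1}')
echo "${pending:-none pending}"
```

Any hostname this prints is a node whose failover or failback has not completed and, per the guide, warrants a call to HP Support if the state persists.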
84. HP Store All 9300 9320 Storage  Administrator Guide    Abstract    This guide describes tasks related to cluster configuration and monitoring  system upgrade and recovery  hardware component  replacement  and troubleshooting for the HP 9300 Storage Gateway and the HP 9320 Storage  It does not document StoreAll  file system features or standard Linux administrative tools and commands  For information about configuring and using StoreAll  software file system features  see the HP StoreAll Storage File System User Guide        This guide is intended for system administrators and technicians who are experienced with installing and administering networks   and with performing Linux operating and administrative tasks  For the latest StoreAll guides  browse to  http   www hp com support StoreAllManuals        HP Part Number  AW549 96068  Published  April 2013  Edition  12          Copyright 2010  2013 Hewlett Packard Development Company  L P   Confidential computer software  Valid license from HP required for possession  use or copying  Consistent with FAR 12 211 and 12 212  Commercial  Computer Software  Computer Software Documentation  and Technical Data for Commercial Items are licensed to the U S  Government under    vendor s standard commercial license     The information contained herein is subject to change without notice  The only warranties for HP products and services are set forth in the express  warranty statements accompanying such products and services  Nothing herein 
85. K 603718 B21 SGH107X60K 603718 B21    Servers    i  Storage  1 2  snmp       To configure Entitlements  select a device and click Modify to open the dialog box for that type of  device  The following example shows the Server Entitlement dialog box  The customer entered  serial number and product number are used for warranty checks at HP Support     Configuring HP Insight Remote Support on StoreAll systems 39    Server Entitlement    IP HostName  x9730 node1  Product Name  ProLiant BL460c G7  Serial Number  SGH107X60H  Product Number  603718 B21     Customer Entered Serial Number  SGH107X60H     Customer Entered Product Number  603713 B21        Required Value             Use the following commands to entitle devices from the CLI  The commands must be run for each  device present in the cluster     Entitle a server     ibrix_phonehome  e  h  lt Host Name gt   b  lt Customer Entered Serial Number gt    g  lt Customer Entered Product Number gt     Enter the Host Name parameter exactly as it is listed by the ibrix fm  1 command   Entitle storage  MSA      ibrix phonehome  e  i  lt Management IP Address of the Storage gt   b  lt Customer  Entered Serial Number gt   g  lt Customer Entered Product Number gt     Device discovery    HP Systems Insight Manager  SIM  uses the SNMP protocol to discover and identify StoreAll systems  automatically  On HP SIM  open Options  gt  Discovery  gt  New  Select Discover a group of systems   and then enter the discovery name and the Fusion Manager
86. Link Readdir Readdirplus Fsstat Fsinfo Pathconf Commit    0    D    0    0 0 0 0    Viewing operating statistics for file serving nodes    99       10 Using the Statistics tool    The Statistics tool reports historical performance data for the cluster or for an individual file serving  node  You can view data for the network  the operating system  and the file systems  including the  data for NFS  memory  and block devices  Statistical data is transmitted from each file serving  node to the Fusion Manager  which controls processing and report generation     Installing and configuring the Statistics tool  The Statistics tool has two main processes     e Manager process  This process runs on the active Fusion Manager  It collects and aggregates  cluster wide statistics from file serving nodes running the Agent process  and also collects local  statistics  The Manager generates reports based on the aggregated statistics and collects  reports from all file serving nodes  The Manager also controls starting and stopping the Agent  process     e Agent process  This process runs on the file serving nodes  It collects and aggregates statistics  on the local system and generates reports from those statistics           IMPORTANT  The Statistics tool uses remote file copy  rsync  to move statistics data from the  file serving nodes to the Fusion Manager for processing  report generation  and display  SSH keys  are configured automatically across all the file serving nodes to the active F
ME -h HOSTLIST

If you are identifying a VIF, add the VIF suffix (:nnnn) to the physical interface name. For example, the following command identifies virtual interface eth1:1 to physical network interface eth1 on file serving nodes s1.hp.com and s2.hp.com:

ibrix_nic -a -n eth1:1 -h s1.hp.com,s2.hp.com

When you identify a user network interface for a file serving node, the Fusion Manager queries the node for its IP address, netmask, and MAC address and imports the values into the configuration database. You can modify these values later if necessary.

If you identify a VIF, the Fusion Manager does not automatically query the node. If the VIF will be used only as a standby network interface in an automated failover setup, the Fusion Manager will query the node the first time a network is failed over to the VIF. Otherwise, you must enter the VIF's IP address and netmask manually in the configuration database (see "Setting network interface options in the configuration database" (page 123)). The Fusion Manager does not require a MAC address for a VIF.

If you created a user network interface for StoreAll client traffic, you will need to prefer the network for the StoreAll clients that will use the network (see "Preferring network interfaces" (page 123)).

Setting network interface options in the configuration database

To make a VIF usable, execute the following command to specify the IP address and netmask for the VIF. You can also use this
MP types on all networks; however, you can limit ICMP to types 0, 3, 8, and 11 if necessary.

Be sure to open the ports listed in the following table:

Port                          Description
22/tcp                        SSH
123/tcp, 123/udp              NTP
5353/udp                      Multicast DNS, 224.0.0.251
12865/tcp                     netperf tool
80/tcp, 443/tcp               Fusion Manager to file serving nodes
5432/tcp, 8008/tcp, 9002/tcp, Fusion Manager and StoreAll file system
9005/tcp, 9008/tcp, 9009/tcp,
9200/tcp
2049/tcp, 2049/udp            Between file serving nodes and NFS clients (user network): NFS
111/tcp, 111/udp              RPC
875/tcp, 875/udp              quota
32803/tcp                     lock manager
32769/udp                     lock manager
892/tcp, 892/udp              mount daemon
662/tcp, 662/udp              stat
2020/tcp, 2020/udp            stat outgoing
4000-4003/tcp                 Reserved for use by a custom application (CMU); can be disabled if not used
137/udp, 138/udp,             Between file serving nodes and SMB clients (user network)
139/tcp, 445/tcp
9000-9002/tcp, 9000-9200/udp  Between file serving nodes and StoreAll clients (user network)

34 Getting started

Port                          Description
20/tcp, 20/udp, 21/tcp,       Between file serving nodes and FTP clients (user network)
21/udp
7777/tcp, 8080/tcp            Between GUI and clients that need to access the GUI
5555/tcp, 5555/udp            Data Protector
631/tcp, 631/udp              Internet Printing Protocol (IPP)
1344/tcp, 1344/udp            ICAP

Configuring NTP servers

When the cluster is initially set up, primary and secondary NTP servers are co
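Opening the management ports on a host firewall can be sketched as a small loop. This is a hedged, dry-run illustration only: the port list below covers just the Fusion Manager TCP ports from the table above (not the full table), and the emitted iptables commands are printed for review rather than executed.

```shell
# Hedged sketch: emit iptables ACCEPT rules for the Fusion Manager TCP ports
# listed in the table above. Dry run: rules are printed, not applied, and the
# list is deliberately not exhaustive (NFS/SMB/FTP client ports are omitted).
mgmt_tcp_ports="22 80 443 5432 8008 9002 9005 9008 9009 9200"
rules=""
for p in $mgmt_tcp_ports; do
  rules="${rules}iptables -A INPUT -p tcp --dport $p -j ACCEPT
"
done
printf '%s' "$rules"
```

Reviewing the generated rules before piping them to a shell keeps a typo in the port list from silently blocking cluster traffic.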
OD       Just a bunch of disks.
KVM      Keyboard, video, and mouse.
LUN      Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN.
MTU      Maximum Transmission Unit.
NAS      Network attached storage.
NFS      Network file system. The protocol used in most UNIX environments to share folders or mounts.
NIC      Network interface card. A device that handles communication between a device and other devices on a network.
NTP      Network Time Protocol. A protocol that enables the storage system's time and date to be obtained from a network-attached server, keeping multiple hosts and storage devices synchronized.
OA       Onboard Administrator.
OFED     OpenFabrics Enterprise Distribution.
OSD      On-screen display.
OU       Active Directory Organizational Units.
RO       Read-only access.
RPC      Remote Procedure Call.
RW       Read-write access.
SAN      Storage area network. A network of storage devices available to one or more servers.
SAS      Serial Attached SCSI.

209

SELinux  Security-Enhanced Linux.
SFU      Microsoft Services for UNIX.
SID      Secondary controller identifier number.
SNMP     Simple Network Management Protocol.
TCP/IP   Transmission Control Protocol/Internet Protocol.
UDP      User Datagram Protocol.
UID      Unit identification.
VACM     SNMP View Access Control Model.
VC       HP Virtual Connect.
VIF      Virtual interface.
WINS     Windows Interne
Practices Guide for additional information.

Verify that all file system nodes can "see" and "access" every segment logical volume that the file system node is configured for as either the owner or the backup by entering the following commands:
1. To view all segments, logical volume names, and owners, enter the following command on one line:
   ibrix_fs -i | egrep -e OWNER -e MIXED | awk '{print $1,$3,$6,$2,$14,$5}'
2. To verify the visibility of the correct segments on the current file system node, enter the following command on each file system node:
   lvm lvs | awk '{print $1}'

Ensure that no active tasks are running. Stop any active remote replication, data tiering, or rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again.

Table 1 Prerequisites checklist for all upgrades (continued)

Step   Step Description                                                    Step completed
       For additional information on how to stop a task, enter the
       ibrix_task command for help.
10     Record all host tunings, FS tunings, and FS mounting options by
       using the following commands:
       1. To display file system tunings, enter:
          ibrix_fs_tune -l > /local/ibrix_fs_tune_l.txt
       2. To display default StoreAll tunings and settings, enter:
          ibrix_host_tune -L > /local/ibrix_host_tune_L.txt
       3. To display all non-default configuration tunings and settings, enter:
          ibrix
(Screenshot: the New Discovery Task dialog box, showing the task name (for example, x9320-node1), the discovery schedule (automatically execute discovery every 1 day), and the ping inclusion ranges (for example, 10.2.4.76).)

Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the device will be discovered as "Unknown".

(Screenshot: the SNMP Credentials tab for the discovery task, showing the read community string.)

The following example shows discovered devices on HP SIM 6.3. File serving nodes are discovered as ProLiant servers.

(Screenshot: the HP SIM system list, showing the discovered ProLiant DL380 servers and their Integrated Lights-Out management processors.)

Configuring device Entitlements

Configure the CMS software to enable remote support for StoreAll systems. For more information, see "Using the Remote Support Setting Tab to Update Your Client and CMS Information" and "Adding Individual Ma
S BACKUP_HOST BACKUP_IF ROUTE VLAN_TAG LINKMON
ib50-81 bond0   Cluster Up,LinkUp 172.16.0.81  00:00:00:00:11                   172.16.0.254  No
ib50-81 bond0:1 User    Up,LinkUp 172.16.0.181 00:00:00:00:11 ib50-82 bond0:2                No
ib50-81 bond0:2 User                           00:00:00:00:11                                No
ib50-82 bond0   Cluster Up,LinkUp 172.16.0.82  00:00:00:00:12                   172.16.0.254  No
ib50-82 bond0:1 User    Up,LinkUp 172.16.0.182 00:00:00:00:12 ib50-81 bond0:2                No
ib50-82 bond0:2 User                           00:00:00:00:12
ib50-81 [Active FM] bond0:0 Cluster Up,LinkUp (ActiveFM) 172.16.0.281                        No

Specifying VIFs in the client configuration

When you configure your clients, you may need to specify the VIF that should be used for client access.

NFS/SMB: Specify the VIF IP address of the servers (for example, bond0:1) to establish the connection. You can also configure DNS round robin to ensure NFS or SMB client-to-server distribution. In both cases, the NFS/SMB clients will cache the initial IP they used to connect to the respective share, usually until the next reboot.

FTP: When you add an FTP share on the Add FTP Shares dialog box or with the ibrix_ftpshare command, specify the VIF as the IP address that clients should use to access the share.

HTTP: When you create a virtual host on the Create Vhost dialog box or with the ibrix_httpvhost command, specify the VIF as the IP address that clients should use to access shares associated with the Vhost.

StoreAll clients: Use the following command to prefer the appropriate user network. Execut
See /usr/local/ibrix/log/statstool/stats.log for detailed logging for the Statistics tool. The information includes detailed exceptions and traceback messages. The logs are rolled over at midnight every day, and only seven days of statistics logs are retained.

The default /var/log/messages log file also includes logging for the Statistics tool, but the messages are short.

Uninstalling the Statistics tool

The Statistics tool is uninstalled when the StoreAll software is uninstalled.

To uninstall the Statistics tool manually, use one of the following commands:

• Uninstall the Statistics tool, including the Statistics tool and dependency rpms:
  ibrixinit -tt -u
• Uninstall the Statistics tool, retaining the Statistics tool and dependency rpms:
  ibrixinit -tt -U

106 Using the Statistics tool

11 Maintaining the system

Shutting down the system

To shut down the system completely, first shut down the StoreAll software, and then power off the system hardware.

Shutting down the StoreAll software

Use the following procedure to shut down the StoreAll software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager.

1. Stop any active remote replication, data tiering, or rebalancer tasks. Run the following command to list active tasks and note their task IDs:
   ibrix_task -l
   Run the following command to stop each active task, specifying its task ID:
   ibrix_task -k -n TASKID
   Disable High Availabi
Statistics tool

Upgrading the Statistics tool from StoreAll software 6.0

The statistics history is retained when you upgrade to version 6.1 or later.

The Statstool software is upgraded when the StoreAll software is upgraded using the ibrixupgrade and auto_ibrixupgrade scripts.

Note the following:

• If statistics processes were running before the upgrade started, those processes will automatically restart after the upgrade completes successfully. If processes were not running before the upgrade started, you must start them manually after the upgrade completes.
• If the Statistics tool was not previously installed, the StoreAll software upgrade installs the tool, but the Statistics processes are not started. For information about starting the processes, see "Controlling Statistics tool processes" (page 105).
• Configurable parameters (such as age.retain.files.24h) set in the /etc/ibrix/stats.conf file before the upgrade are not retained after the upgrade.
• After the upgrade, historical data and reports are moved from the /var/lib/ibrix/histstats folder to the /local/statstool/histstats folder.
• The upgrade retains the Statistics tool database but not the reports. You can regenerate reports for the data stored before the upgrade by specifying the date range. See "Generating reports" (page 102).

Using the Historical Reports GUI

You can use the GUI to view or generate reports for the entire cluster or for a specific file serving
Windows StoreAll client GUI

The Windows StoreAll client GUI is the client interface to the Fusion Manager. To open the GUI, double-click the desktop icon or select the StoreAll client program from the Start menu on the client. The client program contains tabs organized by function.

NOTE: The Windows StoreAll client GUI can be started only by users with Administrative privileges.

• Status: Shows the client's Fusion Manager registration status and mounted file systems, and provides access to the IAD log for troubleshooting.
• Registration: Registers the client with the Fusion Manager, as described in the HP StoreAll Storage Installation Guide.
• Mount: Mounts a file system. Select the Cluster Name from the list (the cluster name is the Fusion Manager name), enter the name of the file system to mount, select a drive, and then click Mount. (If you are using Remote Desktop to access the client and the drive letter does not appear, log out and log in again.)
• Umount: Unmounts a file system.
• Tune Host: Tunable parameters include the NIC to prefer (the client uses the cluster interface by default unless a different network interface is preferred for it), the communications protocol (UDP or TCP), and the number of server threads to use.
• Active Directory Settings: Displays current Active Directory settings.

For more information, see the client GUI online help.

StoreAll software manpages

StoreAll software provides manpages for
about segment evacuation, events, the Statistics tool, software upgrades, and HP Insight Remote Support.

10  December 2012  6.2  Added or revised information about High Availability, failover, server tuning, segment migration and evacuation, and SNMP; added an upgrade checklist for common upgrade tasks.

11  March 2013  6.3  Updated information on upgrades, remote support, collection logs, Phone Home, and troubleshooting. Now points users to the website for the latest spare parts list instead of shipping the list. Added before and after upgrade steps for Express Query when going from 6.2 to 6.3.

12  April 2013  6.3  Removed the post-upgrade step that tells users to modify the /etc/hosts file on every StoreAll node. In the "Cascading Upgrades" appendix, added a section that tells users to ensure that the NFS exports option subtree_check is the default export option for every NFS export when upgrading from a StoreAll 5.x release. Also changed ibrix_fm -m nofmfailover -A to ibrix_fm -m maintenance -A in the "Cascading Upgrades" appendix. Updated information about SMB share creation.

Contents

1 Upgrading the StoreAll software to the 6.3 release......................................10
  Online upgrades for StoreAll software 6.x...............................................12
    Preparing for the upgrade...............................................................12
    Performing the upgrade....................................................................
    After the upgrade.........................................................................13
  Automated offline upgrades for StoreAll software 6.x....................................14
    Preparing for the upgrade...............................................................14
    Performing the upgrade..................................................................14
    After the upgrade.........................................................................15
access the Management Console, navigate to the following location, specifying port 443:

https://<management_console_IP>:443/fusion

In these URLs, <management_console_IP> is the IP address of the Fusion Manager user VIF.

The Management Console prompts for your user name and password. The default administrative user is ibrix. Enter the password that was assigned to this user when the system was installed. (You can change the password using the Linux passwd command.) To allow other users to access the Management Console, see "Adding user accounts for Management Console access" (page 32).

(Screenshot: the StoreAll Management Console login page.)

Upon login, the Management Console dashboard opens, allowing you to monitor the entire cluster. See the online help for information about all Management Console displays and operations.

There are three parts to the dashboard: System Status, Cluster Overview, and the Navigator.

Management interfaces 29

(Screenshot: the Management Console dashboard, showing Capacity, Statistics (Network and Disk I/O in MB/s), Event Status (24 hours), the Navigator, the Filesystems summary, and the CPU Usage and Memory Usage charts.)
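The console URL described above has a fixed shape. As a quick sanity check from an administrative workstation, the following sketch builds the URL from a management-console IP and optionally probes it; the helper name fusion_url and the curl probe are illustrations, not part of the HP procedure, and -k is shown only because such appliances commonly present self-signed certificates.

```shell
#!/bin/sh
# fusion_url IP - print the Management Console URL for the given
# management-console IP address (string formatting only).
fusion_url() {
    printf 'https://%s:443/fusion\n' "$1"
}

fusion_url 10.2.4.76
# prints: https://10.2.4.76:443/fusion

# Optional reachability probe (requires a live cluster):
# curl -k -s -o /dev/null -w '%{http_code}\n' "$(fusion_url 10.2.4.76)"
```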
act HP Support.

IBRIX Filesystem Drivers loaded
ibrcud is running.. pid 23325
IBRIX IAD Server (pid 23368) running...

5. Execute the following commands to verify that the ibrix and ipfs services are running:
   lsmod | grep ibrix

Upgrading the StoreAll software to the 5.5 release 171

   ibrix 2323332 0 (unused)
   lsmod | grep ipfs
   ipfs1 102592 0 (unused)
   If either grep command returns empty, contact HP Support.
6. From the management console, verify that the new version of StoreAll software FS/IAS has been installed on the file serving nodes:
   <ibrixhome>/bin/ibrix_version -l -S

Completing the upgrade

1. Remount all file systems:
   <ibrixhome>/bin/ibrix_mount -f <fsname> -m <mountpoint>
2. From the management console, turn automated failover back on:
   <ibrixhome>/bin/ibrix_server -m
3. Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, HA displays on.
4. From the management console, perform a manual backup of the upgraded configuration:
   <ibrixhome>/bin/ibrix_fm -B
5. Verify that all version indicators match for file serving nodes. Run the following command from the management console:
   <ibrixhome>/bin/ibrix_version -l
   If there is a version mismatch, run the ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the ver
act HP Support to register for the release and obtain access to the software dropbox.

Use a DVD:
1. Burn the ISO image to a DVD.
2. Insert the DVD in the server.
3. Restart the server to boot from the DVD.
4. When the HP Network Storage System screen appears, enter qr to install the software.

Use a USB key:
1. Copy the ISO to a Linux system.
2. Insert a USB key into the Linux system.
3. Execute cat /proc/partitions to find the USB device partition, which is displayed as /dev/sdX. For example:
   # cat /proc/partitions
   major minor  #blocks  name
      8   128  15633408 sdi
4. Execute the following dd command to make the USB key the QR installer:
   dd if=<ISO file name with path> of=/dev/sdi oflag=direct bs=1M
   For example:
   dd if=X9000-QRDVD-6.3.72-1.x86_64-signed.iso of=/dev/sdi oflag=direct bs=1M
   4491+0 records in
   4491+0 records out
   4709154816 bytes (4.7 GB) copied, 957.784 seconds, 4.9 MB/s
5. Insert the USB key into the server.
6. Boot the server from the USB key. (Press ET and use option 3.)
7. When the Network Storage System screen appears, enter qr to install the software.

Performing the recovery

Complete these steps:
1. Log into the node.
2. On the Individual Server Setup dialog box, enter your node-specific information and click OK.

144 Recovering a file serving node

(Screenshot: the Individual Server Setup dialog box, with fields for the server Hostname, System Date (DD/MM/YYYY), System Time (HH:MM), and Time Zone.)
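After a dd write like the one shown above, it can be worth confirming that the image actually reached the device before rebooting the server from it. The following helper is not part of the HP procedure; it is a generic sketch that compares the ISO against the same number of bytes read back from the target (the function name verify_image and its output strings are assumptions).

```shell
#!/bin/sh
# verify_image ISO DEVICE - compare the ISO against the first N bytes of
# DEVICE, where N is the ISO size. Prints "verified" on a match and
# "mismatch" otherwise.
verify_image() {
    iso=$1 dev=$2
    bytes=$(wc -c < "$iso")
    a=$(md5sum < "$iso" | awk '{print $1}')
    b=$(head -c "$bytes" "$dev" | md5sum | awk '{print $1}')
    [ "$a" = "$b" ] && echo verified || echo mismatch
}

# Usage against the example in the text (requires root and the real device):
# verify_image X9000-QRDVD-6.3.72-1.x86_64-signed.iso /dev/sdi
```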
agile configuration, on all nodes hosting the passive management console, return the management console to passive mode:

<ibrixhome>/bin/ibrix_fm -m passive

If you received a new license from HP, install it as described in the "Licensing" chapter in this document.

Troubleshooting upgrade issues

If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.

Automatic upgrade

Check the following:

• If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the StoreAll software before you execute the upgrade script.
• If the install of the new OS fails, power cycle the node and try rebooting. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the grub boot menu.
• After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.
• If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information.
• To retry the copy of configuration, use the command appropriate for your server:
  ◦ A file serving node:
    /usr/local/ibrix/autocfg/bin/ibrixapp upgrade -s
  ◦ An agile node (a file serving node hosting
airs at the component level or to make modifications to any printed wiring board. Improper repairs can create a safety hazard.

WARNING: To reduce the risk of personal injury or damage to the equipment, the installation of non-hot-pluggable components should be performed by individuals who are qualified in servicing computer equipment, knowledgeable about the procedures and precautions, and trained to deal with products capable of producing hazardous energy levels.

WARNING: To reduce the risk of personal injury or damage to the equipment, observe local occupational health and safety requirements and guidelines for manually handling material.

194 Warnings and precautions

CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation during a power failure.

CAUTION: To properly ventilate the system, you must provide at least 7.6 centimeters (3.0 inches) of clearance at the front and back of the device.

CAUTION: Schedule physical configuration changes during periods of low or no activity. If the system is performing rebuilds, RAID migrations, array expansions, LUN expansions, or experiencing heavy I/O, avoid physical configuration changes such as adding or replacing hard drives or hot-plugging a controller or any other component. For example
al Intervention Failure (MIF):

1. Check the health of the file system as described in "Monitoring cluster operations" (page 84), and clear any pending issues related to the file system <FSNAME>.
2. Clear the Express Query MIF state by entering the following command:
   ibrix_archiving -C <FSNAME>
3. Monitor the Express Query recovery by entering the following command:
   ibrix_archiving -l
   While Express Query is recovering from the MIF, it displays the RECOVERY state. Wait for the state to return to OK or MIF.

If the state returns as OK, no additional steps are required. Express Query is updating the database with all the outstanding logged file system changes since the MIF occurrence.

If you have a MIF condition for one or several file systems and the cluster and file system health checks are not OK, redo the previous steps.

If the cluster and file system health checks have an OK status but Express Query is still in a MIF condition for one or several specific file systems, some data has been corrupted and cannot be recovered. This situation is unlikely.

To solve this situation:
a. If there is a full backup of the file system involved, do a restore.
b. If there is no full backup:
   1. Disable Express Query for the file system by entering the following command:
      ibrix_fs -T -D -f <FSNAME>
   2. Delete the current database for the file system by entering the following command:
      rm -rf <FS_MOUNTPOINT>/archiving
ally appears depending on the situation.

Obtain detailed information for hardware components in the server by clicking the nodes under the Server node.

(Screenshot: the Hardware tree in the Navigator, with nodes for Blade Enclosure, CPU, iLO Module, Memory DIMM, NIC, Power Management Controller, Storage Cluster, Drive, Storage Controller, IO Cache Module, Volume, Battery, and Temperature sensor.)

Monitoring 9300/9320 hardware 89

Table 2 Obtaining detailed information about a server

Panel name                     Information provided
CPU                            Status, Type, Name, UUID, Model, Location
ILO Module                     Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Properties
Memory DIMM                    Status, Type, Name, UUID, Location, Properties
NIC                            Status, Type, Name, UUID, Properties
Power Management Controller    Status, Type, Name, UUID, Firmware Version
Storage Cluster                Status, Type, Name, UUID

90 Monitoring cluster operations

Table 2 Obtaining detailed information about a server (continued)

Panel name                     Information provided
Drive (displays information    Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Location,
about each drive in a          Properties
storage cluster)
Storage Controller             Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Location,
(displayed for a server)       Message (diagnostic message)
an add-on script" (page 138).

Running an add-on script

To run an add-on script:

1. Verify that the add-on script is saved under the following location:
   /usr/local/ibrix/ibrixcollect/ibrix_collect_add_on_scripts
   The ibrix_collect command only runs add-on scripts saved in this location.
2. Enter the ibrix_collect command:
   ibrix_collect -c -n addOnCollection
   In this instance, addOnCollection is the collection name.

The output of the add-on scripts is included in the final tar collection along with the other logs and command outputs. In this instance, the output would be in the addOnCollection.tgz file.

NOTE: The add-on scripts time out after 20 minutes.

Viewing the output from an add-on script

To view the output from an add-on script:

1. Go to the /local/ibrixcollect/archive directory on the node hosting the active Fusion Manager by entering the following command:
   [root@host2 ~]# cd /local/ibrixcollect/archive
2. The output of the add-on scripts is available under the tar file of the individual node. To view the contents of the directory, enter the following command:
   [root@host2 archive]# ls -l
   The following is an example of the output displayed:
   total 3520
   -rw-r--r-- 1 root root 2021895 Dec 20 12:41 addOnCollection.tgz

138 Troubleshooting

3. Extract the tar file containing the output of the add-on script; the tar file has the name of the collection. Enter the following command:
   [root
an agile management console configuration.....................................172
  ............................................................................172
  ............................................................................176
  Troubleshooting upgrade issues..............................................178

Contents 7

B Component diagrams for 9300 systems.........................................180
  Front view of file serving nodes............................................180
  Rear view of file serving nodes.............................................180
C System component and cabling diagrams for 9320 systems......................183
  System component diagrams...................................................183
    Front view of 9300c array controller or 9300cx 3.5" 12-drive enclosure...183
    Rear view of 9300c array controller.......................................184
    Rear view of 9300cx 3.5" 12-drive enclosure...............................184
    Front view of file serving nodes..........................................185
    Rear view of file serving nodes...........................................185
  Cabling diagrams............................................................188
    Cluster network cabling...................................................188
    SATA option cabling.......................................................189
    ..........................................................................190
    Drive enclosure cabling...................................................191
D Warnings and precautions....................................................192
  Electrostatic discharge information.........................................192
    Preventing electrostatic discharge........................................192
    Grounding methods.........................................................192
  ............................................................................193
  Rack warnings and precautions...............................................193
  Device warnings and precautions.............................................194
E Regulatory compliance notices..............................................
(Screenshot: the user NICs panel, showing server ib69s1 with user NIC bond0:1 at 10.30.69.151 and standby server ib69s2. No active physical or virtual user NICs found; add one via the Add NIC button.)

Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. On the NIC HA Config dialog box, check Enable NIC Monitoring.

Configuring failover

(Screenshot: the NIC HA Config dialog box, showing Enable NIC Monitoring checked, with Server ib69s1, User NIC bond0:1 (10.30.69.151), Standby Server ib69s2, and the Standby NIC field.)

In the Standby NIC field, select New Standby NIC to create the standby on backup server ib69s2. The standby you specify must be available and valid. To keep the organization simple, we specified bond0:1 as the Name; this matches the name assigned to the NIC on server ib69s1. When you click OK, the NIC HA configuration is complete.

Configuring High Availability on the cluster 59

(Screenshot: the New Standby NIC dialog box for server ib69s2. It instructs: "Enter a NIC name of an existing physical interface (e.g., eth4 or bond1) to configure an active physical network. To create a virtual interface (VIF), enter a NIC name (e.g., bond1:1) based on an existing active physical network." Fields: Name, IP Address (No IP/Inactive/Standby), Net Mask, Route, and MTU.)

You can create additional user VIFs and assign standby NICs as needed. For example, you might want to add a user VIF for another share on serve
arning: A suboptimal condition that might require your attention was found on one or more tested hosts or standby servers.

The detailed report consists of the summary report and the following additional data:
• Summary of the test results
• Host information such as operational state, performance data, and version data
• Nondefault host tunings
• Results of the health checks

By default, the Result Information field in a detailed report provides data only for health checks that received a Failed or a Warning result. Optionally, you can expand a detailed report to provide data about checks that received a Passed result, as well as details about the file system and segments.

Viewing a summary health report

To view a summary health report, use the ibrix_health -l command:

ibrix_health -l [-h HOSTLIST] [-f] [-b]

By default, the command reports on all hosts. To view specific hosts, include the -h HOSTLIST argument. To view results only for hosts that failed the check, include the -f argument. To include standby servers in the health check, include the -b argument.

The following is an example of the output from the ibrix_health -l command:

[root@bv18-03 ~]# ibrix_health -l
Overall Health Checker Results - PASSED

Host Summary Results

96 Monitoring cluster operations

Host     Result  Type    State  Network     Last Update
bv18-03  PASSED  Server  Up     10.10.18.3  Thu Oct 25 14:23:12 MDT 2012
bv18-04  PASSED  Server  Up     10.10.18.4  Thu Oct 2
ary. Observe the current server status in the grid below. (Note that this is a snapshot of server performance; you can get more detailed historical data using the Statistics tool.) Based on the server data, you may choose to change segment ownership to another server that can "see" the same storage segment.

(Screenshot: the Server Status grid, where ib69s4 is Up with 30% CPU, 0.01 MB/s network I/O, and 0.00 MB/s disk I/O, and ib69s5 is Up with 1% CPU and no I/O. Below it, the Segment Properties grid lists segments of file system myFS1 with their LUN UUIDs (00c0ffd7e...), tiers (TierOneSixer, TierFourT), sizes (1.6 TB, 2.0 TB), storage (myMSA), state (OK), and owners (ib69s4, ib69s5).)

The new owner of the segment must be able to see the same storage as the original owner. The Change Segment Owner dialog box lists the servers that can see the segment you selected. Select one of these servers to be the new owner.

116 Maintaining the system

(Screenshot: the Change Segment Owner dialog box, showing the Current Owner, Filesystem, and Segment, and a New Owner selection list restricted to servers that can see the same storage segment.)

The Summary dialog box shows the segment migration you specified. Click Back to make any changes, or click Finish to complete the operation.
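When comparing candidate owners as in the server-status grid above, the same decision can be scripted against any whitespace-separated status table. This is a generic illustration only: the column layout (SERVER STATE CPU NET_IO DISK_IO), the helper name least_loaded, and the sample data are assumptions, not documented StoreAll output.

```shell
#!/bin/sh
# least_loaded - read a status table on stdin and print the name of the
# "Up" server with the lowest CPU value (column 3 in this assumed layout).
least_loaded() {
    awk '$2 == "Up" && (!seen || $3 + 0 < min) { min = $3 + 0; best = $1; seen = 1 }
         END { if (seen) print best }'
}

printf '%s\n' 'ib69s4 Up 30 0.01 0.00' 'ib69s5 Up 1 0.00 0.00' | least_loaded
# prints: ib69s5
```

Servers that are not Up are skipped entirely, mirroring the rule that the new owner must be a healthy server that can see the same storage.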
ary and execute the following command:

ibrixupgrade -f

5. Verify that the management console started successfully:
   /etc/init.d/ibrix_fusionmanager status
   The status command confirms whether the correct services are running. Output is similar to the following:
   Fusion Manager Daemon (pid 18748) running...
6. Check /usr/local/ibrix/log/fusionserver.log for errors.

Upgrading the file serving nodes

After the management console has been upgraded, complete the following steps on each file serving node:

1. Move the <installer dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.
2. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
3. Change to the installer directory if necessary and execute the following command:
   ibrixupgrade -f
   The upgrade automatically stops services and restarts them when the process completes.
4. When the upgrade is complete, verify that the StoreAll software services are running on the node:
   /etc/init.d/ibrix_server status
   The output should be similar to the following example. If the IAD service is not running on your system, cont
as NICs.

Monitoring 9300/9320 hardware 85

(Screenshot: the Navigator tree, with entries for Dashboard, Cluster Configuration, Filesystems, Snapshots, NFS, CIFS, FTP, HTTP, Certificates, Storage, Vendor Storage, Clients, Hostgroups, Events, Mountpoints, Power, Hardware, and Servers.)

The following are the top-level options provided for the server.

NOTE: Information about the Hardware node can be found in "Monitoring hardware components" (page 88).

• HBAs. The HBAs panel displays the following information:
  ◦ Node WWN
  ◦ Port WWN
  ◦ Backup

86 Monitoring cluster operations

  ◦ Monitoring
  ◦ State
• NICs. The NICs panel shows all NICs on the server, including offline NICs. The NICs panel displays the following information:
  ◦ Name
  ◦ IP
  ◦ Type
  ◦ State
  ◦ Route
  ◦ Standby Server
  ◦ Standby Interface
• Mountpoints. The Mountpoints panel displays the following information:
  ◦ Mountpoint
  ◦ Filesystem
  ◦ Access
• NFS. The NFS panel displays the following information:
  ◦ Host
  ◦ Path
  ◦ Options
• CIFS. The CIFS panel displays the following information:
  NOTE: CIFS in the GUI has not been rebranded to SMB yet; CIFS is just a different name for SMB.
  ◦ Name
  ◦ Value
• Power. The Power panel displays the following information:
  ◦ Host
  ◦ Name
  ◦ Type
  ◦ IP Address
  ◦ Slot ID

Monitoring 9300/9320 hardware 87

• Events. The Events panel display
111. assed on.

Agile offline upgrade

This upgrade procedure is appropriate for major upgrades. Perform the agile offline upgrade in the following order:

• File serving node hosting the active management console
• File serving node hosting the passive management console
• Remaining file serving nodes

NOTE: To determine which node is hosting the active management console, run the following command:

<ibrixhome>/bin/ibrix_fm -i

Preparing for the upgrade

1. On the active management console node, disable automated failover on all file serving nodes:

<ibrixhome>/bin/ibrix_server -m -U

2. Verify that automated failover is off. In the output, the HA column should display off:

<ibrixhome>/bin/ibrix_server -l

3. On the active management console node, stop the NFS and SMB services on all file serving nodes to prevent NFS and SMB clients from timing out:

<ibrixhome>/bin/ibrix_server -s -t cifs -c stop
<ibrixhome>/bin/ibrix_server -s -t nfs -c stop

Verify that all likewise services are down on all file serving nodes:

ps -ef | grep likewise

Use kill -9 to kill any likewise services that are still running.

4. Unmount all StoreAll file systems:

<ibrixhome>/bin/ibrix_umount -f <fsname>

Upgrading the file serving nodes hosting the management console

Complete the following steps:

176 Cascading Upgrades

On the node hosting the active management console, force a backup of th
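Taken together, the preparation steps can be sketched as the following dry-run sequence (commands are echoed, not executed; the <ibrixhome> path and file system name are placeholders you must substitute for your cluster):

```shell
#!/bin/sh
# Dry-run sketch of quiescing the cluster before an agile offline upgrade.
run() { echo "+ $*"; }

IBRIXHOME=/usr/local/ibrix   # assumed value of <ibrixhome>
FSNAME=ifs1                  # hypothetical file system name

run "$IBRIXHOME/bin/ibrix_server" -m -U            # 1. disable automated failover
run "$IBRIXHOME/bin/ibrix_server" -l               # 2. confirm the HA column shows off
run "$IBRIXHOME/bin/ibrix_server" -s -t cifs -c stop   # 3. stop SMB on all nodes
run "$IBRIXHOME/bin/ibrix_server" -s -t nfs -c stop    #    stop NFS on all nodes
run pgrep -f likewise                              #    any output: likewise still running
run "$IBRIXHOME/bin/ibrix_umount" -f "$FSNAME"     # 4. unmount each file system
```

Repeat the final unmount for every mounted StoreAll file system before continuing.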
112. be configured on VIFs that will be used by NFS, SMB, FTP, or HTTP.

IMPORTANT: When configuring NIC monitoring, use the same backup pairs that you used when configuring standby servers.

Creating a bonded VIF 49

For example:

# ibrix_nic -m -h node1 -A node2/bond0
# ibrix_nic -m -h node2 -A node1/bond0
# ibrix_nic -m -h node3 -A node4/bond0
# ibrix_nic -m -h node4 -A node3/bond0

Configuring automated failover

To enable automated failover for your file serving nodes, execute the following command:

ibrix_server -m [-h SERVERNAME]

Example configuration

This example uses two nodes, ib50-81 and ib50-82. These nodes are backups for each other, forming a backup pair.

[root@ib50-80 ~]# ibrix_server -l
Segment Servers
SERVER_NAME BACKUP  STATE HA ID                                   GROUP
ib50-81     ib50-82 Up    on 132cf61a-d25b-40f8-890e-e97363ae0d0b servers
ib50-82     ib50-81 Up    on 70258451-4455-484d-bf80-75c94d17121d servers

All VIFs on ib50-81 have backup (standby) VIFs on ib50-82. Similarly, all VIFs on ib50-82 have backup (standby) VIFs on ib50-81. NFS, SMB, FTP, and HTTP clients can connect to bond0:1 on either host. If necessary, the selected server will fail over to bond0:2 on the opposite host. StoreAll clients could connect to bond0 on either host, as these clients do not support or require NIC failover.

The following sample output shows only the relevant fields:

[root@ib50-80 ~]# ibrix_nic -l
HOST IFNAME TYPE STATE IP_ADDRESS MAC_ADDRES
113. brix_mount -f <fsname> -m <mountpoint>

Re-enable High Availability if used:

ibrix_server -m

Start any remote replication, rebalancer, or data tiering tasks that were stopped before the upgrade.

If you are using SMB, set the following parameters to synchronize the SMB software and the Fusion Manager database:

• smb signing enabled
• smb signing required
• ignore_writethru

Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.1 release:

ibrix_cifsconfig -t -S "smb signing enabled=0,smb signing required=0"
ibrix_cifsconfig -t -S "ignore_writethru=1"

The SMB signing feature specifies whether clients must support SMB signing to access SMB shares. See the HP StoreAll Storage File System User Guide for more information about this feature. When ignore_writethru is enabled, StoreAll software ignores writethru buffering to improve SMB write performance on some user applications that request it.

Mount file systems on Linux StoreAll clients

Because of a change in the inode format, files used for snapshots must either be created on StoreAll 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. For more information about upgrading a file system, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

Upgrading Linux StoreAll clients
114. bs>mxmib -a ibrixMib.cfg

For more information about the MIB, see the "Compiling and customizing MIBs" chapter in the HP Systems Insight Manager User Guide, which is available at:

http://www.hp.com/go/insightmanagement/sim

Click Support & Documents and then click Manuals. Navigate to the user guide.

Limitations

Note the following:

• For StoreAll systems, the HP Insight Remote Support implementation is limited to hardware events.

Configuring the StoreAll cluster for Insight Remote Support

To enable 9300/9320 systems for remote support, first register MSA disk arrays and then configure Phone Home settings. All nodes in the cluster should be up when you perform this step.

NOTE: Configuring Phone Home removes any previous StoreAll SNMP configuration details and populates the SNMP configuration with Phone Home configuration details. When Phone Home is enabled, you cannot use ibrix_snmpagent to edit or change the SNMP agent configuration. However, you can use ibrix_snmptrap to add trapsink IPs and you can use ibrix_event to associate events to the trapsink IPs.

Registering MSA disk arrays

To register an MSA disk array with the cluster, run the following command:

ibrix_vs -r -n STORAGENAME -t msa -I IP(s) -U USERNAME [-P PASSWORD]

Configuring Phone Home settings

To configure Phone Home on the GUI, select Cluster Configuration in the upper Navigator and then select Phone Home in the lower Navigator. The Phone Home Setup panel sho
115. ce <FSNAME> is the file system.

Re-create REST API (Object API) shares deleted before the upgrade on each node in the cluster, if desired, by entering the following command:

NOTE: The REST API (Object API) functionality has expanded, and any REST API (Object API) shares you created in previous releases are now referred to as HTTP-StoreAll REST API shares in file-compatible mode. The 6.3 release is also introducing a new type of share called HTTP-StoreAll REST API share in Object mode.

20 Upgrading the StoreAll software to the 6.3 release

ibrix_httpshare -a <SHARENAME> -c <PROFILENAME> -t <VHOSTNAME> -f <FSNAME> -p <DIRPATH> -P <URLPATH> -S "ibrixRestApiMode=filecompatible,anonymous=true"

In this instance:

• <SHARENAME> is the share name.
• <PROFILENAME> is the profile name.
• <VHOSTNAME> is the virtual host name.
• <FSNAME> is the file system.
• <DIRPATH> is the directory path.
• <URLPATH> is the URL path.
• <SETTINGLIST> is the settings.

Wait for the resynchronizer to complete by entering the following command until its output is <FSNAME>: OK:

ibrix_archiving -l

Restore your audit log data by entering the following command:

MDImport -f <FSNAME> -n /tmp/auditData.csv -t audit

In this instance, <FSNAME> is the file system.

Restore your custom metadata by entering the following command:

MDImport -f <FSNAME> -n /t
116. ..............................................................................53
Agile Fusion Manager..............................................................53
Configuring High Availability on the cluster......................................54
What happens during failover......................................................55
Configuring automated failover with the HA Wizard.................................55
Configuring automated failover manually...........................................62
Changing the HA configuration manually............................................63
Failing a server over manually....................................................64
Failing back......................................................................64
Setting up........................................................................64
Checking the High Availability configuration......................................66
Capturing a core dump from a failed node..........................................68
Prerequisites for setting up......................................................68
Setting up nodes for crash capture................................................69
6 Configuring cluster event notification..........................................70
Cluster events....................................................................70
Setting up email notification of cluster events...................................70
Associating events and email addresses............................................71
Configuring email notification settings...........................................71
Dissociating events and email addresses...........................................71
Deleting email addresses..........................................................71
Viewing email notification settings...............................................72
Setting up SNMP notifications.....................................................72
Configuring the SNMP agent........................................................72
Configuring trapsink settings.....................................................73
Deleting trapsinks................................................................74
hie
117. cklist for all upgrades

Step 1. Verify that the entire cluster is currently running StoreAll 6.0 or later by entering the following command:

ibrix_version -l

IMPORTANT: All the StoreAll nodes must be at the same release.

• If you are running a version of StoreAll earlier than 6.0, upgrade the product as described in "Cascading Upgrades" (page 154).
• If you are running StoreAll 6.0 or later, proceed with the upgrade steps in this section.

Step 2. Verify that the /local partition contains at least 4 GB for the upgrade by using the following command:

df -kh /local

Step 3. The 6.3 release requires that nodes hosting the agile Fusion Manager be registered on the cluster network. Run the following command to verify that nodes hosting the agile Fusion Manager have IP addresses on the cluster network:

ibrix_fm -l

If a node is configured on the user network, see "Node is not registered with the cluster network" (page 22) for a workaround.

NOTE: The Fusion Manager and all file serving nodes must be upgraded to the new release at the same time. Do not change the active/passive Fusion Manager configuration during the upgrade.

Step 4. Verify that the crash kernel parameter on all nodes has been set to 256M by viewing the default boot entry in the /etc/grub.conf file, as shown in the following example:

kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/vg1/lv1 crashkernel=256M@16M

The /etc/grub.conf fil
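The checks above lend themselves to a quick pre-flight script. This is a dry-run sketch (each command is echoed, since the ibrix_* commands exist only on a StoreAll node); run the echoed commands on every node and compare results across the cluster.

```shell
#!/bin/sh
# Dry-run sketch of the pre-upgrade checklist.
run() { echo "+ $*"; }

run ibrix_version -l                       # all nodes must report the same release (6.0+)
run df -kh /local                          # confirm at least 4 GB free in /local
run ibrix_fm -l                            # agile FM nodes must have cluster-network IPs
run grep crashkernel=256M /etc/grub.conf   # crash kernel parameter in the default boot entry
```

Any node that fails a check must be corrected before starting the upgrade; do not proceed with a mixed-release cluster.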
118. cks together securely.

• Extend only one rack component at a time. Racks can become unstable if more than one component is extended.

Product warranties

For information about HP product warranties, see the warranty information website:

http://www.hp.com/go/storagewarranty

Subscription service

HP recommends that you register your product at the Subscriber's Choice for Business website:

http://www.hp.com/go/e-updates

After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.

152 Support and other resources

17 Documentation feedback

HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.

153

A Cascading Upgrades

If you are running a StoreAll version earlier than 5.6, do incremental upgrades as described in the following table. If you are running StoreAll 5.6, upgrade to 6.1 before upgrading to 6.3.

If you are upgrading from StoreAll version 5.4: upgrade to StoreAll version 5.5. See "Upgrading the StoreAll software to the 5.5 release" (page 167).

If you are upgrading from StoreAll version 5.5: upgrade to StoreAll version 5.6. See "Upgrading the StoreAll software to the 5.6 release" (page 163).
119. control startall

• Stop processes on specific file serving nodes:

/usr/local/ibrix/stats/bin/ibrix_statscontrol stop <hostname1> <hostname2> ...

• Start processes on specific file serving nodes:

/usr/local/ibrix/stats/bin/ibrix_statscontrol start <hostname1> <hostname2> ...

Troubleshooting the Statistics tool

Testing access

To verify that ssh authentication is enabled and data can be obtained from the nodes without prompting for a password, run the following command:

/usr/local/ibrix/stats/bin/stmanage testpull

Troubleshooting the Statistics tool 105

Other conditions

• Data is not collected. If data is not being gathered in the common directory for the Statistics Manager (/usr/local/statstool/histstats by default), restart the Statistics tool processes on all nodes. See "Controlling Statistics tool processes" (page 105).

• Installation issues. Check the /tmp/stats_install.log and try to fix the condition, or send the /tmp/stats_install.log to HP Support.

• Missing reports for file serving nodes. If reports are missing on the Stats tool web page, check the following:

  - Determine whether collection is enabled for the particular file serving node. If not, see "Enabling collection and synchronization" (page 100).

  - Check for time synchronization. All servers in the cluster should have the same date/time and time zone to allow proper collection and viewing of reports.

Log files
120. d   0 Unknown C 0 Informational Total  5        MP   SW  ES   System Name t   System Type   System Address   Product Name   E y      102420 Storage Device 10 2 4 20 HP X9300 NetStor FSN P    Managed by 10 259  104      y     10245 Management Processc10 2 4 54 Integrated Lights Out    in Serer 10 2 4 20  F      1024 68 Storage Device 10 2 4 68 HP StorageWorks MSA231  ES Managed by 10 2 59 104    F y      10259104 Fusion Manager 10 2 59 104 HP X9000 Solution          _ wintetchastog Server 10 2 4 75 ProLiant DL360 G6       File serving nodes and MSA arrays are associated with the Fusion Manager IP address  In HP SIM   select Fusion Manager and open the Systems tab  Then select Associations to view the devices   You can view all StoreAll devices under Systems by Type  gt  Storage System  gt  Scalable Storage  Solutions  gt  All 9000 Systems    Configuring Insight Remote Support for HP SIM 6 3 and IRS 5 6    Discovering devices in HP SIM    HP Systems Insight Manager  SIM  uses the SNMP protocol to discover and identify StoreAll systems  automatically  On HP SIM  open Options  gt  Discovery  gt  New  and then select Discover a group of  systems  On the New Discovery dialog box  enter the discovery name and the IP addresses of the  devices to be monitored  For more information  see the HP SIM 6 3 documentation        NOTE  Each device in the cluster should be discovered separately        42 Getting started    New Discovery    o Dacover a group of sysiens  Dacover a single system  
121. d after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

If you have an Express Query enabled file system prior to version 6.3, manually complete each file system upgrade as described in "Required steps after the StoreAll Upgrade" (page 20).

Automated offline upgrades for StoreAll software 6.x to 6.3

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:

1. Make sure you have completed all steps in the upgrade checklist (Table 1, page 10).

2. Stop all client I/O to the cluster or file systems. On the Linux client, use lsof <mountpoint> to show open files belonging to active processes.

3. Verify that all StoreAll file systems can be successfully unmounted from all FSN servers:

ibrix_umount -f fsname

Performing the upgrade

14

This upgrade method is supported only for upgrades from StoreAll software 6.x to the 6.3 release. Complete the following steps:

1. StoreAll OS version 6.3 is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox.

Make sure the /local/ibrix folder is empty prior to copying the contents of pkgfull. The upgrade will fail if the /local/ibrix folder contains leftover .rpm packages not listed in the build manifest.

Mount the ISO image and copy the entire d
122. d for protocol traffic  and that interface fails on a file serving node  any protocol clients using the failed interface to  access a mounted file system will lose contact with the file system because they have no knowledge  of the cluster and cannot reroute requests to the standby for the node     Link aggregation and virtual interfaces    When creating a user network interface  you can use link aggregation to combine physical resources  into a single VIF  VIFs allow you to provide many named paths within the larger physical resource   each of which can be managed and routed independently  as shown in the following diagram   See the network interface vendor documentation for any rules or restrictions required for link  aggregation     122 Maintaining the system    e TT wf  E A Jd D    J    SEE so ees F    d       bondo  NICO A bond0 01  192 168 1 101  MAC addr 00 07 91 E9 04 42 B bond0 02  192 168 1 102  HCH C bond0 03  192 168 21  MAC addr  0007 91 E9 04 D bond0 04  192 168 22 _  R E bond0 05  192 168 3 101  Bandwidth is bondedto Bandwidth is divided into  use a single network separate virtual interfaces  interface name  Fort  VIFs  to manage and  with twice the capacity foute independently    Identifying a user network interface for a file serving node    To identity a user network interface for specific file serving nodes  use the ibrix nic command   The interface name  IFNAME  can include only alphanumeric characters and underscores  such  as eth     ibrix nic  a  n IFNA
123. dd NIC" button.

For example, you can create a user VIF that clients will use to access an SMB share serviced by server ib69s1. The user VIF is based on an active physical network on that server. To do this, click Add NIC in the section of the dialog box for ib69s1.

56 Configuring failover

On the Add NIC dialog box, enter a NIC name. In our example, the cluster uses the unified network and has only bond0, the active cluster FM IP. We cannot use bond0:0, which is the management IP VIF. We will create the VIF bond0:1, using bond0 as the base. When you click OK, the user VIF is created.

Server: ib69s1

Enter a NIC name of an existing physical interface (e.g., "eth4" or "bond1") to configure an active physical network. To create a virtual interface (VIF), enter a NIC name (e.g., "bond1:1") on an existing active physical network.

Name: bond
IP Address: 10.
Route:
MTU:

(!) Required Value

The new, active user NIC appears on the NIC HA setup dialog box.

Configuring High Availability on the cluster 57

58

Server HA Pair NIC HA Setup

NIC HA Setup

High Availability for a physical or virtual NIC, typically servicing file share data, works by assigning a standby NIC on the backup server in a server pair.

When server HA is enabled, a monitored NIC will cause auto-failover if the NIC becomes unavailable.

ib69s1 - Active User NICs (Add NIC / Remove NIC)

Monitoring Server NIC Server Standby NIC St
124. des both a Management Console and a CLI. Most operations can be performed from either the StoreAll Management Console or the CLI.

The following operations can be performed only from the CLI:

• SNMP configuration (ibrix_snmpagent, ibrix_snmpgroup, ibrix_snmptrap, ibrix_snmpuser, ibrix_snmpview)
• Health checks (ibrix_haconfig, ibrix_health, ibrix_healthconfig)
• Raw storage management (ibrix_pv, ibrix_vg, ibrix_lv)
• Fusion Manager operations (ibrix_fm) and Fusion Manager tuning (ibrix_fm_tune)
• File system checks (ibrix_fsck)
• Kernel profiling (ibrix_profile)

Getting started

• Cluster configuration (ibrix_clusterconfig)
• Configuration database consistency (ibrix_dbck)
• Shell task management (ibrix_shell)

The following operations can be performed only from the StoreAll Management Console:

• Scheduling recurring data validation scans
• Scheduling recurring software snapshots
• Scheduling recurring block snapshots

Using the StoreAll Management Console

The StoreAll Management Console is a browser-based interface to the Fusion Manager. See the release notes for the supported browsers and other software required to view charts on the dashboard. You can open multiple Management Console windows as necessary.

If you are using HTTP to access the Management Console, open a web browser and navigate to the following location, specifying port 80:

http://<management_console_IP>:80/fusion

If you are using HTTPS to
125. djusts various advanced parameters that affect server operations                          General Tunings Module Tunings     IAD Tunings      Module Tunings The following are advanced system module tunings       Servers   D Summary e 5   i z   Use default values for all Module tune options  defaults defined in parenthesis   a  commit_ywatermark  50   50  0 100        create_sleep_time  10   10  deleg_lru_high_wm  20000   20000  deleg_Iru_low_wm  15000   15000  disconnected_op_timeout   1    fA  0 900   do_async_read  1   1  0 1   do_async_write  2   2  0 3   4 flushd_timeout  500   500  100 6000    high_active_threads  6   6  1 80           On the Servers dialog box  select the servers to which the tunings should be applied     112 Maintaining the system          Modify Server s  Wizard    o General Tunings Servers   o IAD Tunings      Module Tunings In the following grid  select the servers to apply any tuning changes to      Servers    lar Select servers to apply changes to      Server    ib69s1    ib69s2        lt Back   zen       Tuning file serving nodes from the CLI    All Fusion Manager commands for tuning hosts include the  h HOSTLIST option  which supplies  one or more host groups  Setting host tunings on a host group is a convenient way to tune a set  of clients all at once  To set the same host tunings on all clients  specify the clients host group        CAUTION  Changing host tuning settings alters file system performance  Contact HP Support  before changing host
126. e Central Management Server gt   P   Country Name   z Software Entitlement ID    r Read Community    w Write   Community    t System Contact    n System Name    o System Location    For example    ibrix_phonehome  c  i 99 2 4 75  P US  r public  w private  t Admin  n   SYS01 US  o Colorado   Next  configure Insight Remote Support for the version of HP SIM you are using    e HP SIM 7 1 and IRS 5 7  See    Configuring Insight Remote Support for HP SIM 7 1 and IRS  5 7     page 39     e HP SIM 6 3 and IRS 5 6  See    Configuring Insight Remote Support for HP SIM 6 3 and IRS  5 6     page 42      Configuring Insight Remote Support for HP SIM 7 1 and IRS 5 7    To configure Insight Remote Support  complete these steps   1  Configure Entitlements for the servers and chassis in your system   2  Discover devices on HP SIM     Configuring Entitlements for servers and storage    Expand Phone Home in the lower Navigator  When you select Servers  or Storage  the GUI displays  the current Entitlements for that type of device  The following example shows Entitlements for the  servers in the cluster        NOTE  The Chassis selection does not apply to 9300 or 9320 systems                 Cluster Configuration Servers  SW La 3 IP Hostname Product Name SerialNumber Product Number Customer Entered SerialNumber Customer Entered Product Number  vents  ae  Phone Home x9730 node1 ProLiant BL460cG7 SGH107X60H 603718 B21 SGH107X60H 603718 B21  we  mi  Chassis x9730 node2 ProLiant BL460c G7 SGH107X60
127. e al fuego o al agua      Sustituya las baterias s  lo por el repuesto designado por HP     Las bater  as  los paquetes de bater  as y los acumuladores no se deben eliminar junto  con los desperdicios generales de la casa  Con el fin de tirarlos al contenedor de  reciclaje adecuado  utilice los sistemas p  blicos de recogida o devu  lvalas a HP    un distribuidor autorizado de HP o sus agentes     Para obtener m  s informaci  n sobre la sustituci  n de la bater  a o su eliminaci  n    correcta  consulte con su distribuidor o servicio t  cnico autorizado     208 Regulatory compliance notices       Glossary    ACE Access control entry    ACL Access control list    ADS Active Directory Service    ALB Advanced load balancing    BMC Baseboard Management Configuration    CIFS Common Internet File System  The protocol used in Windows environments for shared folders    CL Command line interface  An interface comprised of various commands which are used to control  operating system responses    CSR Customer self repair    DAS Direct attach storage  A dedicated storage device that connects directly to one or more servers    DNS Domain name system    FTP File Transfer Protocol    GSI Global service indicator    HA High availability    HBA Host bus adapter    HCA Host channel adapter    HDD Hard disk drive    IAD HP 9000 Software Administrative Daemon    iLO Integrated Lights Out    IML Initial microcode load    IOPS I Os per second    IPMI Intelligent Platform Management Interface    JB
128. e configuration

Complete the following steps on each node, starting with the node hosting the active management console:

1. Run /usr/local/ibrix/setup/save_cluster_config. This script creates a tgz file named <hostname>_cluster_config.tgz, which contains a backup of the node configuration.

2. Save the <hostname>_cluster_config.tgz file, which is located in /tmp, to the external storage media.

Performing the upgrade

Complete the following steps on each node:

1. Obtain the latest Quick Restore image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).

2. Burn the ISO image to a DVD.

3. Insert the Quick Restore DVD into the server DVD-ROM drive.

4. Restart the server to boot from the DVD-ROM.

5. When the StoreAll Network Storage System screen appears, enter qr to install the StoreAll software on the file serving node.

The server reboots automatically after the software is installed. Remove the DVD from the DVD-ROM drive.

Restoring the node configuration

Complete the following steps on each node, starting with the previous active management console:

1. Log in to the node. The configuration wizard should pop up. Escape out of the configuration wizard.

2. Attach the external storage media containing the saved node configuration information.

3. Restore the configuration. Run the following restore script and pass in the tgz file containing the node's saved con
129. e destination host (DESTHOST) cannot be a host group. For example, to prefer network interface eth3 for traffic from all StoreAll clients (the clients host group) to file serving node s2.hp.com:

ibrix_hostgroup -n -g clients -A s2.hp.com/eth3

Unpreferring network interfaces

To return file serving nodes or StoreAll clients to the cluster interface, unprefer their preferred network interface. The first command unprefers a network interface for a file serving node; the second command unprefers a network interface for a client:

ibrix_server -n -h SRCHOST -D DESTHOST
ibrix_client -n -h SRCHOST -D DESTHOST

To unprefer a network interface for a host group, use the following command:

ibrix_client -n -g HOSTGROUP -A DESTHOST

Making network changes

This section describes how to change IP addresses, change the cluster interface, manage routing table entries, and delete a network interface.

Changing the IP address for a Linux StoreAll client

After changing the IP address for a Linux StoreAll client, you must update the StoreAll software configuration with the new information to ensure that the Fusion Manager can communicate with the client. Use the following procedure:

1. Unmount the file system from the client.
2. Change the client's IP address.
3. Reboot the client or restart the network interface card.
4. Delete the old IP address from the configuration database:

ibrix_client -d -h CLIENT

5. Re-register the client with the Fusion Mana
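On the client side, steps 1 through 4 of the IP-change procedure can be sketched as a dry run (commands are echoed rather than executed; the mount point, interface, and client name below are hypothetical):

```shell
#!/bin/sh
# Dry-run sketch of updating a Linux StoreAll client's IP address.
run() { echo "+ $*"; }

MOUNTPOINT=/mnt/ifs1   # hypothetical StoreAll mount point
CLIENT=client1         # hypothetical client hostname

run umount "$MOUNTPOINT"                                  # 1. unmount the file system
run vi /etc/sysconfig/network-scripts/ifcfg-eth0          # 2. change the client IP (method varies by distro)
run service network restart                               # 3. restart the NIC (or reboot the client)
run ibrix_client -d -h "$CLIENT"                          # 4. delete the old IP from the configuration database
# 5. re-register the client with the Fusion Manager as described in the procedure text
```

The deletion in step 4 only removes the stale record; the client is not reachable by the Fusion Manager again until it is re-registered.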
130. e following command:

ibrix_collect -d -n NAME

To specify more than one collection to be deleted at a time from the CLI, provide the names separated by a semicolon.

Collecting information for HP Support with Ibrix Collect 135

To delete all data collections manually from the CLI, use the following command:

ibrix_collect -d -F

Configuring Ibrix Collect

You can configure data collection to occur automatically upon a system crash. This collection will include additional crash digester output. The archive filename of the system crash triggered collection will be in the format <timestamp>_crash_<crashedNodeName>.tgz.

1. To enable or disable an automatic collection of data after a system crash, and to configure the number of data sets to be retained:

a. Select Cluster Configuration, and then select Ibrix Collect.

b. Click Modify. The Modify Configuration dialog box appears, with General Settings (Enable automatic data collection; Number of data sets to be retained) and Email Settings.

c. Under General Settings, enable or disable automatic collection by checking or unchecking the appropriate box.

d. Enter the number of data sets to be retained in the cluster in the text box.

To enable/disable automatic data collection using the CLI, use the following command:

ibrix_collect -C -a <Yes|No>

To set the number of data sets to be retained in the cluster using the CLI, use
131. e following command at the command prompt:

# ibrix_nic -l

Notice if the "ROUTE" column is unpopulated for IFNAME.

Configuring virtual interfaces for client access

[root@ib50-80 ~]# ibrix_nic -l
HOST    IFNAME  TYPE    STATE              IP_ADDRESS   MAC_ADDRESS    BACKUP_HOST BACKUP_IF ROUTE        VLAN_TAG LINKMON
ib50-81 bond0   Cluster Up,LinkUp          172.16.0.81  00:00:00:00:11                       172.16.0.254          No
ib50-81 bond0:1 User    Up,LinkUp          172.16.0.181 00:00:00:00:11 ib50-82     bond0:2                        No
ib50-81 bond0:2 User                                    00:00:00:00:11                                            No
ib50-82 bond0   Cluster Up,LinkUp          172.16.0.82  00:00:00:00:12                       172.16.0.254          No
ib50-82 bond0:1 User    Up,LinkUp          172.16.0.182 00:00:00:00:12 ib50-81     bond0:2                        No
ib50-82 bond0:2 User                                    00:00:00:00:12                                            No
ib50-81 (Active FM Nonedit) bond0:0 Cluster Up,LinkUp,ActiveFM 172.16.0.281                                      No

3. To assign the IFNAME a default route for the parent cluster bond and the user VIFs assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt:

# ibrix_nic -r -n IFNAME -h HOSTNAME -A -R <ROUTE_IP>

4. Configure backup monitoring, as described in "Configuring backup servers" (page 49).

Creating a bonded VIF

NOTE: The examples in this chapter use the unified network and create a bonded VIF on bond0. If your cluster uses a different network layout, create the bonded VIF on a user network bond such as bond1.

Use the following procedure to create a bonded VIF (bond0:1 in this example):

1. If high availability
132. e ib51-101, which is hosting the active Fusion Manager, has an IP address on the user network (192.168.51.101) instead of the cluster network:

[root@ib51-101 ibrix]# ibrix_fm -i
FusionServer: ib51-101 (active, quorum is running)

[root@ib51-101 ibrix]# ibrix_fm -f
NAME     IP ADDRESS
ib51-101 192.168.51.101
ib51-102 10.10.51.102

1. If the node is hosting the active Fusion Manager, as in this example, stop the Fusion Manager on that node:

[root@ib51-101 ibrix]# /etc/init.d/ibrix_fusionmanager stop
Stopping Fusion Manager Daemon [ OK ]
[root@ib51-101 ibrix]#

2. On the node now hosting the active Fusion Manager (ib51-102 in the example), unregister node ib51-101:

Upgrading the StoreAll software to the 6.3 release

[root@ib51-102 ~]# ibrix_fm -u ib51-101
Command succeeded

3. On the node hosting the active Fusion Manager, register node ib51-101 and assign the correct IP address:

[root@ib51-102 ~]# ibrix_fm -R ib51-101 -I 10.10.51.101
Command succeeded

NOTE: When registering a Fusion Manager, be sure the hostname specified with -R matches the hostname of the server.

The ibrix_fm commands now show that node ib51-101 has the correct IP address and node ib51-102 is hosting the active Fusion Manager:

[root@ib51-102 ~]# ibrix_fm -f
NAME     IP ADDRESS
ib51-101 10.10.51.101
ib51-102 10.10.51.102

[root@ib51-102 ~]# ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)

File system unmount issues

If a fi
levers (2)
HP Systems Insight Manager display
Hard drive bays
SATA optical drive bay
Video connector
USB connectors (2)

Rear view of file serving node

[Figure: rear view of the file serving node]

Item  Description
1     PCI slot 5
2     PCI slot 6
3     PCI slot 4
4     PCI slot 2
5     PCI slot 3
6     PCI slot 1
7     Power supply 2 (PS2)
8     Power supply 1 (PS1)
9     USB connectors (2)
10    Video connector
11    NIC 1 connector
12    NIC 2 connector

System component diagrams 185

Item  Description
13    Mouse connector
14    Keyboard connector
15    Serial connector
16    iLO 2 connector
17    NIC 3 connector
18    NIC 4 connector

186 System component and cabling diagrams for 9320 systems

Server PCIe card configurations (cards listed in slot order as printed):

SATA 1Gb:
  HP SC08Ge 3Gb SAS Host Bus Adapter
  NC364T Quad 1Gb NIC
  empty
  empty
  empty
  empty

SATA 10Gb:
  HP SC08Ge 3Gb SAS Host Bus Adapter
  empty
  empty
  NC522SFP dual 10Gb NIC
  empty
  empty

SAS 1Gb:
  HP SC08Ge 3Gb SAS Host Bus Adapter
  NC364T Quad 1Gb NIC
  empty
  HP SC08Ge 3Gb SAS Host Bus Adapter
  empty
  empty

SAS 10Gb:
  HP SC08Ge 3Gb SAS Host Bus Adapter
  HP SC08Ge 3Gb SAS Host Bus Adapter
  empty
  NC522SFP dual 10Gb NIC
  empty
  emp
the management console configuration:

<ibrixhome>/bin/ibrix_fm -B

The output is stored at /usr/local/ibrix/tmp/fmbackup.zip. Be sure to save this file in a location outside of the cluster.

On the node hosting the passive management console, place the management console into maintenance mode:

<ibrixhome>/bin/ibrix_fm -m maintenance -A

On the active management console node, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.

On the active management console node, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

Change to the installer directory if necessary and run the upgrade:

./ibrixupgrade -f

The installer upgrades both the management console software and the file serving node software on this node.

Verify the status of the management console:

/etc/init.d/ibrix_fusionmanager status

The status command confirms whether the correct services are running. Output will be similar to the following:

Fusion Manager Daemon (pid 18748) running...

Check /usr/local/ibrix/log/fusionserver.log for errors.

Upgrade the remaining
might contain multiple instances of the crashkernel parameter. Make sure you modify each instance that appears in the file.

If you must modify the /etc/grub.conf file, follow the steps in this section:

1. Use SSH to access the active Fusion Manager (FM).
2. Do one of the following:
   • Versions 6.2 and later: Place all passive FMs into nofmfailover mode:
     ibrix_fm -m nofmfailover -A
   • Versions earlier than 6.2: Place all passive FMs into maintenance mode:
     ibrix_fm -m maintenance -A
3. Disable Segment Server Failover on each node in the cluster:
   ibrix_server -m -U -h <node>
4. Set the crash kernel to 256M in the /etc/grub.conf file. The /etc/grub.conf file might contain multiple instances of the crashkernel parameter. Make sure you modify each instance that appears in the file.

   NOTE: Save a copy of the /etc/grub.conf file before you modify it.

Table 1 Prerequisites checklist for all upgrades (continued)

Step  Description

   The following example shows the crash kernel set to 256M:

   kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/vg1/lv1 crashkernel=256M@16M

5. Reboot the active FM.
6. Use SSH to access each passive FM and do the following:
   a. Modify the /etc/grub.conf file as described in the previous steps.
   b. Reboot the node.
7. After all nodes in the cluster are back up, use SSH to access the active FM.
8. Place all disabled FMs back into passive mode:
   ibri
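Because the file can carry several crashkernel entries, a scripted edit is less error-prone than changing each line by hand. The following is a minimal sketch, not part of the documented procedure: it works on a scratch copy (the path /tmp/grub.conf.test is hypothetical) and rewrites every instance with sed, using the 256M@16M value shown in the checklist.

```shell
# Hedged sketch: edit a scratch COPY, never the live /etc/grub.conf.
# Sample file with two kernel entries -- as the checklist warns, the
# file can contain multiple instances of the crashkernel parameter.
cat > /tmp/grub.conf.test <<'EOF'
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/vg1/lv1 crashkernel=128M@16M
kernel /vmlinuz-2.6.18-194.el5.debug ro root=/dev/vg1/lv1 crashkernel=128M@16M
EOF

# Update every crashkernel=... occurrence in one pass.
sed -i 's/crashkernel=[^ ]*/crashkernel=256M@16M/g' /tmp/grub.conf.test

grep -c 'crashkernel=256M@16M' /tmp/grub.conf.test   # prints 2
```

On a real node you would run the same sed against /etc/grub.conf only after saving a backup copy, as the NOTE above requires.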
minute reports, you will need to have the collector allow value set in your stats.conf configuration file.

To generate a report, enter the necessary specifications and click Submit. The completed report appears in the list of reports on the statistics home page.

When generating reports, be aware of the following:

• A report can be generated only from statistics that have been gathered. For example, if you start the tool at 9:40 a.m. and ask for a report from 9:00 a.m. to 9:30 a.m., the report cannot be generated because data was not gathered for that period.

• Reports are generated on an hourly basis. It may take up to an hour before a report is generated and made available for viewing.

NOTE: If the system is currently generating reports and you request a new report at the same time, the GUI issues an error. Wait a few moments and then request the report again.

Deleting reports

To delete a report, log into each node and remove the report from the /usr/local/statstool/histstats/reports directory.

Maintaining the Statistics tool

Space requirements

The Statistics tool requires about 4 MB per hour for a two-node cluster. To manage space, take the following steps:

• Maintain sufficient space (4 GB to 8 GB) for data collection in the /usr/local/statstool/histstats directory.

• Monitor the space in the /usr/local/statstool/histstats/reports directory. For the default values, see "Changing the Statistics tool configuration" (pa
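The space figures above translate directly into a retention window you can estimate before sizing the directory. The following is a hedged back-of-envelope sketch using the quoted rate for a two-node cluster; larger clusters will consume space faster, so treat the rate as an assumption to measure on your own system.

```shell
# Rough sizing for the histstats directory, from the rates quoted above:
# ~4 MB per hour for a two-node cluster, against the 4-8 GB recommendation.
mb_per_hour=4
budget_gb=8                      # upper end of the recommended range

budget_mb=$(( budget_gb * 1024 ))
hours=$(( budget_mb / mb_per_hour ))
days=$(( hours / 24 ))

echo "${budget_gb} GB holds about ${hours} hours (~${days} days) of statistics"
```

At 4 MB/hour, an 8 GB budget covers roughly 2048 hours, or about 85 days, of collected data.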
the node is hosting the active Fusion Manager, as in this example, stop the Fusion Manager on that node:

[root@ib51-101 ibrix]# /etc/init.d/ibrix_fusionmanager stop
Stopping Fusion Manager Daemon        [ OK ]
[root@ib51-101 ibrix]#

On the node now hosting the active Fusion Manager (ib51-102 in the example), unregister node ib51-101:

[root@ib51-102 ~]# ibrix_fm -u ib51-101
Command succeeded!

On the node hosting the active Fusion Manager, register node ib51-101 and assign the correct IP address:

[root@ib51-102 ~]# ibrix_fm -R ib51-101 -I 10.10.51.101
Command succeeded!

NOTE: When registering a Fusion Manager, be sure the hostname specified with -R matches the hostname of the server.

The ibrix_fm commands now show that node ib51-101 has the correct IP address and node ib51-102 is hosting the active Fusion Manager:

[root@ib51-102 ~]# ibrix_fm -f
NAME      IP ADDRESS
ib51-101  10.10.51.101
ib51-102  10.10.51.102

162 Cascading Upgrades

[root@ib51-102 ~]# ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)

File system unmount issues

If a file system does not unmount successfully, perform the following steps on all servers:

1. Run the following commands:

   chkconfig ibrix_server off
   chkconfig ibrix_ndmp off
   chkconfig ibrix_fusionmanager off

2. Reboot all servers.

3. Run the following commands to move the services back to the on state. The commands do not start the services:

   chkconfig ibrix_server on
   chkconfig ib
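The off/reboot/on sequence above toggles the same three services both times, so a small loop keeps the list in one place. This is an illustrative sketch only, not part of the documented procedure: the chkconfig commands are echoed rather than executed so the example can run anywhere.

```shell
# Service names taken from the unmount-recovery steps above.
services="ibrix_server ibrix_ndmp ibrix_fusionmanager"

# Before the reboot: switch each service off.
for svc in $services; do
    echo "chkconfig $svc off"    # echoed for illustration; drop echo to run
done

echo "# ... reboot all servers here ..."

# After the reboot: return each service to the on state. As the note in
# step 3 says, chkconfig on does not start the services.
for svc in $services; do
    echo "chkconfig $svc on"
done
```

Removing the echo wrappers would perform the real toggling on a node where chkconfig manages these services.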
specified: the authentication password, both the authentication and privacy passwords, or no passwords. The CONTEXT_NAME is required if the trap receiver has defined subsets of managed objects. The format is:

ibrix_snmptrap -c -h HOSTNAME -v 3 [-p PORT] [-n USERNAME] [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD] [-x CONTEXT_NAME] [-s {on|off}]

The following command creates a v3 trapsink with a named user and specifies the passwords to be applied to the default algorithms. If specified, passwords must contain at least eight characters.

Setting up SNMP notifications 73

ibrix_snmptrap -c -h lab13-114 -v 3 -n trapsender -k auth-passwd -z priv-passwd

Associating events and trapsinks

Associating events with trapsinks is similar to associating events with email recipients, except that you specify the host name or IP address of the trapsink instead of an email address.

Use the ibrix_event command to associate SNMP events with trapsinks. The format is:

ibrix_event -c -y SNMP [-e ALERT|INFO|EVENTLIST] -m TRAPSINK

For example, to associate all Alert events and two Info events with a trapsink at IP address 192.168.2.32, enter:

ibrix_event -c -y SNMP -e ALERT,server_registered,filesystem_created -m 192.168.2.32

Use the ibrix_event -d command to dissociate events and trapsinks:

ibrix_event -d -y SNMP [-e ALERT|INFO|EVENTLIST] -m TRAPSINK

Defining views

A MIB view is a collection of paired OID subtrees and as
the command once for each destination host that the client should contact using the specified interface:

ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME

For example:

ibrix_client -n -h client12.mycompany.com -A ib50-81.mycompany.com/bond1

NOTE: Because the backup NIC cannot be used as a preferred network interface for StoreAll clients, add one or more user network interfaces to ensure that HA and client communication work together.

Configuring VLAN tagging

VLAN capabilities provide hardware support for running multiple logical networks over the same physical networking hardware. To allow multiple packets for different VLANs to traverse the same physical interface, each packet must have a field added that contains the VLAN tag. The tag is a small integer number that identifies the VLAN to which the packet belongs. When an intermediate switch receives a "tagged" packet, it can make the appropriate forwarding decisions based on the value of the tag.

When set up properly, StoreAll systems support VLAN tags being transferred all of the way to the file serving node network interfaces. The ability of file serving nodes to handle the VLAN tags natively in this manner makes it possible for the nodes to support multiple VLAN connections simultaneously over a single bonded interface.

Linux networking tools such as ifconfig display a network interface with an associated VLAN tag using a device
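The device-name convention that ifconfig displays can be illustrated in shell: a tagged interface is named as parent.tag, so the VLAN tag can be split off with parameter expansion. This is a hedged illustration only; the interface name bond0.51 below is hypothetical.

```shell
# Hypothetical tagged interface: bonded interface bond0 carrying VLAN 51.
iface="bond0.51"

parent="${iface%.*}"   # everything before the last dot -> bond0
tag="${iface##*.}"     # everything after the last dot  -> 51

echo "parent=${parent} vlan_tag=${tag}"   # prints: parent=bond0 vlan_tag=51
```

The same naming pattern is what you would look for in ifconfig output when verifying that a VLAN-tagged interface reached the file serving node.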
the utility.

If the upgrade is stopped or the system shuts down, you can restart the upgrade utility and it will continue the operation. (To stop an upgrade, press Ctrl-C on the command line or send an interrupt signal to the process.)

There should be no adverse effects to the file system; however, certain blocks that were newly allocated by the file system at the time of the interruption will be lost. Running ibrix_fsck in corrective mode will recover the blocks.

NOTE: The upgrade60.sh utility cannot upgrade segments in an INACTIVE state. If a node is rebooted or shuts down with an unmounted file system, the file system segments owned by that node will be in an INACTIVE state. To move the segments to ACTIVE states, mount the file system with ibrix_mount. Then unmount the file system with ibrix_umount and resume running upgrade60.sh. You can verify segment states with the Linux lvscan command.

Migrating large files

The upgrade60.sh utility does not upgrade files larger than 3.8 TB. After the upgrade is complete and the file system is mounted, migrate the file to another segment in the file system using the following command:

ibmigrate -f filesystem -m 1 -d destination_segment file

The following example migrates file_9 from its current segment to destination segment 2:

ibmigrate -f ibfs -m 1 -d 2 /mnt/storeall/test_dir/dir1/file_9

After the file is migrated, you can snap the file.

Synopsis

Run the upgrade utility:

upgrade60.sh -v
their intended function and are, therefore, covered by these rules. These rules place computers and related peripheral devices into two classes, A and B, depending upon their intended installation. Class A devices are those that may reasonably be expected to be installed in a business or commercial environment. Class B devices are those that may reasonably be expected to be installed in a residential environment (for example, personal computers). The FCC requires devices in both classes to bear a label indicating the interference potential of the device as well as additional operating instructions for the user.

FCC rating label

The FCC rating label on the device shows the classification (A or B) of the equipment. Class B devices have an FCC logo or ID on the label. Class A devices do not have an FCC logo or ID on the label. After you determine the class of the device, refer to the corresponding statement.

Class A equipment

This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely t
[GUI screenshot residue: NICs panel showing bond0:2 interfaces on ib69s1 in Up,LinkUp and Inactive,Standby states]

Changing the HA configuration

To change the configuration of a NIC, select the server on the Servers panel, and then select NICs from the lower Navigator. Click Modify on the NICs panel. The General tab on the Modify NIC Properties dialog box allows you to change the IP address and other NIC properties. The NIC HA tab allows you to enable or disable HA monitoring and failover on the NIC and to change or remove the standby NIC. You can also enable link state monitoring if it is supported on your cluster. See "Configuring link state monitoring for iSCSI network interfaces" (page 51).

To view the power source for a server, select the server on the Servers panel, and then select Power from the lower Navigator. The Power Source panel shows the power source configured on the server when HA was configured. You can add or remove power sources on the server, and can power the server on or off, or reset the server.

[GUI screenshot residue: Power Source panel showing ib69s1-ilo2 at 192.168.69.101]

Configuring High Availability on the cluster 61

Configuring automated failover manually

To configure automated failover manually, complete these steps:

1. Configure file serving nodes in back
mode:

ibrix_fm -m nofmfailover

Migrate all segments from FSN4 to its failover partner (FSN3):

ibrix_fs -m -f ibfs1 -h FSN4,FSN3

NOTE: If there is a large number of segments to be migrated, and/or segments can be migrated to several FSNs (in SAN environments), run the following command instead:

ibrix_fs -m -f ibfs1 -s LVLIST -h FSN_name

120 Maintaining the system

Remove the High Availability configuration between FSN4 and FSN3:
a. Stop NIC monitoring for the user NICs:
   ibrix_nic -m -h FSN3 -D FSN4/bond0:1
   ibrix_nic -m -h FSN4 -D FSN3/bond0:1
b. Remove the backup NICs:
   ibrix_nic -b -u FSN4/bond0:2
   ibrix_nic -b -u FSN3/bond0:2
c. Remove the backup server:
   ibrix_server -b -U -h FSN4
   ibrix_server -b -U -h FSN3

If FSN4 is configured for DNS round robin or there are NFS/SMB clients mounting file systems directly from FSN4, you must migrate the user NIC IP address to another FSN. Continue with step 6 if the FSN4 NIC IP address has been removed and there are no clients accessing data from FSN4.
a. Create a new placeholder NIC on FSN3 that you will be migrating the IP address to:
   ibrix_nic -a -n bond0:3 -h FSN3
b. Migrate NIC bond0:1 from FSN4 to bond0:3 on FSN3:
   ibrix_nic -s -H bond0:1/FSN4,bond0:3/FSN3

Stop NFS and SMB services on FSN4:

ibrix_server -t -s NFS -c stop
ibrix_server -t -s cifs -c stop

Remove all NFS and SMB shares from FSN4 (in this example, ibfs1 is shared via NFS and CIFS):

ibrix_ex
node:

• ss_summary.xml — Commands pertaining to the file serving node

• common_summary.xml — Commands and logs common to both Fusion Manager and file serving nodes

NOTE: These xml files should be modified carefully. Any missing tags during modification might cause Ibrix Collect to not work properly.

Viewing software version numbers

To view version information for a list of hosts, use the following command:

ibrix_version -l [-h HOSTLIST]

For each host, the output includes:

• Version number of the installed file system
• Version numbers of the IAD and File System module
• Operating system type and OS kernel version
• Processor architecture

The -S option shows this information for all file serving nodes. The -C option shows the information for all StoreAll clients.

The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.

Troubleshooting specific issues

Software services

Cannot start services on a file serving node (or Linux StoreAll client)

SELinux might be enabled. To determine the current state of SELinux, use the getenforce command. If it returns enforcing, disable SELinux using either of these commands:

setenforce Permissive
setenforce 0

To permanently disab
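Note that setenforce changes SELinux enforcement only until the next reboot; the persistent setting lives in /etc/selinux/config. The following is a hedged sketch of that edit, not part of the documented procedure: it works on a scratch copy (the path /tmp/selinux-config.test is hypothetical) so it can run anywhere, whereas on a real node you would edit the actual file and reboot for the change to take effect.

```shell
# Hedged sketch: edit a scratch COPY of /etc/selinux/config.
cat > /tmp/selinux-config.test <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Change the persistent enforcement mode.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config.test

grep '^SELINUX=' /tmp/selinux-config.test   # prints: SELINUX=disabled
```

Setting SELINUX=permissive instead of disabled keeps SELinux loaded but non-blocking, which can be useful while diagnosing which policy denial is stopping the services.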
Power source(s) configured                        PASSED
Backup server or backups for segments configured  PASSED
Automatic server failover configured              PASSED

Cluster & User Nics monitored:
Cluster nic xs01.hp.com/eth1 monitored            FAILED  Not monitored
User nics configured with a standby nic           PASSED

HBA ports monitored:

Configuring High Availability on the cluster 67

Hba port 21:01:00:e0:8b:2a:0d:6d monitored        FAILED  Not monitored
Hba port 21:00:00:e0:8b:0a:0d:6d monitored        FAILED  Not monitored

Capturing a core dump from a failed node

The crash capture feature collects a core dump from a failed node when the Fusion Manager initiates failover of the node. You can use the core dump to analyze the root cause of the node failure. When enabled, crash capture is supported for both automated and manual failover. Failback is not affected by this feature. By default, crash capture is disabled. This section provides the prerequisites and steps for enabling crash capture.

NOTE: Enabling crash capture adds a delay (up to 240 seconds) to the failover to allow the crash kernel to load. The failover process ensures that the crash kernel is loaded before continuing.

When crash capture is enabled, the system takes the following actions when a node fails:

1. The Fusion Manager triggers a core dump on the failed node when failover starts, changing the state of the node to Up, InFailover.

2. The failed node boots into the crash kernel. The state of the node change
Configuring virtual interfaces for client access

StoreAll software uses a cluster network interface to carry Fusion Manager traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. To provide failover support for the Fusion Manager, a virtual interface is created for the cluster network interface.

Although the cluster network interface can carry traffic between file serving nodes and clients, HP recommends that you configure one or more user network interfaces for this purpose.

To provide high availability for a user network, you should configure a bonded virtual interface (VIF) for the network and then set up failover for the VIF. This method prevents interruptions to client traffic. If necessary, the file serving node hosting the VIF can fail over to its backup server, and clients can continue to access the file system through the backup server.

StoreAll systems also support the use of VLAN tagging on the cluster and user networks. See "Configuring VLAN tagging" (page 51) for an example.

Network and VIF guidelines

To provide high availability, the user interfaces used for client access should be configured as bonded virtual interfaces (VIFs). Note the following:

• Nodes needing to communicate for file system coverage or for failover must be on the same network interface. Also, nodes set up as a failover pair must be connected to the same network interface.

• Use a Gigabit Ethernet port (or faste
terms and how to obtain and install new StoreAll software product license keys.

NOTE: For licensing features such as block snapshots on the HP P2000 G3 MSA Array System or HP 2000 Modular Smart Array, see the array documentation.

Viewing license terms

The StoreAll software license file is stored in the installation directory. To view the license from the GUI, select Cluster Configuration in the Navigator and then select License.

To view the license from the CLI, use the following command:

ibrix_license -i

The output reports your current node count and capacity limit. In the output, Segment Server refers to file serving nodes.

Retrieving a license key

When you purchased this product, you received a License Entitlement Certificate. You will need information from this certificate to retrieve and enter your license keys.

You can use any of the following methods to request a license key:

• Obtain a license key from http://webware.hp.com.

• Use AutoPass to retrieve and install permanent license keys. See "Using AutoPass to retrieve and install permanent license keys" (page 128).

• Fax the Password Request Form that came with your License Entitlement Certificate. See the certificate for fax numbers in your area.

• Call or email the HP Password Center. See the certificate for telephone numbers in your area or email addresses.

Using AutoPass to retrieve and install permanent license keys

The procedure must be run from a client with
kernel, and StoreAll client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client.

NOTE: To use the verify client command, the StoreAll client software must be installed.

Upgrading Windows StoreAll clients

Complete the following steps on each client:

1. Remove the old Windows StoreAll client software using the Add or Remove Programs utility in the Control Panel.
2. Copy the Windows StoreAll client MSI file for the upgrade to the machine.
3. Launch the Windows Installer and follow the instructions to complete the upgrade.
4. Register the Windows StoreAll client again with the cluster and check the option to Start Service after Registration.
5. Check Administrative Tools > Services to verify that the StoreAll client service is started.
6. Launch the Windows StoreAll client. On the Active Directory Settings tab, click Update to retrieve the current Active Directory settings.
7. Mount file systems using the StoreAll Windows client GUI.

NOTE: If you are using Remote Desktop to perform an upgrade, you must log out and log back in to see the drive mounted.

Upgrading pre-6.0 file systems for software snapshots

To support software snapshots, the inode format was changed in the StoreAll 6.0 release. The upgrade60.sh utility upgrades a file system created on a pre-6.0 release, enabling software snapshots to be taken on the file system.

The utility can also determine the needed
Version 6.3 is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox.

Make sure the /local/ibrix folder is empty prior to copying the contents of pkgfull. The upgrade will fail if the /local/ibrix folder contains leftover *.rpm packages not listed in the build manifest.

Mount the ISO image on each node and copy the entire directory structure to the /local/ibrix directory on the disk running the OS.

The following is an example of the mount command:

mount -o loop /local/pkg/ibrix/pkgfull/FS_6.3.72_IAS_6.3.72_x86_64_signed.iso /mnt/<storeall>

In this example, <storeall> can have any name.

The following is an example of the copy command:

cp -R /mnt/storeall/* /local/ibrix

Change directory to /local/ibrix on the disk running the OS and then run chmod -R 777 * on the entire directory structure.

Run the following upgrade script:

./ibrixupgrade -f

The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes. The Fusion Manager is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the Fusion Manager is installed in passive mode on that s
erver.

Upgrade Linux StoreAll clients. See "Upgrading Linux StoreAll clients" (page 18).

If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade

Complete the following steps:

1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requested later in this procedure.

2. Upgrade your firmware as described in "Upgrading firmware" (page 129).

3. Run the following command to rediscover physical volumes:

   ibrix_pv -a

4. Apply any custom tuning parameters, such as mount options.

5. Remount all file systems:

   ibrix_mount -f <fsname> -m <mountpoint>

6. Re-enable High Availability if used:

   ibrix_server -m

7. Start any remote replication, rebalancer, or data tiering tasks that were stopped before the upgrade.

8. If you are using SMB, set the following parameters to synchronize the SMB software and the Fusion Manager database:

   • smb signing enabled
   • smb signing required
   • ignore_writethru

   Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.3 release:

   ibrix_cifsconfig -t -S "smb_signing_enabled=0,smb_signing_required=0"

   ibrix_cifsconfig -t -S "ignore_writethru=1"

   The SMB signing feature specifies wh
server 10.10.18.4 true S_NEW

Check Results:

Check: bv18-04 can ping remote segment server hosts
Check Description                                Result  Result Information
Remote server bv18-03 pingable                   PASSED

Check: Iad's monitored nics are pingable
nic bv18-04/bond1:2 pingable from host bv18-03   PASSED

Check: Physical volumes are readable
Physical volume OwndzX-STuL-wSIi-wc7w-12hv-JZ2g-Lj2JTf readable  PASSED  /dev/mpath/mpath2
Physical volume aoA402-Ilek-G9B2-HHyR-H5Y8-eexU-P6knhd readable  PASSED  /dev/mpath/mpath
Physical volume h7krR6-2pxA-M8bD-dkdf-3PK7-iwFE-L17jcD readable  PASSED  /dev/mpath/mpath0
Physical volume voXTso-a2KQ-MWCN-tGcu-10Bs-ejWG-YrKLEe readable  PASSED  /dev/mpath/mpath3

Check: Iad and Fusion Manager consistent
Check Description                                           Result
bv18-03 engine uuid matches on Iad and Fusion Manager       PASSED
bv18-03 IP address matches on Iad and Fusion Manager        PASSED
bv18-03 network protocol matches on Iad and Fusion Manager  PASSED
bv18-03 engine connection state on Iad is up                PASSED
bv18-04 engine uuid matches on Iad and Fusion Manager       PASSED
bv18-04 IP address matches on Iad and Fusion Manager        PASSED
bv18-04 network protocol matches on Iad and Fusion Manager  PASSED
bv18-04 engine connection state on Iad is up                PASSED

Monitoring cluster health 97

ibrixFS file system uuid matches on Iad and Fusion Manager        PASSED
ibrixFS file system generation matches on Iad and Fusion Manager  PASSED
servers. This traffic can use the cluster network interface or a user network interface.

To specify a new virtual cluster interface, use the following command:

ibrix_fm -c <VIF IP address> -d <VIF Device> -n <VIF Netmask> -v cluster [-I <Local_IP_address_or_DNS_hostname>]

Managing routing table entries

StoreAll Software supports one route for each network interface in the system routing table. Entering a new route for an interface overwrites the existing routing table entry for that interface.

Adding a routing table entry

To add a routing table entry, use the following command:

ibrix_nic -r -n IFNAME -h HOSTNAME -A -R ROUTE

The following command adds a route for virtual interface eth2:232 on file serving node s2.hp.com, sending all traffic through gateway gw.hp.com:

ibrix_nic -r -n eth2:232 -h s2.hp.com -A -R gw.hp.com

Deleting a routing table entry

If you delete a routing table entry, it is not replaced with a default entry. A new replacement route must be added manually. To delete a route, use the following command:

ibrix_nic -r -n IFNAME -h HOSTNAME -D

The following command deletes all routing table entries for virtual interface eth0:1 on file serving node s2.hp.com:

ibrix_nic -r -n eth0:1 -h s2.hp.com -D

Deleting a network interface

Before deleting the interface used as the cluster interface on a file serving node, you must assign a new interface as the cluster interface. See "Changin
es.

Complete the following steps on each file serving node:

1. Manually fail over the file serving node:

   <ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME

   The node will be rebooted automatically.

2. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.

3. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

4. Change to the installer directory if necessary and execute the following command:

   ./ibrixupgrade -f

   The upgrade automatically stops services and restarts them when the process is complete.

5. When the upgrade is complete, verify that the StoreAll software services are running on the node:

   /etc/init.d/ibrix_server status

   The output will be similar to the following. If the IAD service is not running on your system, contact HP Support.

   IBRIX Filesystem Drivers loaded
   ibrcud is running... pid 23325
   IBRIX IAD Server (pid 23368) running...

6. Verify that the ibrix and ipfs services are running:

   lsmod | grep ibrix
   ibrix 2323332 0 (unused)

   lsmod | grep ipfs
   ipfs1 102592 0 (unused)

   If either grep command returns empty, contact HP Support.
requested later in this procedure.

Upgrade your firmware as described in "Upgrading firmware" (page 129).

Mount file systems on Linux StoreAll clients.

If you have a file system version prior to version 6, you might have to make changes for snapshots and data retention, as mentioned in the following list:

• Snapshots: Files used for snapshots must either be created on StoreAll software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

• Data retention: Files used for data retention, including WORM and auto-commit, must be created on StoreAll software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

If you have an Express Query enabled file system prior to version 6.3, manually complete each file system upgrade as described in "Required steps after the StoreAll Upgrade" (page 20).

Manual offline upgrades for StoreAll software 6.x to 6.3

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:
ether clients must support SMB signing to access SMB shares. See the HP StoreAll Storage File System User Guide for more information about this feature. When ignore_writethru is enabled, StoreAll software ignores writethru buffering to improve SMB write performance on some user applications that request it.

Manual offline upgrades for StoreAll software 6.x to 6.3 17

9. Mount file systems on Linux StoreAll clients.
10. If you have a file system version prior to version 6, you might have to make changes for snapshots and data retention, as described in the following list:
• Snapshots: Files used for snapshots must either be created on StoreAll software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).
• Data retention: Files used for data retention, including WORM and auto-commit, must be created on StoreAll software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).
11. If you have an Express Query enabled file system prior to versio
everal components with upgradable firmware. The following table lists these components and specifies whether they can be upgraded online and in a nondisruptive manner. The following example is for the 9320 system.

Component         Online and nondisruptive?
DL380             Nondisruptive if done one server at a time
SAS HBA           Yes, if done one server at a time
OS image          Yes, if done one server at a time
RAID controller   Yes, if done one controller at a time
HDD               No

Enter the following command to show which components could be flagged for flash upgrade:
hpsp_fmt -lc
The following is an example of the server components that are displayed:

SERVER:
ILO3 (Integrated Lights-Out (iLO) 3)
BIOS (System ROM for Server Blade)
Power_Mgmt_Ctlr (Power Management Controller)
Smart_Array_Ctlr (HP Embedded Smart Array Controller)
NIC (HP Embedded Network Adapter)
PCIeNIC (HP PCIe Network Adapter)
SERVER_HDD (HP Server Hard Disk Drives)

Components for firmware upgrades 129

Steps for upgrading the firmware

IMPORTANT: On the StoreAll 9320 Storage platform, the following prerequisite measures must be followed before performing the firmware update operation on enclosure hard disk drives:
• Storage disk drive update is an OFFLINE process. Ensure that all host and array I/O is stopped prior to the update.
• Make sure all the file systems are unmounted. Failure to comply may result in an OS crash.
• Ensure that no other user is performing administrative f
157. ewall     page 34     HP Insight Remote Support and Phone Home  See    Configuring HP Insight Remote Support  on StoreAll systems     page 35      Virtual interfaces for client access  See    Configuring virtual interfaces for client access      page 48      Cluster event notification through email or SNMP  See    Configuring cluster event notification      page 70      Fusion Manager backups  See    Backing up the Fusion Manager configuration     page 77    NDMP backups  See    Using NDMP backup applications     page 77     Statistics tool  See    Using the Statistics tool     page 100     lbrix Collect  See    Collecting information for HP Support with the lbrixCollect     page 134      Setting up the system 27    File systems  Set up the following features as needed     NFS  SMB  Server Message Block   FTP  or HTTP  Configure the methods you will use to access  file system data     Quotas  Configure user  group  and directory tree quotas as needed     Remote replication  Use this feature to replicate changes in a source file system on one cluster  to a target file system on either the same cluster or a second cluster     Data retention and validation  Use this feature to manage WORM and retained files     Antivirus support  This feature is used with supported Antivirus software  allowing you to scan  files on a StoreAll file system     StoreAll software snapshots  This feature allows you to capture a point in time copy of a file  system or directory for online backup purpo
example sets up monitoring for NICs over bond0:
ibrix_nic -m -h node1 -A node2/bond0
ibrix_nic -m -h node2 -A node1/bond0
ibrix_nic -m -h node3 -A node4/bond0
ibrix_nic -m -h node4 -A node3/bond0

The next example sets up server s2.hp.com to monitor server s1.hp.com over user network interface eth1:
ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1

4. Enable automated failover.
Automated failover is turned off by default. When automated failover is turned on, the Fusion Manager starts monitoring heartbeat messages from file serving nodes. You can turn automated failover on and off for all file serving nodes or for selected nodes.
Turn on automated failover:
ibrix_server -m [-h SERVERNAME]

Changing the HA configuration manually

Update a power source:
If you change the IP address or password for a power source, you must update the configuration database with the changes. The user name and password options are needed only for remotely managed power sources. Include the -s option to have the Fusion Manager skip BMC:
ibrix_powersrc -m [-I IPADDR] [-u USERNAME] [-p PASSWORD] [-s] -h POWERSRCLIST
The following command changes the IP address for power source ps1:
ibrix_powersrc -m -I 192.168.3.153 -h ps1

Disassociate a server from a power source:
You can dissociate a file serving node from a power source by dissociating it from slot 1, its default association, on the power source. Use the following command:
ibrix_hostpower -d
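The mutual monitoring pairs above follow one pattern per pair of nodes. A sketch can generate the command pairs; node names are hypothetical and the function only prints what it would run:

```shell
#!/bin/sh
# Illustrative sketch: print the ibrix_nic monitoring commands for a
# mutually paired set of nodes over bond0 (node names are hypothetical).
pair_monitor() {   # $1 and $2 monitor each other
    echo "ibrix_nic -m -h $1 -A $2/bond0"
    echo "ibrix_nic -m -h $2 -A $1/bond0"
}

pair_monitor node1 node2
pair_monitor node3 node4
```

Generating both directions from one call helps avoid the common mistake of configuring monitoring in only one direction.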
f Failed was displayed for the NIC or HBA Monitored columns, see the sections for ibrix_nic -m -h <host> -A <node_2/node_interface> and ibrix_hba -m -h <host> -p <World_Wide_Name> in the CLI guide for your current release.

Performing the upgrade

The online upgrade is supported only from the StoreAll 6.x releases.

IMPORTANT: Complete all steps provided in Table 1 (page 10).

Complete the following steps:
1. StoreAll OS version 6.3 is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox.
2. Make sure the /local/ibrix folder is empty prior to copying the contents of pkgfull. The upgrade will fail if the /local/ibrix folder contains leftover .rpm packages not listed in the build manifest.
3. Mount the ISO image and copy the entire directory structure to the /local/ibrix directory on the disk running the OS.
   The following is an example of the mount command:
   mount -o loop /local/pkg/ibrix/pkgfull/FS_6.3.72-IAS-6.3.72-x86_64-signed.iso /mnt/<storeall>
   In this example, <storeall> can have any name.
   The following is an example of the copy command:
   cp -R /mnt/<storeall>/* /local/ibrix
4. Change directory to /local/ibrix and then run chmod -R 777 * on the entire directory structure.
5. Run the upgrade script and follow the on-screen directions:
   ./auto_online_ibrixupgrade

Upgrade L
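The ISO staging steps can be previewed as a dry-run sketch. The ISO path is modeled on the guide's example and the mount point name is a hypothetical choice; the function only prints the commands:

```shell
#!/bin/sh
# Illustrative dry-run of the ISO staging steps (print-only).
# ISO path modeled on the guide's example; nothing is mounted or copied.
stage_iso() {
    ISO="/local/pkg/ibrix/pkgfull/FS_6.3.72-IAS-6.3.72-x86_64-signed.iso"
    MNT="/mnt/storeall"                          # <storeall> can be any name
    echo "mount -o loop $ISO $MNT"               # mount the ISO
    echo "cp -R $MNT/* /local/ibrix"             # copy the tree
    echo "chmod -R 777 /local/ibrix"             # open permissions
    echo "cd /local/ibrix && ./auto_online_ibrixupgrade"   # run the upgrade
}

stage_iso
```

Reviewing the printed sequence first is a cheap way to confirm /local/ibrix is the intended, empty destination before copying.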
f you execute ibrix_fs -i -f FSNAME, the output will list No in the ONBACKUP field, indicating that the primary server now owns the segments, even though it does not. In this situation, you will be unable to complete the failback after you fix the storage subsystem problem.

Perform the following manual recovery procedure:
1. Restore the failed storage subsystem.
2. Reboot the primary server, which will allow the arrested failback to complete.

StoreAll client I/O errors following segment migration

Following successful segment migration to a different file serving node, the Fusion Manager sends all StoreAll clients an updated map reflecting the changes, which enables the clients to continue I/O operations. If, however, the network connection between a client and the Fusion Manager is not active, the client cannot receive the updated map, resulting in client I/O errors.
To fix the problem, restore the network connection between the clients and the Fusion Manager.

Windows StoreAll clients

Logged in but getting a "Permission Denied" message
The StoreAll client cannot access the Active Directory server because the domain name was not specified. Reconfigure the Active Directory settings, specifying the domain name. See the HP StoreAll Storage Installation Guide for more information.

Verify button in the Active Directory Settings tab does not work
This issue has the same cause as the above issue.

Mounted drive does not appear in Windows Explore
161. failed    ALERT events  Indicates that an NDMP action has failed  For example     1102  Cannot start the session monitor daemon  ndmpd exiting    7009 Level 6 backup of  mnt shares accountsl1 failed  writing eod header error    8001 Restore Failed to read data stream signature    You can configure the system to send email or SNMP notifications when these types of events  Occur     80 Configuring system backups       8 Creating host groups for StoreAll clients    A host group is a named set of StoreAll clients  Host groups provide a convenient way to centrally  manage clients  You can put different sets of clients into host groups and then perform the following  operations on all members of the group     e Create and delete mount points  e Mount file systems   e Prefer a network interface   e   Tune host parameters   e     Set allocation policies    Host groups are optional  If you do not choose to set them up  you can mount file systems on clients  and tune host settings and allocation policies on an individual level     How host groups work    In the simplest case  the host groups functionality allows you to perform an allowed operation on  all StoreAll clients by executing a command on the default clients host group with the CLI or  the GUI  The clients host group includes all StoreAll clients configured in the cluster        NOTE  The command intention is stored on the Fusion Manager until the next time the clients  contact the Fusion Manager   To force this contact  resta
figuration information as an argument:
/usr/local/ibrix/setup/restore <saved_config.tgz>
Reboot the node.

Completing the upgrade

Complete the following steps:
1. Remount all StoreAll file systems:
   <ibrixhome>/bin/ibrix_mount -f <fsname> -m <mountpoint>
2. Remount all previously mounted StoreAll file systems on Windows StoreAll clients using the Windows client GUI.
3. If automated failover was enabled before the upgrade, turn it back on from the node hosting the active management console:
   <ibrixhome>/bin/ibrix_server -m
   Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, HA should display on.

Upgrading the StoreAll software to the 5.6 release 165

4. From the node hosting the active management console, perform a manual backup of the upgraded configuration:
   <ibrixhome>/bin/ibrix_fm -B
5. Verify that all version indicators match for file serving nodes. Run the following command from the active management console:
   <ibrixhome>/bin/ibrix_version -l
   If there is a version mismatch, run the /ibrix/ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
6. Verify the health of the cluster:
   <ibrixhome>/bin/ibrix_health -l
   The output should show Passed / on.
For an
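The verification commands in the steps above can be grouped into one dry-run checklist. The install root is the <ibrixhome> placeholder used throughout this guide, given an assumed value here; the function only prints the checks:

```shell
#!/bin/sh
# Dry-run sketch of the post-upgrade checks above (print-only).
IBRIXHOME="/usr/local/ibrix"   # assumed value of <ibrixhome>

verify_upgrade() {
    echo "$IBRIXHOME/bin/ibrix_server -l"    # confirm HA displays on
    echo "$IBRIXHOME/bin/ibrix_fm -B"        # back up the configuration
    echo "$IBRIXHOME/bin/ibrix_version -l"   # version indicators must match
    echo "$IBRIXHOME/bin/ibrix_health -l"    # cluster health should pass
}

verify_upgrade
```

Running the checks in this order mirrors the procedure: failover state first, then backup, then version and health verification.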
163. file serving nodes that share a network with the test hosts  Remote servers that  are pingable might not be connected to a test host because of a Linux or StoreAll software  issue  Remote servers that are not pingable might be down or have a network problem     e If test hosts are assigned to be network interface monitors  pings their monitored interfaces to  assess the health of the connection   For information on network interface monitoring  see     Setting network interface options in the configuration database     page 123     e Determines whether specified hosts can read their physical volumes     The ibrix health command runs this health check on both file serving nodes and StoreAll  clients     e Determines whether information maps on the tested hosts are consistent with the configuration  database     If you include the  b option  the command also checks the health of standby servers  if configured      check reports    The summary report provides an overall health check result for all tested file serving nodes and  StoreAll clients  followed by individual results  If you include the  b option  the standby servers for  all tested file serving nodes are included when the overall result is determined  The results will be  one of the following    e Passed  All tested hosts and standby servers passed every health check     e Failed  One or more tested hosts failed a health check  The health status of standby servers is  not included when this result is calculated     e W
file systems changed between releases 6.2.x and 6.3. Each file system with Express Query enabled must be manually upgraded to 6.3. This section has instructions to be run before and after the StoreAll upgrade, on each of those file systems.

Required steps before the StoreAll Upgrade

These steps are required before the StoreAll upgrade:
1. Mount all Express Query file systems on the cluster to be upgraded if they are not mounted yet.
2. Save your custom metadata by entering the following command:
   /usr/local/ibrix/bin/MDExport.pl --dbconfig /usr/local/Metabox/scripts/startup.xml --database <FSNAME> --outputfile /tmp/custAttributes.csv --user ibrix
3. Save your audit log data by entering the following commands:
   ibrix_audit_reports -t <time> -f <FSNAME>
   cp <path to report file printed from previous command> /tmp/auditData.csv
4. Disable auditing by entering the following command:
   ibrix_fs -A -f <FSNAME> -oa audit_mode=off
   In this instance, <FSNAME> is the file system.
5. If any archive API shares exist for the file system, delete them.

Upgrading Windows StoreAll clients 19

NOTE: To list all HTTP shares, enter the following command:
ibrix_httpshare -l
To list only REST API (Object API) shares, enter the following command:
ibrix_httpshare -l -f <FSNAME> -v 1 | grep "objectapi=true" | awk '{print $2}'
In this instance, <FSNAME> is the file system.

• Delete all HTTP shares, regular or
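For one Express Query file system, the pre-upgrade commands above can be previewed per file system with a dry-run sketch. The file system name is a hypothetical placeholder and the <time> argument is left as a placeholder from the procedure; the function only prints the commands:

```shell
#!/bin/sh
# Illustrative dry-run of the Express Query pre-upgrade steps above for
# one file system. FSNAME is hypothetical; commands are only printed.
FSNAME="ifs1"

pre_upgrade_eq() {
    echo "ibrix_audit_reports -t <time> -f $FSNAME"    # save audit log data
    echo "ibrix_fs -A -f $FSNAME -oa audit_mode=off"   # disable auditing
    echo "ibrix_httpshare -l -f $FSNAME -v 1"          # list shares to review
}

pre_upgrade_eq
```

Iterating this per file system helps ensure no Express Query file system is missed before the upgrade begins.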
g the cluster interface" (page 125).

To delete a network interface, use the following command:
ibrix_nic -d -n IFNAME -h HOSTLIST
The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com:
ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com

Viewing network interface information

Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes. Include the -h option to list interfaces on specific hosts:
ibrix_nic -l -h HOSTLIST
The following table describes the fields in the output.

Field         Description
BACKUP_HOST   File serving node for the standby network interface
BACKUP_IF     Standby network interface
HOST          File serving node
IFNAME        Network interface on this file serving node

126 Maintaining the system

Field         Description
IP_ADDRESS    IP address of this NIC
LINKMON       Whether monitoring is on for this NIC
MAC_ADDR      MAC address of this NIC
ROUTE         IP address in routing table used by this NIC
STATE         Network interface state
TYPE          Network type (cluster or user)

When ibrix_nic is used with the -i option, it reports detailed information about the interfaces. Use the -h option to limit the output to specific hosts. Use the -n option to view information for a specific interface:
ibrix_nic -i [-h HOSTLIST] [-n NAME]

Maintaining networks 127

12 Licensing

This chapter describes how to view your current license t
ge 104).

Maintaining the Statistics tool 103

Updating the Statistics tool configuration

When you first configure the Statistics tool, the configuration includes information for all file systems configured on the cluster. If you add a new node or a new file system, or make other additions to the cluster, you must update the Statistics tool configuration. Complete the following steps:
1. If you are adding a new file serving node to the cluster, enable synchronization for the node. See "Enabling collection and synchronization" (page 100) for more information.
2. Add the file system to the Statistics tool. Run the following command on the node hosting the active Fusion Manager:
   /usr/local/ibrix/stats/bin/stmanage loadfm
   The new configuration is updated automatically on the other nodes in the cluster. You do not need to restart the collection process; collection continues automatically.

Changing the Statistics tool configuration

You can change the configuration only on the management node. To change the configuration, add a configuration parameter and its value to the /etc/ibrix/stats.conf file on the currently active node. Do not modify the /etc/ibrix/statstool.conf and /etc/ibrix/statstool.local.conf files directly.

You can set the following parameters to specify the number of reports that are retained:

Parameter           Report type to retain   Default retention period
age.report.hourly   Hourly report           1 day
age.report.daily    Daily report            7 days
age
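As an illustration only, a /etc/ibrix/stats.conf fragment based on the retention parameters above might look like the following. The exact parameter-name separators and value format are assumptions from the extracted table; verify both against the file shipped with your installation before editing:

```
# /etc/ibrix/stats.conf -- hypothetical sketch; parameter names and
# value units are assumed from the retention table above.
age.report.hourly=1    # keep hourly reports for 1 day
age.report.daily=7     # keep daily reports for 7 days
```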
ge file size.

Select Rebalance Mode:
• Rebalance - All: as evenly as possible within all same-tier segments.
• Rebalance - Advanced: manually specify source and destination segments.
• Evacuate - Advanced: manually specify source segments for evacuation, and destination segments.

On the Evacuate Advanced dialog box, locate the segment to be evacuated and click Source. Then locate the segments that will receive the data from the segment and click Destination. If the file system is tiered, be sure to select destination segments on the same tier as the source segment.

118 Maintaining the system

[Screen capture: Segment Rebalance and Evacuation Wizard, Evacuate Advanced step. The table shows each segment's LUN, UUID, tier, size, used space, storage, state, and owner, with Source and Destination check boxes. In this mode, data on selected source segments is evacuated entirely and redistributed to selected destination segments as evenly as possible; to maintain tier hierarchy in a tiered file system, select segments only from within the same tier.]
gement console (Fusion Manager) for your cluster, and then click OK. If your cluster does not exist in the list of choices, click Cancel so that you can provide the IP address of the FM to which this node has to be registered.

[Screen capture: Join Cluster dialog box listing the management consoles of the available clusters.]

Performing the recovery 147

7. If you clicked the Cancel button in the previous dialog box, enter the management console IP of the desired cluster on the Management Console IP dialog box and click OK.

[Screen capture: Management Console IP dialog box with the prompt "Please provide the IP of the FM to which this node has to be registered."]

8. On the Replace Existing Server dialog box, click Yes when you are asked if you want to replace the existing server.

[Screen capture: Replace Existing Server dialog box with the message "Initial registration attempt failed. The Management Console reports that a host using this name is already registered. Replace the existing server with the current system?"]

148 Recovering a file serving node

Completing the restore on a file serving node

Complete the following steps:
1. Ensure that you have root access to the node. The restore process sets the root password to hpinvent, the factory default.
2. Verify information about the node
ger:
register_client -p console_IPAddress -c clusterIF -n ClientName
6. Remount the file system on the client.

Changing the IP address for the cluster interface on a dedicated management console

You must change the IP address for the cluster interface on both the file serving nodes and the management console.
1. If High Availability is enabled, disable it by executing ibrix_server -m -U.
2. Unmount the file system from all file serving nodes, and reboot.
3. On each file serving node, locally change the IP address of the cluster interface.
4. Change the IP address of the cluster interface for each file serving node:
   <installdirectory>/bin/ibrix_nic -c -n IFNAME -h HOSTNAME -I IPADDR
5. Remount the file system.
6. Re-enable High Availability if necessary by executing ibrix_server -m.

Changing the cluster interface

If you restructure your networks, you might need to change the cluster interface. The following rules apply when selecting a new cluster interface:
• The Fusion Manager must be connected to all machines, including standby servers, that use the cluster network interface. Each file serving node and StoreAll client must be connected to

Maintaining networks 125

the Fusion Manager by the same cluster network interface. A Gigabit (or faster) Ethernet port must be used for the cluster interface.
• StoreAll clients must have network connectivity to the file serving nodes that manage their data and to the standbys for those s
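The IP-change sequence above can be previewed per node with a dry-run sketch. The interface name, node names, and addresses are hypothetical; the commands are only printed:

```shell
#!/bin/sh
# Illustrative dry-run of the cluster-interface IP change above.
# Node names and addresses are hypothetical; commands are only printed.
IFNAME="bond0"

change_cluster_ip() {   # $1 = node, $2 = new IP
    echo "ibrix_nic -c -n $IFNAME -h $1 -I $2"
}

echo "ibrix_server -m -U"               # step 1: disable HA first
change_cluster_ip node1 192.168.10.1    # step 4: per-node IP update
change_cluster_ip node2 192.168.10.2
echo "ibrix_server -m"                  # step 6: re-enable HA
```

Printing the full plan first makes it easy to confirm that every file serving node gets an update and that HA is disabled before and re-enabled after.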
gh 8 for each file serving node in the cluster.
After all file serving nodes have been upgraded and failed back, complete the upgrade.

Completing the upgrade
1. From the management console, turn automated failover back on:
   <ibrixhome>/bin/ibrix_server -m
2. Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, HA displays on.
3. Verify that all version indicators match for file serving nodes. Run the following command from the management console:
   <ibrixhome>/bin/ibrix_version -l
   If there is a version mismatch, run the /ibrix/ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support.
4. Propagate a new segment map for the cluster:
   <ibrixhome>/bin/ibrix_dbck -I -f FSNAME
5. Verify the health of the cluster:
   <ibrixhome>/bin/ibrix_health -l
   The output should specify Passed / on.

Standard offline upgrade

This upgrade procedure is appropriate for major upgrades. The management console must be upgraded first. You can then upgrade file serving nodes in any order.

Preparing for the upgrade
1. From the management console, disable automated failover on all file serving nodes:
   <ibrixhome>/bin/ibrix_server -m -U
2. From the management console, verify that automated failover is off. In the ou
171. gt  directory is processed  Once processed  the directory will be renamed  var   crash  lt timestamp gt  PROCESSED  HP Support may request that you send this information to  assist in resolving the system crash     NOTE  HP recommends that you maintain your crash dumps in the  var crash directory  Ibrix  Collect processes the core dumps present in the  var crash directory  linked to  local   platform crash  only  HP also recommends that you monitor this directory and remove  unnecessary processed crashes        Downloading the archive file    When data is collected  a compressed archive file is created and stored in a compressed archive  file   tgz  under  local ibrixcollect archive directory  To download the collected data  to your desktop  select the collection and click Download from the Fusion Manager        NOTE  Only one collection can be downloaded at a time     NOTE  The average size of the archive file depends on the size of the logs present on individual  nodes in the cluster     NOTE  You may later be asked to email this final   tgz file to HP Support        Deleting the archive file    You can delete a specific data collection or all collections simultaneously in the GUI and the CLI     To delete a specific data collection using the GUI  select the collection to be deleted  and click  Delete  The tgz file stored locally will be deleted from each node     To delete all of the collections  click Delete All   To delete a specific data collection using the CLI  use th
172. have a maximum of 79 bytes  For example  AdmineMyDomain com     In the SNMP Configuration section  set the options     Notification Level  Select the minimum severity for which the system should send notifications   Critical  only   Error  and Critical   Warning  and Error and Critical   Informational  all   The  default is none  which disables SNMP notification     Read Community  The SNMP read password for your network  This password is also included  in traps that are sent  The value is case sensitive  can include letters  numbers  hyphens  and  underscores  and can have a maximum of 31 bytes  The default is public     Write Community  The SNMP write password for your network  The value is case sensitive   can include letters  numbers  hyphens  and underscores  and can have a maximum of 31  bytes  The default is private     Trap Host Address fields  The IP addresses of up to three host systems that are configured to  receive SNMP traps     See the MSA array documentation for additional information  For HP P2000 G3 MSA systems   see the HP P2000 G3 MSA System SMU Reference Guide  For P2000 G2 MSA systems  see the  HP 2000 G2 Modular Smart Array Reference Guide  To locate these documents  go to http     www hp com support manuals  On the Manuals page  select storage  gt Disk Storage Systems  gt   P2000 MSA Disk Arrays  gt HP 2000sa G2 Modular Smart Array or HP P2000 G3 MSA Array  Systems     Configuring cluster event notification       7 Configuring system backups    Backing
173. he  storage segments used for the file system  Each server is responsible for managing the segments  it owns     When the cluster is expanded  the StoreAll software attempts to maintain proper load balancing  and utilization in the following ways     e When servers are added  ownership of the existing segments is redistributed among the  available servers     e When storage is added  ownership of the new segments is distributed among the available  servers     114 Maintaining the system    Occasionally you may need to manage the segments manually     e Migrate segments  This operation transfers ownership of segments to other servers  For example   if a server is overloaded or unavailable  you can transfer its segments to another server that  can see the same storage     e  Rebalance segments  This operation redistributes files across segments and can be used if  certain segments are filling up and affecting file system performance  See    Maintaining file  systems    in the for more information     e Evacuate segments  This operation moves the data in a segment to another segment  It is  typically used before removing storage from the cluster     Migrating segments    Segment migration transfers segment ownership but it does not move segments from their physical  locations in the storage system  Segment ownership is recorded on the physical segment itself  and  the ownership data is part of the metadata that the Fusion Manager distributes to file serving nodes  and StoreAll c
he Navigator tree.
The Servers panel lists the servers included in each chassis.
2. Select the server you want to obtain more information about.
Information about the servers in the chassis is displayed in the right pane.

[Screen capture: the Servers panel, showing for each server its status, name, bay, chassis type, state, CPU, OS, network and disk throughput (MB/s), backup server, and HA setting.]

To view summary information for the selected server, select the Summary node in the lower Navigator tree.

84 Monitoring cluster operations

[Screen capture: the Summary panel for a server, listing name, state, group, standby, auto-failover setting, module, ID, uptime, last update, admin IP, file system and IAD versions, protocol, OS, kernel version, architecture, processor, CPU utilization, load averages, memory and swap totals, buffers, cached memory, network and disk throughput (MB/s), and the number of admin and server threads.]

Select the server component that you want to view from the lower Navigator panel, such as
he installer directory if necessary and execute the following command:
./ibrixupgrade -f

Upgrading the StoreAll software to the 5.5 release 177

The upgrade automatically stops services and restarts them when the process is complete.
When the upgrade is complete, verify that the StoreAll software services are running on the node:
/etc/init.d/ibrix_server status
The output should be similar to the following example. If the IAD service is not running on your system, contact HP Support.
IBRIX Filesystem Drivers loaded
ibrcud is running.. pid 23325
IBRIX IAD Server (pid 23368) running
Execute the following commands to verify that the ibrix and ipfs services are running:
lsmod | grep ibrix
ibrix 2323332 0 (unused)
lsmod | grep ipfs
ipfs1 102592 0 (unused)
If either grep command returns empty, contact HP Support.
From the active management console node, verify that the new version of StoreAll software (FS/IAS) is installed on the file serving nodes:
<ibrixhome>/bin/ibrix_version -l -S

Completing the upgrade
1. Remount the StoreAll file systems:
   <ibrixhome>/bin/ibrix_mount -f <fsname> -m <mountpoint>
2. From the node hosting the active management console, turn automated failover back on:
   <ibrixhome>/bin/ibrix_server -m
3. Confirm that automated failover is enabled:
   <ibrixhome>/bin/ibrix_server -l
   In the output, HA should display on.
4. From the node hosting the active management console, perform
176. ia changer devices  80  network interfaces  add routing table entries  126  bonded and virtual interfaces  122  defined  122  delete  126  delete routing table entries  126  guidelines  48  viewing  126  Network Storage System  configuration  27  management interfaces  28  NFS  exporting  154  NIC failover  49  no subtree check  154    NTP servers  35    P  passwords  change  GUI password  33  Phone Home  37  ports  open  34  power sources  server  62  pre 6 3 Express Query  upgrade  19    Q  QuickRestoreDVD  144    R    rack stability  warning  152  recycling notices  202  regulatory compliance  Canadian notice  197  European Union notice  197  identification numbers  196  Japanese notices  198  Korean notices  198  laser  199  recycling notices  202  Taiwanese notices  199  related documentation  151  rolling reboot  109  routing table entries  add  126  delete  126    S    segments  evacuate from cluster  118  migrate  115  servers  configure standby  49  crash capture  68  failover  55  tune  110  SNMP event notification  72  SNMP MIB  74  spare parts  obtaining information  152  Statistics tool  100  enable collection and synchronization  100  failover  104  Historical Reports GUI  101  install  100  log files  106  maintain configuration  104  processes  105  reports  102  space requirements  103  troubleshooting  105  uninstall  106  upgrade  101  Storage  software   25  storage  remove from cluster  118    StoreAll clients  add to host group  82  change IP address  12
ibrix_hostgroup -m -g finance -h cl01.hp.com

Adding a domain rule to a host group

82

To configure automatic host group assignments, define a domain rule for host groups. A domain rule restricts host group membership to clients on a particular cluster subnet. The Fusion Manager uses the IP address that you specify for clients when you register them to perform a subnet match and sorts the clients into host groups based on the domain rules.

Setting domain rules on host groups provides a convenient way to centrally manage mounting, tuning, allocation policies, and preferred networks on different subnets of clients. A domain rule is a subnet IP address that corresponds to a client network. Adding a domain rule to a host group restricts its members to StoreAll clients that are on the specified subnet. You can add a domain rule at any time.

To add a domain rule to a host group, use the ibrix_hostgroup command as follows:

ibrix_hostgroup -a -g GROUPNAME -D DOMAIN

For example, to add the domain rule 192.168 to the finance group:

ibrix_hostgroup -a -g finance -D 192.168

Creating host groups for StoreAll clients

Viewing host groups

To view all host groups or a specific host group, use the following command:

ibrix_hostgroup -l [-g GROUP]

Deleting host groups

When you delete a host group, its members are reassigned to the parent of the deleted group.

To force the reassigned StoreAll clients to implement the mounts, tunings, network interface p
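The subnet match that a domain rule implies can be illustrated with a small sketch. This is not the Fusion Manager's actual implementation (that matching is internal); it only mirrors the idea that a rule such as 192.168 admits clients whose registered IP falls under that prefix. The client IPs are hypothetical.

```shell
# Sketch: does CLIENT_IP fall under the subnet prefix given by RULE?
matches_rule() {  # usage: matches_rule CLIENT_IP RULE
  case "$1" in
    "$2".*) return 0 ;;   # IP begins with the rule prefix
    *)      return 1 ;;
  esac
}

matches_rule 192.168.4.7  192.168 && echo "admitted to finance"
matches_rule 10.2.59.126  192.168 || echo "not on the finance subnet"
```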
<ibrixhome>/bin/ibrix_server -l

In the output, the HA column should display Off.

3. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.

4. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

5. Change to the installer directory if necessary and run the upgrade:

ibrixupgrade -f

6. Verify that the management console is operational:

/etc/init.d/ibrix_fusionmanager status

The status command should report that the correct services are running. The output is similar to this:

Fusion Manager Daemon (pid 18748) running...

7. Check /usr/local/ibrix/log/fusionserver.log for errors.

Upgrading file serving nodes

After the management console has been upgraded, complete the following steps on each file serving node:

1. From the management console, manually fail over the file serving node:

<ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME

The node reboots automatically.

2. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the p
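Steps 3 and 4 above (archive the old installer directory, then expand the new tarball) can be sketched safely. A temp directory stands in for /root, and mkdir stands in for the tar extraction, so the sketch runs anywhere without touching a real installer.

```shell
# Safe-to-run sketch of the installer-directory shuffle from steps 3-4.
base=$(mktemp -d)                 # stand-in for /root
mkdir -p "$base/ibrix"            # previous release's installer directory

mv "$base/ibrix" "$base/ibrix.old"   # step 3: preserve the old installer
mkdir -p "$base/ibrix"               # step 4 stand-in: tar -xzf <tarball> -C "$base"

ls "$base"
```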
iews, defined by subsets of the OID subtree and associated bitmask entries, which define what a particular user can access in the MIB.

Steps for setting up SNMP include:

• Agent configuration (all SNMP versions)
• Trapsink configuration (all SNMP versions)
• Associating event notifications with trapsinks (all SNMP versions)
• View definition (V3 only)
• Group and user configuration (V3 only)

StoreAll software implements an SNMP agent that supports the private StoreAll software MIB. The agent can be polled and can send SNMP traps to configured trapsinks.

Setting up SNMP notifications is similar to setting up email notifications. You must associate events with trapsinks and configure SNMP settings for each trapsink to enable the agent to send a trap when an event occurs.

NOTE: When Phone Home is enabled, you cannot edit or change the configuration of the StoreAll SNMP agent with ibrix_snmpagent. However, you can add trapsink IPs with ibrix_snmptrap and can associate events with the trapsink IP with ibrix_event.

Configuring the SNMP agent

The SNMP agent is created automatically when the Fusion Manager is installed. It is initially configured as an SNMPv2 agent and is off by default.

72 Configuring cluster event notification

Some SNMP parameters and the SNMP default port are the same, regardless of SNMP version. The default agent port is 161. SYSCONTACT, SYSNAME, and SYSLOCATION are optional MIB-II agent parameters that have
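Once the agent is enabled, you can poll it on the default port (161) with the standard net-snmp tools. The sketch below only assembles and prints such a query — it assumes net-snmp is installed, and the hostname and community string are examples, so nothing here is executed against a live agent.

```shell
# Sketch: build a polling command against the StoreAll SNMP agent's
# default port. Host and community are hypothetical examples.
host=ss01
community=public
port=161    # default agent port, per the text above

cmd="snmpget -v2c -c $community $host:$port SNMPv2-MIB::sysName.0"
echo "$cmd"   # printed, not executed; run it only against a live agent
```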
ig -l [-h HOSTLIST] [-f] [-b]

For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com:

ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com

Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED
xs02.hp.com  FAILED            PASSED         FAILED          FAILED         FAILED          WARNED        WARNED

Viewing a detailed report

Execute the ibrix_haconfig -i command to view the detailed report:

ibrix_haconfig -i [-h HOSTLIST] [-b] [-s] [-v]

The -h HOSTLIST option lists the nodes to check. To also check standbys, include the -b option. To view results only for file serving nodes that failed a check, include the -f argument. The -s option expands the report to include information about the file system and its segments. The -v option produces detailed information about configuration checks that received a Passed result.

For example, to view a detailed report for file serving node xs01.hp.com:

ibrix_haconfig -i -h xs01.hp.com

--- Overall HA Configuration Checker Results ---
FAILED

--- Overall Host Results ---
Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED

--- Server xs01.hp.com FAILED Report ---
Check Description  Result  Result Information
Pow
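When auditing many nodes, it can help to pull just the failing hosts out of the summary report. The sketch below does that over a hard-coded sample in the report's format; on a cluster you would pipe the real ibrix_haconfig -l output in instead.

```shell
# Sample ibrix_haconfig -l style summary (first column: host,
# second column: overall HA Configuration result).
report='Host         HA_Configuration  Power_Sources
xs01.hp.com  FAILED            PASSED
xs02.hp.com  PASSED            FAILED'

# List hosts whose overall check FAILED (skip the header row).
failed=$(echo "$report" | awk 'NR > 1 && $2 == "FAILED" { print $1 }')
echo "$failed"
```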
ight the Server Availability option in the main menu, and then press the Enter key. Highlight the ASR Timeout option and then press the Enter key. Select 30 Minutes, and then press the Enter key.

To exit RBSU, press Esc until the main menu is displayed. Then, at the main menu, press F10. The server automatically restarts.

Setting up nodes for crash capture

IMPORTANT: Complete the steps in "Prerequisites for setting up the crash capture" (page 68) before starting the steps in this section.

To set up nodes for crash capture, complete the following steps:

1. Enable crash capture. Run the following command:

ibrix_host_tune -S {-h HOSTLIST | -g GROUPLIST} -o trigger_crash_on_failover=1

2. Tune Fusion Manager to set the DUMPING status timeout by entering the following command:

ibrix_fm_tune -S -o dumpingStatusTimeout=240

This command is required to delay the failover until the crash kernel is loaded; otherwise, Fusion Manager will bring down the failed node.

Capturing a core dump from a failed node 69

6 Configuring cluster event notification

Cluster events

There are three categories for cluster events:

Alerts: Disruptive events that can result in loss of access to file system data.
Warnings: Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Information: Normal events that change the cluster
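The two crash-capture tuning commands above can be wrapped in a small dry-run script. With DRY_RUN=1 (the default here) each command is only printed, so the sketch runs anywhere; the HOSTLIST value is a hypothetical example, and the flag spellings follow the text above.

```shell
# Dry-run wrapper: print the command unless DRY_RUN is cleared on a real node.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

HOSTLIST="ss01,ss02"   # hypothetical hosts

run ibrix_host_tune -S -h "$HOSTLIST" -o trigger_crash_on_failover=1
run ibrix_fm_tune -S -o dumpingStatusTimeout=240   # keep node up until crash kernel loads
```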
in this document must be followed. Settings or procedures other than those described in this document or in the laser device's installation guide may result in the emission of hazardous radiation. To prevent the release of hazardous radiation, observe the following points:

• Do not attempt to open the laser module cover. There are no user-serviceable components inside.
• Operate the laser device only according to the instructions and notes in this document.
• Have the device repaired only by an HP service partner.

200 Regulatory compliance notices

Italian laser notice

WARNING: This device may contain a laser classified as a Class 1 laser product in accordance with US FDA regulations and IEC 60825-1. This product does not emit hazardous laser radiation.

Performing commands, adjustments, or procedures other than those specified in this documentation or in the product installation guide may cause exposure to hazardous radiation. To reduce the risk of exposure to hazardous radiation:

• Do not try to open the module enclosure. There are no user-serviceable components inside.
• Do not perform control, adjustment, or other opera
ing down the StoreAll software 107
… 108
… 108
Starting the system 108

Contents 5

Powering file serving nodes on or off 109
Performing a rolling reboot 109
Starting and stopping processes 110
Tuning file serving nodes and StoreAll clients 110
Managing segments 114
Migrating segments 115
Evacuating segments and removing storage from the cluster 118
Removing a node from the cluster 120
Maintaining networks 122
Cluster and user network interfaces 122
Adding user network interfaces 122
Setting network interface options in the configuration database 123
Preferring network interfaces 123
… 125
Making network changes 125
Changing the IP address for a Linux StoreAll client 125
Changing the IP address for the cluster interface on a dedicated management console 125
Changing the cluster interface 125
Managing routing table entries 126
Deleting a network interface
ing steps only if you see the "Version mismatch, upgrade needed" message in the command's output.

1. Disable auditing by entering the following command:

ibrix_fs -A -f <FSNAME> -oa audit_mode=off

In this instance, <FSNAME> is the file system.

2. Disable Express Query by entering the following command:

ibrix_fs -T -D -f <FSNAME>

In this instance, <FSNAME> is the file system.

3. Delete the internal database files for this file system by entering the following command:

rm -rf <FS_MOUNTPOINT>/archiving/database

In this instance, <FS_MOUNTPOINT> is the file system mount point.

4. Clear the MIF condition by running the following command:

ibrix_archiving -C <FSNAME>

In this instance, <FSNAME> is the file system.

5. Re-enable Express Query on the file system:

ibrix_fs -T -E -f <FSNAME>

In this instance, <FSNAME> is the file system.

Express Query will begin resynchronizing (repopulating) a new database for this file system.

6. Re-enable auditing if you had it running before (the default):

ibrix_fs -A -f <FSNAME> -oa audit_mode=on

In this instance, <FSNAME> is the file system.

7. Restore your audit log data:

MDImport -f <FSNAME> -n /tmp/auditData.csv -t audit

In this instance, <FSNAME> is the file system.

8. Restore your custom metadata:

MDImport -f <FSNAME> -n /tmp/custAttributes.csv -t custom

In this instance, <FSNAME> is the file system.

Upgrading the StoreAll software
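The recovery sequence above can be collected into one dry-run script for review before running it. Every command is printed rather than executed, the file system name and mount point are hypothetical placeholders, and the flag spellings are reconstructed from the steps above — read the full procedure before running any of it for real.

```shell
# Dry-run sketch of the MIF recovery sequence (nothing is executed).
mif_recover() {
  FSNAME=myfs                 # hypothetical file system name
  FS_MOUNTPOINT=/myfs         # hypothetical mount point

  echo ibrix_fs -A -f "$FSNAME" -oa audit_mode=off        # 1. disable auditing
  echo ibrix_fs -T -D -f "$FSNAME"                        # 2. disable Express Query
  echo rm -rf "$FS_MOUNTPOINT/archiving/database"         # 3. delete internal DB
  echo ibrix_archiving -C "$FSNAME"                       # 4. clear MIF condition
  echo ibrix_fs -T -E -f "$FSNAME"                        # 5. re-enable Express Query
  echo ibrix_fs -A -f "$FSNAME" -oa audit_mode=on         # 6. re-enable auditing
  echo MDImport -f "$FSNAME" -n /tmp/auditData.csv -t audit       # 7. restore audit log
  echo MDImport -f "$FSNAME" -n /tmp/custAttributes.csv -t custom # 8. restore metadata
}
mif_recover
```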
ing the failing server to be centrally powered down by the Fusion Manager in the case of automated failover, and manually in the case of a forced manual failover.

StoreAll software works with iLO, IPMI, OpenIPMI, and OpenIPMI2 integrated power sources. The following configuration steps are required when setting up integrated power sources:

• For automated failover, ensure that the Fusion Manager has LAN access to the power sources.
• Install the environment and any drivers and utilities, as specified by the vendor documentation. If you plan to protect access to the power sources, set up the UID and password to be used.

Use the following command to identify a power source:

ibrix_powersrc -a -t {ipmi|openipmi|openipmi2|ilo} -h HOSTNAME -I IPADDR [-u USERNAME -p PASSWORD]

For example, to identify an iLO power source at IP address 192.168.3.170 for node ss01:

ibrix_powersrc -a -t ilo -h ss01 -I 192.168.3.170 -u Administrator -p password

3. Configure NIC monitoring.

NIC monitoring should be configured on user VIFs that will be used by NFS, SMB, FTP, or HTTP.

Configuring failover

IMPORTANT: When configuring NIC monitoring, use the same backup pairs that you used when configuring backup servers.

Identify the servers in a backup pair as NIC monitors for each other. Because the monitoring must be declared in both directions, enter a separate command for each server in the pair:

ibrix_nic -m -h MONHOST -A DESTHOST/IFNAME

The following
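Because the monitoring must be declared in both directions, it is easy to emit both commands for a backup pair from one helper. This sketch only prints the two ibrix_nic invocations; the hostnames and interface name are hypothetical, and the DESTHOST/IFNAME form follows the syntax shown above.

```shell
# Sketch: print the two symmetric NIC-monitoring commands for a backup pair.
nic_monitor_pair() {  # usage: nic_monitor_pair HOST1 HOST2 IFNAME
  echo "ibrix_nic -m -h $1 -A $2/$3"
  echo "ibrix_nic -m -h $2 -A $1/$3"
}

nic_monitor_pair ss01 ss02 eth2   # hypothetical pair and interface
```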
you must take the unused equipment to a collection point for electrical and electronic waste. For more information, ask your household waste collection service.

Latvian recycling notice

Disposal of used equipment by users in private households in the European Union

This symbol indicates that the device must not be disposed of together with other household waste. You are responsible for protecting human health and the environment by taking the used equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For more information, please contact your household waste disposal service.

Polish recycling notice

Disposal of used equipment by users in private households in the European Union

This symbol means that the product must not be discarded together with other household waste. It is the user's responsibility to protect human health and the environment by delivering the used equipment to a designated collection point for the recycling of waste electrical and electronic equipment. More information is available from your local waste removal company.

Portuguese recycling notice

Disposal of used equipment by domestic users in the European Union

This symbol indicates that you must not discard your product together with the o
inux StoreAll clients. See "Upgrading Linux StoreAll clients" (page 18).

If you received a new license from HP, install it as described in "Licensing" (page 128).

After the upgrade

Complete these steps:

1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requested later in this procedure.

2. Upgrade your firmware as described in "Upgrading firmware" (page 129).

3. Start any remote replication, rebalancer, or data tiering tasks that were stopped before the upgrade.

If you have a file system version prior to version 6, you might have to make changes for snapshots and data retention, as mentioned in the following list:

• Snapshots: Files used for snapshots must either be created on StoreAll software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

Online upgrades for StoreAll software 13

• Data retention: Files used for data retention, including WORM and auto-commit, must be created on StoreAll software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and
ion instructions noted in the URL. This issue does not affect G7 servers.

http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=1146658&swItem=MTX-949698a14e114478b9fe126499&prodNameId=1135772&swEnvOID=4103&swLang=8&taskId=135&mode=3

A change in the inode format impacts files used for:

• Snapshots: Files used for snapshots must either be created on StoreAll software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

• Data retention: Files used for data retention, including WORM and auto-commit, must be created on StoreAll software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command. Additional steps are required before and after you run the ibrix_reten_adm -u -f FSNAME command. For more information, see "Upgrading pre-6.0 file systems for software snapshots" (page 159).

Offline upgrades for StoreAll software 5.6.x or 6.0.x to 6.1

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:

1. Ensure that all nodes are up and running. To determine the status of your cluster nodes
ion listed in the table, and select the hyperlink in the far left column.
To download the installation instructions, select the Installation Instructions tab.
To download the firmware, click Download.

Downloading MSA2000 G2/G3 firmware for 9320 systems 133

14 Troubleshooting

Collecting information for HP Support with Ibrix Collect

Ibrix Collect is a log collection utility that allows you to collect relevant information for diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI, or automatically during a system crash. Ibrix Collect gathers the following information:

• Specific operating system and StoreAll command results and logs
• Crash digester results
• Summary of collected logs, including error/exception/failure messages
• Collection of information from LHN and MSA storage connected to the cluster

NOTE: When the cluster is upgraded from a StoreAll software version earlier than 6.0, the support tickets collected using the ibrix_supportticket command will be deleted. Before performing the upgrade, download a copy of the archive files (*.tgz) from the /admin/platform/diag/supporttickets directory.

Collecting logs

To collect logs and command results using the GUI:

1. Select Cluster Configuration, and then select Data Collection.
2. Click Collect.
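The pre-upgrade advice in the NOTE above — copy the support-ticket archives aside before upgrading — can be sketched as a simple copy. Temp directories stand in here for the real supporttickets directory and your backup location, so the sketch is safe to run anywhere.

```shell
# Sketch: preserve *.tgz support-ticket archives before an upgrade.
src=$(mktemp -d)       # stand-in for the supporttickets directory
backup=$(mktemp -d)    # stand-in for your backup location

touch "$src/ticket-001.tgz" "$src/ticket-002.tgz"   # stand-in archives

cp "$src"/*.tgz "$backup"/
ls "$backup" | wc -l
```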
irectory structure to the local ibrix directory (on the disk running the OS).

The following is an example of the mount command:

mount -o loop /local/pkg/ibrix/pkgfull/FS_6.3.72-IAS_6.3.72-x86_64-signed.iso /mnt/<storeall>

In this example, <storeall> can have any name.

The following is an example of the copy command:

cp -R /mnt/storeall /local/ibrix

Change directory to /local/ibrix on the disk running the OS and then run chmod -R 777 on the entire directory structure.

Run the following upgrade script:

./auto_ibrixupgrade

The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes. The Fusion Manager is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the Fusion Manager is installed in passive mode on that server.

Upgrading the StoreAll software to the 6.3 release

6. Upgrade Linux StoreAll clients. See "Upgrading Linux StoreAll clients" (page 18).

7. If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade

Complete the following steps:

1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requ
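The copy-and-chmod portion of the procedure above can be rehearsed safely. In this sketch, temp directories stand in for the ISO mount point and /local/ibrix, so it runs anywhere without a real ISO.

```shell
# Safe sketch of the copy and chmod steps (no ISO or loop mount needed).
mnt=$(mktemp -d)                       # stand-in for /mnt/<storeall>
mkdir -p "$mnt/storeall"
touch "$mnt/storeall/auto_ibrixupgrade"

dest=$(mktemp -d)/ibrix                # stand-in for /local/ibrix
cp -R "$mnt/storeall" "$dest"          # copy the directory structure
chmod -R 777 "$dest"                   # per the procedure above

ls "$dest"
```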
• Select the minimum severity for which the system should send notifications: Critical (only); Error (and Critical); Warning (and Error and Critical); Informational (all). The default is none, which disables email notification.

• SMTP Server address: The IP address of the SMTP mail server to use for the email messages. If the mail server is not on the local network, make sure that the gateway IP address was set in the network configuration step.

• Sender Name: The sender name that is joined with an @ symbol to the domain name to form the "from" address for remote notification. This name provides a way to identify the system that is sending the notification. The sender name can have a maximum of 31 bytes. Because this name is used as part of an email address, do not include spaces. For example: Storage-1. If no sender name is set, a default name is created.

Event notification for MSA array systems 75

76

• Sender Domain: The domain name that is joined with an @ symbol to the sender name to form the "from" address for remote notification. The domain name can have a maximum of 31 bytes. Because this name is used as part of an email address, do not include spaces. For example: MyDomain.com. If the domain name is not valid, some email servers will not process the mail.

• Email Address fields: Up to four email addresses that the system should send notifications to. Email addresses must use the format user-name@domain-name. Each email address can
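The Sender Name and Sender Domain fields combine into the notification's "from" address as described above. A one-line sketch, using the example values from the text:

```shell
# Sketch: sender name + "@" + domain = the notification "from" address.
sender="Storage-1"         # example sender name from the text (no spaces)
domain="MyDomain.com"      # example domain from the text (no spaces)

from="${sender}@${domain}"
echo "$from"
```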
lable under the Hourly, Daily, and Weekly categories. See "Generating reports" (page 102).

104 Using the Statistics tool

NOTE: If the old active Fusion Manager is not available (pingable) for more than two days, the historical statistics database is not transferred to the current active Fusion Manager.

• If configurable parameters were set before the failover, the parameters are retained after the failover.

Check /usr/local/ibrix/log/statstool/stats.log for any errors.

NOTE: The reports generated before failover will not be available on the current active Fusion Manager.

Checking the status of Statistics tool processes

To determine the status of Statistics tool processes, run the following command:

/etc/init.d/ibrix_statsmanager status
ibrix_statsmanager (pid 25322) is running...

In the output, the pid is the process id of the "master" process.

Controlling Statistics tool processes

Statistics tool processes on all file serving nodes connected to the active Fusion Manager can be controlled remotely from the active Fusion Manager. Use the ibrix_statscontrol tool to start or stop the processes on all connected file serving nodes or on specified hostnames only.

• Stop processes on all file serving nodes, including the Fusion Manager:

/usr/local/ibrix/stats/bin/ibrix_statscontrol stopall

• Start processes on all file serving nodes, including the Fusion Manager:

/usr/local/ibrix/stats/bin/ibrix_stats
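If you script around the status check, the master pid can be extracted from the status line shown above. The sample line is hard-coded here; on a node you would capture the real output of the status command instead.

```shell
# Sketch: pull the master-process pid out of an ibrix_statsmanager
# status line (sample line from the text above).
status_line="ibrix_statsmanager (pid 25322) is running..."

pid=$(echo "$status_line" | sed -n 's/.*(pid \([0-9]*\)).*/\1/p')
echo "$pid"
```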
le SELinux, edit its configuration file (/etc/selinux/config) and set the SELINUX parameter to either permissive or disabled. SELinux will be stopped at the next boot.

140 Troubleshooting

For StoreAll clients, the client might not be registered with the Fusion Manager. For information on registering clients, see the HP StoreAll Storage Installation Guide.

Failover

Cannot fail back from failover caused by storage subsystem failure

When a storage subsystem fails and automated failover is turned on, the Fusion Manager initiates its failover protocol. It updates the configuration database to record that segment ownership has transferred from primary servers to their standbys and then attempts to migrate the segments to the standbys. However, the segments cannot migrate because neither the primary servers nor the standbys can access the storage subsystem, and the failover stops.

Perform the following manual recovery procedure:

1. Restore the failed storage subsystem (for example, replace failed Fibre Channel switches or replace a LUN that was removed from the storage array).

2. Reboot the standby servers, which will allow the failover to complete.

Cannot fail back because of a storage subsystem failure

This issue is similar to the previous issue. If a storage subsystem fails after you have initiated a failback, the configuration database will record that the failback occurred, even though segments never migrated back to the primary server. I
le show the default users in these groups:

ibrix-admin:x:501:root,ibrix
ibrix-user:x:502:ibrix,ibrixUser,ibrixuser

You can add other users to these groups as needed, using Linux procedures. For example:

adduser -G ibrix-<groupname> <username>

When using the adduser command, be sure to include the -G option.

Using the CLI

The administrative commands described in this guide must be executed on the Fusion Manager host and require root privileges. The commands are located in <IBRIXHOME>/bin. For complete information about the commands, see the HP StoreAll Network Storage System CLI Reference Guide.

When using ssh to access the machine hosting the Fusion Manager, specify the IP address of the Fusion Manager user VIF.

Starting the array management software

32

Depending on the array type, you can launch the array management software from the GUI. In the Navigator, select Vendor Storage, select your array from the Vendor Storage page, and click Launch Storage Management.

Getting started

StoreAll client interfaces

StoreAll clients can access the Fusion Manager as follows:

• Linux clients: Use Linux client commands for tasks such as mounting or unmounting file systems and displaying statistics. See the HP StoreAll Storage CLI Reference Guide for details about these commands.

• Windows clients: Use the Windows client GUI for tasks such as mounting or unmounting file systems and registering Windows clients.

Using the
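To check who is in the ibrix groups, you can read the group records directly. This sketch parses /etc/group-style records hard-coded from the text above (the exact group-name spelling is reconstructed from the OCR); on a real host you would read /etc/group itself.

```shell
# Sample /etc/group-style records for the default ibrix groups.
group_db='ibrix-admin:x:501:root,ibrix
ibrix-user:x:502:ibrix,ibrixUser,ibrixuser'

# Print the member list (field 4) for a named group.
members() { echo "$group_db" | awk -F: -v g="$1" '$1 == g { print $4 }'; }

members ibrix-admin
members ibrix-user
```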
le system does not unmount successfully, perform the following steps on all servers:

1. Run the following commands:

chkconfig ibrix_server off
chkconfig ibrix_ndmp off
chkconfig ibrix_fusionmanager off

2. Reboot all servers.

3. Run the following commands to move the services back to the on state. The commands do not start the services.

chkconfig ibrix_server on
chkconfig ibrix_ndmp on
chkconfig ibrix_fusionmanager on

4. Run the following commands to start the services:

service ibrix_fusionmanager start
service ibrix_server start

5. Unmount the file systems and continue with the upgrade procedure.

File system in MIF state after StoreAll software 6.3 upgrade

If an Express Query enabled file system ended in the MIF state after completing the StoreAll software upgrade process (ibrix_archiving -l prints <FSNAME>: MIF), check the MIF status by running the following command:

cat <FSNAME>/archiving/database/serialization/ManualInterventionFailure

If the command's output displays "Version mismatch, upgrade needed", as shown in the following output, steps were not performed as described in "Required steps after the StoreAll Upgrade" (page 20):

MIF: Version mismatch, upgrade needed (error code 14)

If you did not see "Version mismatch, upgrade needed" in the command's output, see "Troubleshooting an Express Query Manual Intervention Failure (MIF)" (page 142).

Troubleshooting upgrade issues 23

24

Perform the follow
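The chkconfig steps above touch the same three services twice (off, then on after the reboot), so a small loop avoids typing them out. The commands are only printed here so the sketch is safe to run anywhere.

```shell
# Sketch: print the chkconfig commands for the three StoreAll services.
services="ibrix_server ibrix_ndmp ibrix_fusionmanager"
chk() { for s in $services; do echo "chkconfig $s $1"; done; }

chk off     # step 1: disable the services before the reboot
# ...reboot all servers (step 2)...
chk on      # step 3: re-enable the services (does not start them)
```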
lean up the mount points, reboot the management console, and run the upgrade procedure again.

Upgrading the StoreAll software to the 5.5 release 179

B Component diagrams for 9300 systems

Front view of file serving node

Item  Description
1     Quick release levers (2)
2     HP Systems Insight Manager display
3     Hard drive bays
4     SATA optical drive bay
5     Video connector
6     USB connectors (2)

Rear view of file serving node

Item  Description
1     PCI slot 5
2     PCI slot 6
3     PCI slot 4
4     PCI slot 2
5     PCI slot 3
6     PCI slot 1
7     Power supply 2 (PS2)
8     Power supply 1 (PS1)
9     USB connectors (2)
10    Video connector

180 Component diagrams for 9300 systems

Item  Description
11    NIC 1 connector
12    NIC 2 connector
13    Mouse connector
14    Keyboard connector
15    Serial connector
16    iLO 2 connector
17    NIC 3 connector
18    NIC 4 connector

Rear view of file serving node

181

Server     PCIe card                            PCI slot
SATA 1Gb   HP SC08Ge 3Gb SAS Host Bus Adapter   1
           NC364T Quad 1Gb NIC                  2
           …                                    3
           empty                                4
           empty                                5
           empty                                6
SATA 10Gb  HP SC08Ge 3Gb SAS Host Bus Adapter   1
           empty                                2
           …                                    3
           NC522SFP dual 10Gb NIC               4
           empty                                5
           empty                                6
SAS 1Gb    HP SC08Ge 3Gb SAS Host Bus Adapter   1
           NC364T Quad 1Gb NIC                  2
           …                                    3
           HP SC0
lients so that they can locate segments.

To migrate segments on the GUI, select the file system on the Filesystems panel, select Segments from the lower Navigator, and then click Ownership Migration on the Segments panel to open the Segment Ownership Migration Wizard.

Segment Ownership Migration Wizard

Welcome

This wizard allows you to migrate segment ownership between servers in your filesystems to optimize load balancing and utilization by all available servers.

Segment migration transfers segment ownership, but it does not move segments from their physical locations in networked storage systems.

X9000 Software already attempts to maintain proper load balancing and utilization in two ways:

1. When servers are added to the cluster, the ownership of existing segments is distributed between available servers.

2. When storage is added, ownership of the new segments is distributed among available servers.

However, the purpose of this wizard is to allow you to manually migrate segments in order to account for other server workload factors.

The Change Ownership dialog box reports the status of the servers in the cluster and lists the segments owned by each server. In the Segment Properties section of the dialog box, select the segment whose ownership you are transferring, and click Change Owner.

Managing segments 115

Change Ownership
Danish recycling notice

Disposal of used equipment by users in private households in the EU

This symbol means that the product must not be disposed of together with other household waste. Instead, you must protect human health and the environment by delivering your used equipment to a designated collection point for the recycling of waste electrical and electronic equipment. Contact your nearest waste management department for further information.

Dutch recycling notice

Collection of discarded equipment from private households in the European Union

This symbol means that the product must not be deposited with other household waste. Protect health and the environment by handing in discarded equipment at a designated collection point for the recycling of waste electrical and electronic equipment. For more information, contact your municipal sanitation service.

202 Regulatory compliance notices

Estonian recycling notice

Disposal of discarded equipment in private households in the European Union

This mark indicates that the device must not be disposed of with household waste. To protect human health and the environment, the discarded product must be taken to a collection point that handles electrical and electronic equipment. If you have questions, contact your local waste management company.

Finnish recycling notice

Disposal of household waste in the European Union

This symbol means that the device must not be disposed of with other household
lity on all cluster nodes:

ibrix_server -m -U

Move all passive Fusion Manager instances into nofmfailover mode:

ibrix_fm -m nofmfailover -A

Stop the SMB, NFS, and NDMP services on all nodes. Run the following commands:

ibrix_server -s -t cifs -c stop
ibrix_server -s -t nfs -c stop
ibrix_server -s -t ndmp -c stop

If you are using SMB, verify that all likewise services are down on all file serving nodes:

ps -ef | grep likewise

Use kill -9 to stop any likewise services that are still running.

If you are using NFS, verify that all NFS processes are stopped:

ps -ef | grep nfs

If processes are running, use the following commands on the affected node:

pdsh -a service nfslock stop | dshbak
pdsh -a service nfs stop | dshbak

If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:

lsof <mountpoint>

Use kill -9 to stop any processes that still have open file handles on the file systems.

List the file systems mounted on the cluster:

ibrix_fs -l

Unmount all file systems from StoreAll clients:

• On Linux StoreAll clients, run the following command to unmount each file system:

ibrix_lwumount -f <fs_name>

• On Windows StoreAll clients, stop all applications accessing the file systems, and then use the client GUI to unmount the file systems (for example, DRIVE). Next, go to Services, and stop the fusion service.

Shutting down the system 107

7. Unmount all file s
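The three protocol-stop commands above share one pattern; a minimal dry-run sketch that only builds and prints each `ibrix_server` invocation (the command exists only on StoreAll nodes, so nothing here touches a live cluster):

```shell
# Dry run of the protocol-stop step above: build each ibrix_server
# invocation instead of executing it.
cmds=""
for svc in cifs nfs ndmp; do
  cmds="$cmds ibrix_server -s -t $svc -c stop;"
done
echo "$cmds"
```

Piping the built commands through `pdsh -a` (as shown above for the NFS services) would run them cluster-wide.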
ll path for the -n and -A options. Rejoin the likewise database to the Active Directory domain:

/opt/likewise/bin/domainjoin-cli join <domain_name> Administrator

Push the original share information from the management console database to the restored node. On the node hosting the active management console, first create a temporary SMB share:

ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH

NOTE: You cannot create an SMB share with a name containing an exclamation point (!), a number sign (#), or both.

Then delete the temporary SMB share:

Completing the restore on a file serving node 149

ibrix_cifs -d -s SHARENAME

4. Run the following command to verify that the original share information is on the restored node:

ibrix_cifs -i -h SERVERNAME

Restore HTTP services. Complete the following steps:

1. Take the appropriate actions:

• If Active Directory authentication is used, join the restored node to the AD domain manually.
• If Local user authentication is used, create a temporary local user on the GUI and apply the settings to all servers. This step resynchronizes the local user database. Then remove the temporary user.

2. Run the following command:

ibrix_httpconfig -R -h HOSTNAME

3. Verify that HTTP services have been restored. Use the GUI or CLI to identify a share served by the restored node and then browse to the share.

All Vhosts and HTTP shares should now be restored on the node.

Restore FTP services. Co
llect the following information:

• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions

Related information

Using the 9320 Storage

• HP StoreAll Storage File System User Guide
• HP StoreAll Storage CLI Reference
• HP StoreAll Storage Release Notes

To find these documents, go to the StoreAll Manuals page: http://www.hp.com/support/StoreAllManuals

Using and maintaining file serving nodes

• HP ProLiant DL380 G7 Server User Guide
• HP ProLiant DL380 G7 Server Maintenance and Service Guide
• HP ProLiant DL380 G6 Server User Guide
• HP ProLiant DL380 G6 Server Maintenance and Service Guide

To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select servers > ProLiant ml/dl and tc series servers > HP ProLiant DL380 G7 Server series or HP ProLiant DL380 G6 Server series.

Using and maintaining the optional dedicated Management Server

• HP ProLiant DL360 G7 Server User Guide
• HP ProLiant DL360 G7 Server Maintenance and Service Guide
• HP ProLiant DL360 G6 Server User Guide
• HP ProLiant DL360 G6 Server Maintenance and Service Guide

To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select servers > ProLiant ml/dl and tc series servers > HP ProLiant DL360 G7 Server series or HP ProLiant DL360 G6 Server
local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the StoreAll software before you execute the upgrade script.

If the install of the new OS fails, power cycle the node and try rebooting. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the GRUB boot menu.

After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.

If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information.

To retry the copy of configuration, use the following command:

/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s

If the install of the new image succeeds but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old version (the alternate partition):

/usr/local/ibrix/setup/boot_info -r

If the public network interface is down and inaccessible for any node, power cycle that node.

NOTE: Each node stores its ibrixupgrade.log file in /tmp.

Upgrading the StoreAll software to the 6.1 release 161

Manual upgrade

Check the following:

If the restore script fails, check /usr/local/ibrix/setup/logs/restore
lousjätteiden mukana. Sen sijaan sinun on suojattava ihmisten terveyttä ja ympäristöä toimittamalla käytöstä poistettu laite sähkö- tai elektroniikkajätteen kierrätyspisteeseen. Lisätietoja saat jätehuoltoyhtiöltä.

French recycling notice

Mise au rebut d'équipement par les utilisateurs privés dans l'Union Européenne

Ce symbole indique que vous ne devez pas jeter votre produit avec les ordures ménagères. Il est de votre responsabilité de protéger la santé et l'environnement et de vous débarrasser de votre équipement en le remettant à une déchetterie effectuant le recyclage des équipements électriques et électroniques. Pour de plus amples informations, prenez contact avec votre service d'élimination des ordures ménagères.

German recycling notice

Entsorgung von Altgeräten von Benutzern in privaten Haushalten in der EU

Dieses Symbol besagt, dass dieses Produkt nicht mit dem Hausmüll entsorgt werden darf. Zum Schutze der Gesundheit und der Umwelt sollten Sie stattdessen Ihre Altgeräte zur Entsorgung einer dafür vorgesehenen Recyclingstelle für elektrische und elektronische Geräte übergeben. Weitere Informationen erhalten Sie von Ihrem Entsorgungsunternehmen für Hausmüll.

Greek recycling notice

Απόρριψη άχρηστου εξοπλισμού από ιδιώτες χρήστες στην Ευρωπαϊκή Ένωση

Αυτό το σύμβολο σημαίνει ότι δεν πρέπει να απορρίψετε το προϊόν με τα λοιπά οικιακά απορρίμματα. Αντίθετα, πρέπει να προστατεύσετε την
ls for devices that are no longer in the cluster. On the GUI, select Cluster Configuration in the upper Navigator, select Phone Home in the lower Navigator, and click Rescan on the Phone Home Setup panel.

Configuring HP Insight Remote Support on StoreAll systems 45

On the CLI, run the following command:

ibrix_phonehome -s

Disabling Phone Home

When Phone Home is disabled, all Phone Home information is removed from the cluster, and hardware and software are no longer monitored. To disable Phone Home on the GUI, click Disable on the Phone Home Setup panel. On the CLI, run the following command:

ibrix_phonehome -d

Troubleshooting Insight Remote Support

Devices are not discovered on HP SIM

Verify that cluster networks and devices can access the CMS. Devices will not be discovered properly if they cannot access the CMS.

The maximum number of SNMP trap hosts has already been configured

If this error is reported when you configure Phone Home, the maximum number of trapsink IP addresses has already been configured. For MSA devices, the maximum number of trapsink IP addresses is 3. Manually remove a trapsink IP address from the device and then rerun the Phone Home configuration to allow Phone Home to add the CMS IP address as a trapsink IP address.

A cluster node was not configured in Phone Home

If a cluster node was down during the Phone Home configuration, the log file will include the following message:

SEVERE: Sent event server sta
lth check reports, 96
help
  obtaining, 151
High Availability
  agile Fusion Manager, 53
  automated failover, turn on or off, 63
  check configuration, 66
  configure automated failover manually, 62
  detailed configuration report, 67
  fail back a node, 64
  failover protection, 26
  HBA monitor, 64
  manual failover, 64
  NIC HA, 54
  power management for nodes, 109
  power sources, 62
  server HA, 54
  summary configuration report, 67
  troubleshooting, 141
host groups, 81
  add domain rule, 82
  add StoreAll client, 82
  create host group tree, 82
  delete, 83
  prefer a user network interface, 124
  view, 83
HP
  technical support, 151
HP Insight Remote Support, 35
  Phone Home, 37

212 Index

  troubleshooting, 46

I
Ibrix Collect, 134
  add-on scripts, 137
  configure, 136
ibrix_reten_adm -u command, 161
IP address
  change for cluster interface, 125
  change for StoreAll client, 125

L
labels, symbols on equipment, 193
laser compliance notices, 199
link state monitoring, 51
Linux StoreAll clients
  upgrade, 18, 158
loading rack, warning, 193
localization, 28
log files, 98
  collect for HP Support, 134

M
manpages, 33
monitoring
  chassis and components, 88
  cluster events, 94
  cluster health, 95
  file serving nodes, 93
  node statistics, 98
  servers, 84, 88
  storage and components, 92

N
NDMP backups, 77
  cancel sessions, 79
  configure NDMP parameters, 78
  rescan for new devices, 80
  start or stop NDMP Server, 79
  view events, 80
  view sessions, 79
  view tape and med
mésticos de la Unión Europea

Este símbolo indica que este producto no debe eliminarse con los residuos domésticos. En lugar de ello, debe evitar causar daños a la salud de las personas y al medio ambiente llevando los equipos que no utilice a un punto de recogida designado para el reciclaje de equipos eléctricos y electrónicos que ya no se utilizan. Para obtener más información, póngase en contacto con el servicio de recogida de residuos domésticos.

Swedish recycling notice

Hantering av elektroniskt avfall för hemanvändare inom EU

Den här symbolen innebär att du inte ska kasta din produkt i hushållsavfallet. Värna i stället om natur och miljö genom att lämna in uttjänt utrustning på anvisad insamlingsplats. Allt elektriskt och elektroniskt avfall går sedan vidare till återvinning. Kontakta ditt återvinningsföretag för mer information.

Recycling notices 205

Battery replacement notices

Dutch battery notice

Verklaring betreffende de batterij

WAARSCHUWING: dit apparaat bevat mogelijk een batterij.

• Probeer de batterijen na het verwijderen niet op te laden.
• Stel de batterijen niet bloot aan water of temperaturen boven 60 °C.
• De batterijen mogen niet worden beschadigd, gedemonteerd, geplet of doorboord.
• Zorg dat u geen kortsluiting veroorzaakt tussen de externe contactpunten en laat de batterijen niet in aanraking komen met water of vuur.
• Gebruik ter vervanging alleen door HP goedgeke
module (uuid 09USE038187W0AModule2) status MISSING. Message: The Onboard Administrator module is missing or has failed. Diagnostic message: Reseat the Onboard Administrator module. If reseating the module does not resolve the issue, replace the Onboard Administrator module. eventId: 000D0004, location: OA module in chassis S/N USE123456W, level: ALERT, FILESYSTEM: , HOST: 1x24 03 ad hp com, USER NAME: , OPERATION: , SEGMENT NUMBER: , PV NUMBER: , NIC: , HBA: , RELATED EVENT: 0

The ibrix_event -l and -i commands can include options that act as filters to return records associated with a specific file system, server, alert level, and start or end time. See the HP StoreAll Network Storage System CLI Reference Guide for more information.

Removing events from the events database table

Use the ibrix_event -p command to remove events from the events table, starting with the oldest events. The default is to remove the oldest seven days of events. To change the number of days, include the -o DAYS_COUNT option:

ibrix_event -p [-o DAYS_COUNT]

Monitoring cluster health

To monitor the functional health of file serving nodes and StoreAll clients, execute the ibrix_health command. This command checks host performance in several functional areas and provides either a summary or a detailed report of the results.

Monitoring cluster health 95

Health checks

The ibrix_health command runs these health checks on file serving nodes:

• Pings remote
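The pruning command above takes an optional day count; a hypothetical wrapper (the helper name `prune_events_cmd` is illustrative) that validates DAYS_COUNT and prints the resulting `ibrix_event` invocation rather than running it:

```shell
# Hypothetical wrapper around the event-pruning command described above:
# validates DAYS_COUNT and prints the ibrix_event command line.
# The default of 7 days matches the documented default.
prune_events_cmd() {
  days="${1:-7}"
  case "$days" in
    ''|*[!0-9]*) echo "DAYS_COUNT must be a positive integer" >&2; return 1 ;;
  esac
  echo "ibrix_event -p -o $days"
}

prune_events_cmd 30
```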
most of its commands. To view the manpages, set the MANPATH variable to include the path to the manpages and then export it. The manpages are in the $IBRIXHOME/man directory. For example, if $IBRIXHOME is /usr/local/ibrix (the default), set the MANPATH variable as follows and then export the variable:

MANPATH=$MANPATH:/usr/local/ibrix/man

Changing passwords

You can change the following passwords on your system:

• Hardware passwords. See the documentation for the specific hardware for more information.
• Root password. Use the passwd(8) command on each server.
• StoreAll software user password. This password is created during installation and is used to log in to the GUI. The default is ibrix. You can change the password using the Linux passwd command:

# passwd ibrix

You will be prompted to enter the new password.

StoreAll software manpages 33

Configuring ports for a firewall

IMPORTANT: To avoid unintended consequences, HP recommends that you configure the firewall during scheduled maintenance times.

When configuring a firewall, you should be aware of the following:

• SELinux should be disabled.
• By default, NFS uses random port numbers for operations such as mounting and locking. These ports must be fixed so that they can be listed as exceptions in a firewall configuration file. For example, you will need to lock specific ports for rpc.statd, rpc.lockd, rpc.mountd, and rpc.rquotad.
• It is best to allow all IC
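The MANPATH step above can be sketched as a two-line shell fragment, assuming the default install prefix /usr/local/ibrix:

```shell
# Make the StoreAll manpages visible to man(1): append the StoreAll
# man directory to MANPATH and export it (default install prefix assumed).
MANPATH="$MANPATH:/usr/local/ibrix/man"
export MANPATH
```

Adding the lines to a login script (for example, ~/.bash_profile) makes the setting persistent across sessions.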
mp custAttributes.csv -t custom. In this instance, <FSNAME> is the file system.

Troubleshooting upgrade issues

If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.

Automatic upgrade

Check the following:

If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the StoreAll software before you execute the upgrade script.

If the install of the new OS fails, power cycle the node and try rebooting. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the GRUB boot menu.

After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.

If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information.

To retry the copy of configuration, use the following command:

/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s

Troubleshooting upgrade issues 21

• If the install of the new image succeeds but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old ve
mplete the following steps:

1. Take the appropriate actions:

• If Active Directory authentication is used, join the restored node to the AD domain manually.
• If Local user authentication is used, create a temporary local user on the GUI and apply the settings to all servers. This step resynchronizes the local user database. Then remove the temporary user.

2. Run the following command:

ibrix_ftpconfig -R -h HOSTNAME

3. Verify that FTP services have been restored. Use the GUI or CLI to identify a share served by the restored node and then browse to the share.

All Vhosts and FTP shares should now be restored on the node.

The ibrix_auth command fails after a restore

If the ibrix_auth command fails after a QR restore of a server in a cluster with a message similar to the following:

ibrix_auth -n IBRQA1.HP.COM -A administrator@ibrqa1.hp.com -P password -h hostnameX
Iad error on host hostnameX failed command <HIDDEN COMMAND> status 1 output: Joining to AD Domain: IBRQA1.HP.COM With Computer DNS Name: hostnameX.ibrqa1.hp.com

verify that the content of the /etc/resolv.conf file is not empty. If the content is empty, copy the contents of the /etc/resolv.conf file on another server to the empty resolv.conf file.

150 Recovering a file serving node

16 Support and other resources

Contacting HP

For worldwide technical support information, see the HP support website: http://www.hp.com/support

Before contacting HP, co
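The resolver-file check described above can be scripted; a small sketch in which the helper name `check_resolv` is illustrative and the path defaults to the standard Linux resolver file:

```shell
# Illustrative helper for the check above: warn when a resolver file is
# empty or missing, so the AD join will have working DNS.
check_resolv() {
  file="${1:-/etc/resolv.conf}"
  if [ ! -s "$file" ]; then
    echo "$file is empty; copy the contents from a healthy node"
    return 1
  fi
}
```

Running `check_resolv` on each restored node before retrying `ibrix_auth` catches the empty-file case early.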
n. This is a persistent change. If the server is hosting the active FM, it transitions to another server.

3. If NIC monitoring is configured, the Fusion Manager activates the standby NIC and transfers the IP address (or VIF) to it.

Clients that were mounted on the failed-over server may experience a short service interruption while server failover takes place. Depending on the protocol in use, clients can continue operations after the failover or may need to remount the file system using the same VIF. In either case, clients will be unaware that they are now accessing the file system on a different server.

To determine the progress of a failover, view the Status tab on the GUI or execute the ibrix_server -l command. While the Fusion Manager is migrating segment ownership, the operational status of the node is Up,InFailover or Down,InFailover, depending on whether the node was powered up or down when failover was initiated. When failover is complete, the operational status changes to Up,FailedOver or Down,FailedOver. For more information about operational states, see "Monitoring the status of file serving nodes" (page 93).

Both automated and manual failovers trigger an event that is reported on the GUI.

Automated failover can be configured with the HA Wizard or from the command line.

Configuring automated failover with the HA Wizard

The HA wizard configures a backup server pair and, optionally, standby NICs on each server in the pair. It al
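The state names above can be checked mechanically; a hypothetical helper (the function name is illustrative) that classifies one line of `ibrix_server -l` output, where any InFailover state means segment migration is still running:

```shell
# Hypothetical helper: classify one line of `ibrix_server -l` output
# using the operational states described above.
failover_in_progress() {
  case "$1" in
    *InFailover*) return 0 ;;  # Up,InFailover or Down,InFailover
    *)            return 1 ;;  # Up, Up,FailedOver, Down,FailedOver, ...
  esac
}
```

A monitoring loop could call this on each output line and wait until no node reports an InFailover state.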
n file system

The -n option lists needed conversions but does not attempt them. The -v option provides more information.

160 Cascading Upgrades

Upgrading pre-6.1.1 file systems for data retention features

Data retention was automatically enabled for file systems created with StoreAll 6.1.1 or later. If you want to enable data retention for file systems created with StoreAll 6.0 or earlier, run the ibrix_reten_adm -u command, as described in this section.

To enable data retention:

1. If you have a pre-6.0 file system, run the upgrade60.sh utility, as described earlier (page 159).
2. Run the following command on a node that has the file system mounted:

ibrix_reten_adm -u -f FSNAME

In this instance, FSNAME is the name of the file system you want to upgrade for data retention features.

The command enables data retention and unmounts the file system on the node.

3. After the command finishes upgrading the file system, re-mount the file system.
4. Enter the ibrix_fs command to set the file system's data retention and autocommit period to the desired values. See the HP StoreAll Storage CLI Reference Guide for additional information about the ibrix_fs command.

Troubleshooting upgrade issues

If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.

Automatic upgrade

Check the following:

If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr
n, the output will report: FusionServer: fusion manager name not set (active, quorum is not configured).

When a Fusion Manager is installed, it is registered in the Fusion Manager configuration. To view a list of all registered management consoles, use the following command:

ibrix_fm -l

Agile Fusion Manager and failover

Using an agile Fusion Manager configuration provides high availability for Fusion Manager services. If the active Fusion Manager fails, the cluster virtual interface will go down. When the passive Fusion Manager detects that the cluster virtual interface is down, it becomes the active console.

Agile management consoles 53

This Fusion Manager rebuilds the cluster virtual interface, starts Fusion Manager services locally, transitions into active mode, and takes over Fusion Manager operation.

Failover of the active Fusion Manager affects the following features:

• User networks. The virtual interface used by clients will also fail over. Users may notice a brief reconnect while the newly active Fusion Manager takes over management of the virtual interface.
• GUI. You must reconnect to the Fusion Manager VIF after the failover.

Failing over the Fusion Manager manually

To fail over the active Fusion Manager manually, place the console into nofmfailover mode. Enter the following command on the node hosting the console:

ibrix_fm -m nofmfailover

The failover will take approximately one minute.

Run to see which node
n 6.3, manually complete each file system upgrade as described in "Required steps after the StoreAll Upgrade" (page 20).

Upgrading Linux StoreAll clients

Be sure to upgrade the cluster nodes before upgrading Linux StoreAll clients. Complete the following steps on each client:

1. Download the latest HP StoreAll client 6.3 package.
2. Expand the tar file.
3. Run the upgrade script:

ibrixupgrade -tc -f

The upgrade software automatically stops the necessary services and restarts them when the upgrade is complete.

4. Execute the following command to verify the client is running StoreAll software:

/etc/init.d/ibrix_client status
IBRIX Filesystem Drivers loaded
IBRIX IAD Server (pid 3208) running...

The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support.

Installing a minor kernel update on Linux clients

The StoreAll client software is upgraded automatically when you install a compatible Linux minor kernel update.

If you are planning to install a minor kernel update, first run the following command to verify that the update is compatible with the StoreAll client software:

/usr/local/ibrix/bin/verify_client_update <kernel_update_version>

The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp:

# /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp
Kernel update 2.6.9-89.35.1.ELsmp is compatible

If the minor kernel update is compa
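verify_client_update is the authoritative check; purely as a generic illustration, a newer-or-equal comparison of two kernel version strings can be done with GNU sort -V (the version numbers below are simplified examples, not StoreAll requirements):

```shell
# Illustration only: compare a running kernel with a proposed minor update
# using GNU sort -V. This does NOT replace verify_client_update, which
# also checks compatibility with the StoreAll client drivers.
current="2.6.9-89"          # e.g. derived from uname -r
proposed="2.6.9-89.35.1"    # the candidate minor update
newest=$(printf '%s\n%s\n' "$current" "$proposed" | sort -V | tail -n1)
[ "$newest" = "$proposed" ] && echo "proposed kernel is newer or equal"
```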
naged Systems" in the HP Insight Remote Support Advanced A.05.50 Operations Guide.

Configuring HP Insight Remote Support on StoreAll systems 43

Enter the following custom field settings in HP SIM:

• Custom field settings for 9300/9320

Servers are discovered with their IP addresses. When a server is discovered, edit the system properties on the HP Systems Insight Manager. Locate the Entitlement Information section of the Contract and Warranty Information page and update the following:

◦ Enter the StoreAll enclosure product number as the Customer-Entered product number.
◦ Enter 9000 as the Custom Delivery ID.
◦ Select the System Country Code.
◦ Enter the appropriate Customer Contact and Site Information details.

• Custom field settings for MSA Storage Management Utility

Configure SNMP settings on the MSA Storage Management Utility. For more information, see "Configuring SNMP event notification in SMU" in the 2300 Modular Smart Array Reference Guide. This document is available at http://www.hp.com/support/manuals. On the Manuals page, select storage > Disk Storage Systems > MSA Disk Arrays > HP 2000sa G2 Modular Smart Array or HP P2000 G3 MSA Array Systems. Refer to the HP StorageWorks 2xxx Modular Smart Array Reference Guide for other MSA versions.

A Modular Storage Array (MSA) unit should be discovered with its IP address. Once discovered, locate the Entitlement Information section of the Contract and Warranty Information
nd file system have an OK status, the MIF status can be cleared, since the Express Query service recovers and restarts automatically.

In some very rare cases, a database corruption might occur as a result of these external events or from some internal dysfunction. Express Query contains a recovery mechanism that tries to rebuild the database from information Express Query keeps specifically for that critical situation. Express Query might be unable to recover from internal database corruption. Even though it is unlikely, it is possible, and it might occur in the following two cases:

• A corrupted database has to be rebuilt from data that has already been backed up. If the data needed has been backed up, there is no automated way for Express Query to recover, since that information has been deleted from the StoreAll file system after the backup. It is, however, possible to replay the database logs from the backup.
• Some data needed to rebuild the database is corrupted and therefore cannot be used.

142 Troubleshooting

Even though database files, as well as the information used in database recovery, are well protected against corruption, corruption might still occur.

NOTE: When a file system is in the MIF state, Express Query event recording still occurs. When the database is re-enabled, the recorded events are processed and the database is synchronized with the file system again.

To recover from an Express Query Manu
nfigured to provide time synchronization with an external time source. The list of NTP servers is stored in the Fusion Manager configuration. The active Fusion Manager node synchronizes its time with the external source. The other file serving nodes synchronize their time with the active Fusion Manager node. In the absence of an external time source, the local hardware clock on the agile Fusion Manager node is used as the time source. This configuration method ensures that the time is synchronized on all cluster nodes, even in the absence of an external time source.

On StoreAll clients, the time is not synchronized with the cluster nodes. You will need to configure NTP servers on StoreAll clients.

List the currently configured NTP servers:

ibrix_clusterconfig -i -N

Specify a new list of NTP servers:

ibrix_clusterconfig -c -N SERVER1[,...,SERVERn]

Configuring HP Insight Remote Support on StoreAll systems

IMPORTANT: In the StoreAll software 6.1 release, the default port for the StoreAll SNMP agent changed from 5061 to 161. This port number cannot be changed.

NOTE: Configuring Phone Home enables the hp-snmp-agents service internally. As a result, a large number of error messages, such as the following, could occasionally appear in /var/log/hp-snmp-agents/cma.log:

Feb 08 13:05:54 x946s1 cmahostd[25579]: cmahostd: Can't update OS filesys object (ifsl, PEER3023)

The cmahostd daemon is part of the hp-snmp-agents service. This erro
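The -N option above takes a comma-separated server list; a hypothetical helper that joins host names into that form and prints the resulting command line (the server names are placeholders, and nothing is executed):

```shell
# Hypothetical helper: join NTP server names into the comma-separated
# list expected by `ibrix_clusterconfig -c -N` (shown above); the final
# command line is printed here rather than executed.
servers="ntp1.example.com ntp2.example.com"
list=$(printf '%s' "$servers" | tr ' ' ',')
echo "ibrix_clusterconfig -c -N $list"
```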
nitoring the status of file serving nodes

The dashboard on the GUI displays information about the operational status of file serving nodes, including CPU, I/O, and network performance information.

To view this information from the CLI, use the ibrix_server -l command, as shown in the following sample output:

ibrix_server -l
SERVER_NAME  STATE        CPU  NET_IO(MB/s)  DISK_IO(MB/s)  BACKUP  HA
node1        Up,HBAsDown  0    0.00          0.00                   off
node2        Up,HBAsDown  0    0.00          0.00                   off

Monitoring the status of file serving nodes 93

File serving nodes can be in one of three operational states: Normal, Alert, or Error. These states are further broken down into categories describing the failover status of the node and the status of monitored NICs and HBAs.

Normal:
  Up: Operational.

Alert:
  Up,Alert: Server has encountered a condition that has been logged. An event will appear in the Status tab of the GUI, and an email notification may be sent.
  Up,InFailover: Server is powered on and visible to the Fusion Manager, and the Fusion Manager is failing over the server's segments to a standby server.
  Up,FailedOver: Server is powered on and visible to the Fusion Manager, and failover is complete.

Error:
  Down,InFailover: Server is powered down or inaccessible to the Fusion Manager, and the Fusion Manager is failing over the server's segments to a standby server.
  Down,FailedOver: Server is powered down or inaccessible to the Fusion Manager, and failo
no default values.

NOTE: The default SNMP agent port was changed from 5061 to 161 in the StoreAll 6.1 release. This port number cannot be changed.

The -c and -s options are also common to all SNMP versions. The -c option turns the encryption of community names and passwords on or off. There is no encryption by default. Using the -s option toggles the agent on and off; it turns the agent on by starting a listener on the SNMP port, and turns it off by shutting off the listener. The default is off.

The format for a v1 or v2 update command follows:

ibrix_snmpagent -u -v {1|2} [-p PORT] [-r READCOMMUNITY] [-w WRITECOMMUNITY] [-t SYSCONTACT] [-n SYSNAME] [-o SYSLOCATION] [-c {yes|no}] [-s {on|off}]

The update command for SNMPv1 and v2 uses optional community names. By convention, the default READCOMMUNITY name used for read-only access and assigned to the agent is public. No default WRITECOMMUNITY name is set for read-write access (although the name private is often used).

The following command updates a v2 agent with the write community name private, the agent's system name, and that system's physical location:

ibrix_snmpagent -u -v 2 -w private -n agenthost.domain.com -o DevLab-B3-U6

The SNMPv3 format adds an optional engine id that overrides the default value of the agent's host name. The format also provides the -y and -z options, which determine whether a v3 agent can process v1/v2 read and write requests from the management station
node hosting the active Fusion Manager:

# shutdown -t now now

Broadcast message from root (pts/4) (Mon Mar 12 17:10:13 2012):
The system is going down to maintenance mode NOW!

When the command finishes, the server is powered off (standby).

Powering off the hardware

Power off the file serving nodes in any order. This step completely shuts down the cluster.

Starting the system

To start the system, first power on the file serving nodes, and then start the StoreAll software.

Starting the StoreAll software

To start the StoreAll software, complete the following steps:

108 Maintaining the system

1. Power on the node hosting the active Fusion Manager.
2. Power on the file serving nodes (root segment, segment 1; power on the owner first, if possible).
3. Monitor the nodes on the GUI and wait for them all to report UP in the output from the following command:

ibrix_server -l

4. Mount file systems and verify their content. Run the following command on the file serving node hosting the active Fusion Manager:

ibrix_mount -f fs_name -m <mountpoint>

On Linux StoreAll clients, run the following command:

ibrix_lwmount -f fsname -m <mountpoint>

5. Enable HA on the file serving nodes. Run the following command on the file serving node hosting the active Fusion Manager:

ibrix_server -m

6. From the active Fusion Manager, enter the following command to move all Fusion Managers into passive mode:

ibrix_fm -m passive -A

The Store
notification process.

70 Configuring cluster event notification

Associating events and email addresses

You can associate any combination of cluster events with email addresses: all Alert, Warning, or Info events; all events of one type plus a subset of another type; or a subset of all types.

The notification threshold for Alert events is 90% of capacity. Threshold-triggered notifications are sent when a monitored system resource exceeds the threshold and are reset when the resource utilization dips 10% below the threshold. For example, a notification is sent the first time usage reaches 90% or more. The next notice is sent only if the usage declines to 80% or less (the event is reset) and subsequently rises again to 90% or above.

To associate all types of events with recipients, omit the -e argument in the following command:

ibrix_event -c [-e ALERT|WARN|INFO|EVENTLIST] -m EMAILLIST

Use the ALERT, WARN, and INFO keywords to make specific type associations, or use EVENTLIST to associate specific events.

The following command associates all types of events to admin@hp.com:

ibrix_event -c -m admin@hp.com

The next command associates all Alert events and two Info events to admin@hp.com:

ibrix_event -c -e ALERT,server_registered,filesystem_space_full -m admin@hp.com

Configuring email notification settings

To configure email notification settings, specify the SMTP server and header information and turn the notification process on or off:

ibri
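The threshold behavior described above is a simple hysteresis loop: fire once at the trigger level, then stay quiet until usage falls back to the reset level. The following Python sketch makes that trigger/reset arithmetic concrete; the class and method names are illustrative only, not part of the StoreAll software.

```python
# Sketch of the Alert-event notification hysteresis described above.
# Assumption: the 90% trigger and 10-point reset mirror the documented
# behavior; ThresholdNotifier is an illustrative name, not a StoreAll API.

class ThresholdNotifier:
    """Send one notice when usage crosses the trigger; re-arm only after
    usage falls to the reset level (trigger minus 10 points)."""

    def __init__(self, trigger=90.0, reset_delta=10.0):
        self.trigger = trigger
        self.reset = trigger - reset_delta
        self.armed = True          # ready to send the first notice

    def observe(self, usage_pct):
        """Return True if a notification should be sent for this sample."""
        if self.armed and usage_pct >= self.trigger:
            self.armed = False     # suppress repeats until the event resets
            return True
        if not self.armed and usage_pct <= self.reset:
            self.armed = True      # usage dipped to 80% or less; re-arm
        return False

n = ThresholdNotifier()
sent = [n.observe(u) for u in (85, 91, 95, 85, 80, 92)]
# 91 fires; 95 and 85 are suppressed; 80 re-arms; 92 fires again.
```

Note that a sample of 85 after the first notice does not re-arm the notifier: only dipping to the reset level (80% here) does, which is exactly why a busy file system does not generate a notification storm.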
installed. A virtual interface is used for the cluster network interface. One or more user network interfaces may also have been created, depending on your site's requirements. You can add user network interfaces as necessary.

Adding user network interfaces

Although the cluster network can carry traffic between file serving nodes and either NFS/SMB/HTTP/FTP or StoreAll clients, you may want to create user network interfaces to carry this traffic. If your cluster must accommodate a mix of NFS/SMB/FTP/HTTP clients and StoreAll clients, or if you need to segregate client traffic to different networks, you will need one or more user networks. In general, it is better to assign a user network for protocol (NFS/SMB/HTTP/FTP) traffic because the cluster network cannot host the virtual interfaces (VIFs) required for failover. HP recommends that you use a Gigabit Ethernet port (or faster) for user networks.

When creating user network interfaces for file serving nodes, keep in mind that nodes needing to communicate for file system coverage or for failover must be on the same network interface. Also, nodes set up as a failover pair must be connected to the same network interface.

For a highly available cluster, HP recommends that you put protocol traffic on a user network and then set up automated failover for it (see "Configuring High Availability on the cluster" (page 54)). This method prevents interruptions to the traffic. If the cluster interface is use
Disposal of waste equipment by users in private households in the European Union

This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For more information, please contact your household waste disposal service.

Bulgarian recycling notice

Disposal of waste equipment by users in private households in the European Union: This symbol on the product or its packaging indicates that the product must not be disposed of with other household waste. Instead, protect human health and the environment by handing over the waste equipment to a designated collection point for the recycling of unusable electrical and electronic equipment. For more information, contact the company that provides your waste collection service.

Czech recycling notice

Disposal of equipment in households in the European Union: This symbol means that you must not dispose of this product with other household waste. Instead, you should protect human health and the environment by handing it over to a designated collection facility engaged in the recycling of electrical and electronic equipment. For more information, contact the company that handles the collection and removal of household waste.

Danish recycling notice
controllers, providing redundancy by design. Setting up StoreAll HBA monitoring is not commonly used for these servers. However, if a server has only a single HBA, you might want to monitor the HBA; then, if the server cannot see its storage because the single HBA goes offline or faults, the server and its segments will fail over.

You can set up automatic server failover and perform a manual failover if needed. If a server fails over, you must fail back the server manually.

When automatic HA is enabled, the Fusion Manager listens for heartbeat messages that the servers broadcast at one-minute intervals. The Fusion Manager initiates a server failover when it fails to receive five consecutive heartbeats. Failover conditions are detected more quickly when NIC HA is also enabled: server failover is initiated when the Fusion Manager receives a heartbeat message indicating that a monitored NIC might be down and the Fusion Manager cannot reach that NIC. If HBA monitoring is enabled, the Fusion Manager fails over the server when a heartbeat message indicates that a monitored HBA or pair of HBAs has failed.

54 Configuring failover

What happens during a failover

The following actions occur when a server is failed over to its backup:

1. The Fusion Manager verifies that the backup server is powered on and accessible.
2. The Fusion Manager migrates ownership of the server's segments to the backup and notifies all servers and StoreAll clients about the migration.
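The heartbeat rule above (one-minute broadcasts, failover after five consecutive missed heartbeats) can be sketched as a timeout check. The constants and function below are illustrative, not StoreAll internals; they only demonstrate the documented arithmetic.

```python
# Sketch of the automated-failover rule described above: servers broadcast
# a heartbeat every minute, and failover is initiated after five consecutive
# heartbeats are missed. Names are illustrative, not StoreAll internals.

HEARTBEAT_INTERVAL = 60      # seconds between heartbeat broadcasts
MISSED_LIMIT = 5             # consecutive misses that trigger failover

def should_fail_over(last_heartbeat, now):
    """True once five heartbeat intervals have elapsed with no message."""
    return (now - last_heartbeat) >= MISSED_LIMIT * HEARTBEAT_INTERVAL
```

Under this sketch, four minutes of silence is still tolerated, while five minutes of silence (five missed one-minute heartbeats) would start a failover.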
continues during the upgrade. The upgrade process takes approximately 45 minutes, regardless of the number of nodes.

The total I/O interruption per node IP is four minutes, allowing for a failover time of two minutes and a failback time of two additional minutes.

Client I/O having a timeout of more than two minutes is supported.

Preparing for the upgrade

To prepare for the upgrade, complete the following steps:

1. Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
2. Ensure that High Availability is enabled on each node in the cluster.
3. Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager:
   ssh <server_name>
   Repeat this command for each node in the cluster and verify that you are not prompted for a password at any time.
4. Ensure that no active tasks are running. Stop any active remote replication, data tiering, or rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again.
5. The 6.1 release requires that nodes hosting the agile management be registered on the cluster network. Run the following command to verify that nodes hosting the agile Fusion Manager have IP addresses on the cluster network:
   ibrix_fm -f
   If a node is configured on the user netw
to cause harmful interference, in which case the user will be required to correct the interference at personal expense.

Class B equipment

This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment

196 Regulatory compliance notices

off and on, the user is encouraged to try to correct the interference by one or more of the following measures:

• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected.
• Consult the dealer or an experienced radio or television technician for help.

Modification

The FCC requires the user to be notified that any changes or modifications made to this device that are not expressly approved by Hewlett-Packard Company may void the user's authority
how to replace and dispose of your batteries.

206 Regulatory compliance notices

German battery notice

WARNING: This product may contain a battery or a rechargeable battery pack.

• Do not attempt to recharge batteries or battery packs outside the device.
• Protect batteries and battery packs from moisture and temperatures above 60°C.
• Do not misuse batteries or battery packs, do not disassemble them, and avoid mechanical damage of any kind.
• Avoid short circuits, and do not expose batteries or battery packs to water or fire.
• Replace batteries and battery packs only with the spare parts specified by HP.

Batteries and battery packs must not be disposed of with normal household waste. To have them recycled or disposed of as hazardous waste, use the public collection points, or contact an HP partner regarding disposal.

For more information about replacing batteries and battery packs or about proper disposal, contact your HP partner or service partner.

Italian battery notice

Battery instructions

WARNING: This device may contain a battery.

• Do not attempt to recharge batteries once removed.
• Do not allow batteries to come into contact with water or be exposed to temperatures above 60°C.
• Do not disassemble, crush, puncture, or misuse
components yourself.

• For the laser equipment, do not use any controls or settings, and do not perform any adjustments or procedures, other than those described in this guide.
• Only HP-authorized technicians may repair the unit.

French laser notice

WARNING: This device may be equipped with a laser classified as a Class 1 Laser Product and compliant with U.S. FDA regulations and the IEC 60825-1 standard. This product does not emit hazardous radiation.

The use of controls, adjustments, or procedures other than those specified here or in the laser product's installation manual may expose the user to hazardous radiation. To reduce the risk of exposure to hazardous radiation:

• Do not attempt to open the enclosure containing the laser device. It contains no user-serviceable parts.
• Any control, adjustment, or procedure other than those described in this chapter must not be performed by the user.
• Only HP Authorized Service technicians may repair the laser device.

German laser notice

WARNING: This device may contain a laser certified as a Class 1 laser product in accordance with U.S. FDA regulations and IEC 60825-1. It does not emit hazardous laser radiation.

The instructions
ork, see "Node is not registered with the cluster network" (page 22) for a workaround.

Performing the upgrade

The online upgrade is supported only from the StoreAll 6.x to 6.1 release. Complete the following steps:

1. Obtain the latest HP StoreAll 6.1 ISO image from the StoreAll software dropbox. Contact HP Support to register for the release and obtain access to the dropbox.
2. Mount the ISO image and copy the entire directory structure to the /root/ibrix directory on the disk running the OS.
3. Change directory to /root/ibrix on the disk running the OS and then run chmod -R 777 * on the entire directory structure.
4. Run the upgrade script and follow the on-screen directions:
   ./auto_online_ibrixupgrade

Upgrading the StoreAll software to the 6.1 release 155

5. Upgrade Linux StoreAll clients. See "Upgrading Linux StoreAll clients" (page 18).
6. If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade

Complete these steps:

1. Start any remote replication, rebalancer, or data tiering tasks that were stopped before the upgrade.
2. If your cluster includes G6 servers, check the iLO2 firmware version. The firmware must be at version 2.05 for HA to function properly. If your servers have an earlier version of the iLO2 firmware, download iLO2 version 2.05 using the following URL and copy the firmware update to each G6 server. Follow the installat
sort order of each column. When this feature is available, mousing over a column causes the label to change color and a pointer to appear. Click the pointer to see the available options. In the following

Management interfaces 31

example, you can sort the contents of the Mountpoint column in ascending or descending order, and you can select the columns that you want to appear in the display.

(The Mountpoints panel shows Host, Mountpoint, Access, and State columns, with Sort Ascending, Sort Descending, and Columns options available from the column menu.)

Adding user accounts for Management Console access

StoreAll software supports administrative and user roles. When users log in under the administrative role, they can configure the cluster and initiate operations such as remote replication or snapshots. When users log in under the user role, they can view the cluster configuration and status, but cannot make configuration changes or initiate operations. The default administrative user name is ibrix. The default regular user name is ibrixuser.

User names for the administrative and user roles are defined in the /etc/group file. Administrative users are specified in the ibrix-admin group, and regular users are specified in the ibrix-user group. These groups are created when StoreAll software is installed. The following entries in the /etc/group fi
ory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

5. Change to the installer directory on the active management console, if necessary. Run the following command:
   ./auto_ibrixupgrade

   The upgrade script performs all necessary upgrade steps on every server in the cluster and logs progress in the upgrade.log file. The log file is located in the installer directory.

Manual upgrades

Upgrade paths

There are two manual upgrade paths: a standard upgrade and an agile upgrade.

• The standard upgrade is used on clusters having a dedicated Management Server machine or blade running the management console software.
• The agile upgrade is used on clusters having an agile management console configuration, where the management console software is installed in an active/passive configuration on two cluster nodes.

To determine whether you have an agile management console configuration, run the ibrix_fm -i command. If the output reports the status as quorum is not configured, your cluster does not have an agile configuration.

Be sure to use the upgrade procedure corresponding to your management console configuration:

• For standard upgrades, use page 168.
• For agile upgrades, use page 172.

Online and offline upgrades

Online and offline upgrade procedures are available for both the standard and agile upgrades.

• Online upgrades: This procedure upgrades the sof
Regulatory compliance notices ......................................................... 196
    Regulatory compliance identification number ................................ 196
    Federal Communications Commission notice .................................. 196
        FCC rating label ................................................................. 196
        Class A equipment ............................................................... 196
        Class B equipment ............................................................... 196
        Modification ..................................................................... 197
        Cables ............................................................................. 197
    Canadian notice (Avis Canadien) ................................................. 197
        Class A equipment ............................................................... 197
        Class B equipment ............................................................... 197
    European Union notice .............................................................. 197
    Japanese notices ..................................................................... 198
        Japanese VCCI-A notice ........................................................ 198
        Japanese VCCI-B notice ........................................................ 198
        Japanese VCCI marking ......................................................... 198
        Japanese power cord statement .............................................. 198
    Korean notices ....................................................................... 198
        Class A equipment ............................................................... 198
        Class B equipment ............................................................... 198
    Taiwanese notices ................................................................... 199
        BSMI Class A notice ............................................................ 199
        Taiwan battery recycle statement ........................................... 199
    Turkish recycling notice ........................................................... 199
    Vietnamese Information Technology and Communications compliance marking .. 199
    Laser compliance notices .......................................................... 199
        English laser notice ............................................................ 199
        Dutch laser notice .............................................................. 200
        French laser notice ............................................................. 200
        German laser notice ............................................................ 200
        Italian laser notice ............................................................. 201

8 Contents

        Japanese laser notice .......................................................... 201
        Spanish laser notice ............................................................ 201
    Recycling notices ................................................................... 202
        English recycling notice ....................................................... 202
        Bulgarian recycling notice .................................................... 202
        Czech recycling notice ......................................................... 202
        Danish recycling notice ........................................................ 202
        Dutch recycling notice ......................................................... 202
        Estonian recycling notice
page and update the following:

• Enter 9000 as the Custom Delivery ID.
• Select the System Country Code.
• Enter the appropriate Customer Contact and Site Information details.
• Contract and Warranty Information: Under Entitlement Information, specify the Customer Entered serial number, Customer Entered product number, System Country code, and Custom Delivery ID.

(The Contract and Warranty Information dialog box includes the Entitlement Information fields — Customer Entered serial number, Customer Entered product number, System Country code, Entitlement type, Obligation ID, and Custom Delivery ID — along with System Site Information and Customer Contact fields.)

44 Getting started

NOTE: For storage support on 9300 systems, do not set the Custom Delivery ID. The MSA is an exception; the Custom Delivery ID is set as previously described.

Verifying device entitlements

To verify the entitlement information in HP SIM, complete the following steps:

1. Go to Remote Support Configuration and Services and select the Entitlement tab.
2. Check the devices discovered.

NOTE: If the system discovered on HP SIM does not appear on the Entitlement tab, click Synchronize RSE.

3. Select Entitle Checked from the Action List.
4. Click Run Action.
5. When the entitlement check is complete, click Refresh.
upgrades

All file serving nodes and management consoles must be up when you perform the upgrade. If a node or management console is not up, the upgrade script will fail. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.

To upgrade all nodes in the cluster automatically, complete the following steps:

1. Check the dashboard on the management console GUI to verify that all nodes are up.
2. Obtain the latest release image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
3. Copy the release .iso file onto the current active management console.
4. Run the following command, specifying the location of the local iso copy as the argument:
   /usr/local/ibrix/setup/upgrade <iso>
   The upgrade script performs all necessary upgrade steps on every server in the cluster and logs progress in the file /usr/local/ibrix/setup/upgrade.log. After the script completes, each server will be automatically rebooted and will begin installing the latest software.
5. After the install is complete, the upgrade process automatically restores node-specific configuration information and the cluster should be running the latest software. If an UPGRADE FAILED message appears on the active management console, see the specified log file for details.

Manual upgrades

The manual upgrade process requires external storage that will be used to save the cluster configu
support.

NOTE: A failback might not succeed if the time period between the failover and the failback is too short, and the primary server has not fully recovered. HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback. Use the ibrix_server -l command to verify that the primary server is up and running. The status should be Up-FailedOver before performing the failback.

Setting up HBA monitoring

You can configure High Availability to initiate automated failover upon detection of a failed HBA. HBA monitoring can be set up for either dual-port HBAs with built-in standby switching or single-port HBAs, whether standalone or paired for standby switching via software. The StoreAll software

64 Configuring failover

does not play a role in vendor- or software-mediated HBA failover; traffic moves to the remaining functional port with no Fusion Manager involvement.

HBAs use worldwide names for some parameter values. These are either worldwide node names (WWNN) or worldwide port names (WWPN). The WWPN is the name an HBA presents when logging in to a SAN fabric. Worldwide names consist of 16 hexadecimal digits grouped in pairs. In StoreAll software, these are written as dot-separated pairs, for example: 21.00.00.e0.8b.05.05.04.

To set up HBA monitoring, first discover the HBAs, and then perform the procedure that matches your HBA hardware:

• For single-port HBAs without built-in standby
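The dotted-pair convention above is easy to normalize mechanically. The following sketch converts a WWPN from the colon-separated form commonly shown by HBA tools into the dot-separated form used in StoreAll software; format_wwn is an illustrative helper name, not a StoreAll utility.

```python
# Sketch of the worldwide-name convention described above: 16 hexadecimal
# digits written as dot-separated pairs (e.g. 21.00.00.e0.8b.05.05.04).
# format_wwn is an illustrative helper, not a StoreAll command.

def format_wwn(raw):
    """Normalize a 16-hex-digit WWN into the dotted-pair form."""
    digits = raw.replace(":", "").replace(".", "").lower()
    if len(digits) != 16 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError("a WWN must be exactly 16 hexadecimal digits")
    # Group the digits into eight pairs and join them with dots.
    return ".".join(digits[i:i + 2] for i in range(0, 16, 2))

print(format_wwn("21:00:00:E0:8B:05:05:04"))  # -> 21.00.00.e0.8b.05.05.04
```

The helper also rejects malformed input, which is useful when pasting WWPNs from switch or HBA listings into monitoring commands.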
ibrix_exportfs -U -h FSN4 -p /ibfs1
ibrix_cifs -d -s ibfs1 -h FSN4

Unmount ibfs1 from FSN4 and delete the mountpoint on FSN4 from the cluster:

ibrix_umount -f ibfs1 -h FSN4
ibrix_mountpoint -d -h FSN4 -m /ibfs1

Remove FSN4 from agile Fusion Manager quorum participation:

ibrix_fm -u FSN4

Delete FSN4 from the cluster:

ibrix_server -d -h FSN4

Reconfigure High Availability on FSN3, if needed. Enable HA:

ibrix_server -m

Save the new cluster configuration:

ibrix_fm -B

Removing a node from a cluster 121

14. Uninstall the StoreAll OS software from FSN4:

/usr/local/ibrix/local/installation/ibrix/ibrixinit -u -F

NOTE: If the same StoreAll OS version will be reinstalled on FSN4, use the following command instead:

/usr/local/ibrix/local/installation/ibrix/ibrixinit -U

The node is no longer in the cluster.

Maintaining networks

Cluster and user network interfaces

StoreAll software supports the following logical network interfaces:

• Cluster network interface. This network interface carries Fusion Manager traffic, traffic between file serving nodes, and traffic between file serving nodes and clients. A cluster can have only one cluster interface. For backup purposes, each file serving node can have two cluster NICs.
• User network interface. This network interface carries traffic between file serving nodes and clients. Multiple user network interfaces are permitted.

The cluster network interface was created for you when your cluster was installed
procedures other than those specified herein or in the laser product's installation guide may result in hazardous radiation exposure. To reduce the risk of exposure to hazardous radiation:

• Do not try to open the module enclosure. There are no user-serviceable components inside.
• Do not operate controls, make adjustments, or perform procedures to the laser device other than those specified herein.
• Allow only HP Authorized Service technicians to repair the unit.

Taiwanese notices 199

The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the United States.

Dutch laser notice

WARNING: This device may contain a laser classified as a Class 1 laser product in accordance with the U.S. FDA regulations and the IEC 60825-1 standard. This product does not emit hazardous laser radiation.

If you use controls, adjust settings, or perform procedures in a manner other than specified in this publication or in the laser product's installation guide, you risk exposure to hazardous radiation. To limit the risk of exposure to hazardous radiation:

• Do not try to open the module enclosure. You may not repair any
property. Follow all cautions and warnings included in the installation instructions.

WARNING: To reduce the risk of personal injury or damage to the equipment:

• Observe local occupational safety requirements and guidelines for heavy equipment handling.
• Obtain adequate assistance to lift and stabilize the product during installation or removal.
• Extend the leveling jacks to the floor.
• Rest the full weight of the rack on the leveling jacks.
• Attach stabilizing feet to the rack if it is a single-rack installation.
• Ensure the racks are coupled in multiple-rack installations.
• Fully extend the bottom stabilizers on the equipment. Ensure that the equipment is properly supported and braced when installing options and boards.
• Be careful when sliding rack components with slide rails into the rack. The slide rails could pinch your fingertips.
• Ensure that the rack is adequately stabilized before extending a rack component with slide rails outside the rack. Extend only one component at a time. A rack could become unstable if more than one component is extended for any reason.

Equipment symbols 193

WARNING: Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating. Consult the electrical au
backups in the first group. You can then reboot one group at a time.

To perform the rolling reboot, complete the following steps on each file serving node:

1. Reboot the node directly from Linux. (Do not use the "Power Off" functionality in the GUI, as it does not trigger failover of file serving services.) The node will fail over to its backup.
2. Wait for the GUI to report that the rebooted node is Up.
3. From the GUI, fail back the node, returning services to the node from its backup. Run the following command on the backup node:
   ibrix_server -f -U -h HOSTNAME
   HOSTNAME is the name of the node that you just rebooted.

Powering file serving nodes on or off 109

Starting and stopping processes

You can start, stop, and restart processes and can display status for the processes that perform internal StoreAll software functions. The following commands also control the operation of PostgreSQL on the machine. The PostgreSQL service is available at /usr/local/ibrix/init.

To start and stop processes and view process status on the Fusion Manager, use the following command:

/etc/init.d/ibrix_fusionmanager [start | stop | restart | status]

To start and stop processes and view process status on a file serving node, use the following command. In certain situations, a follow-up action is required after stopping, starting, or restarting a file serving node:

/etc/init.d/ibrix_server [start | stop | restart | status]

To start and stop processes and vie
To make a drive appear in Explorer after mounting it, log off and then log back on, or reboot the machine. You can also open an MS-DOS command window and access the drive manually.

Mounted drive not visible when using Terminal Server

Refresh the browser's view of the system by logging off and then logging back on.

Troubleshooting specific issues 141

StoreAll client auto-startup interferes with debugging

The StoreAll client is set to start automatically, which can interfere with debugging a Windows StoreAll client problem. To prevent this, reboot the machine in safe mode and change the Windows StoreAll client service mode to manual, which enables you to reboot without starting the client:

1. Open the Services control manager (Control Panel > Administrative Tools > Services).
2. Right-click StoreAll Client Services and select Properties.
3. Change the startup type to Manual, and then click OK.
4. Debug the client problem. When finished, switch the Windows StoreAll client service back to automatic startup at boot time by repeating these steps and changing the startup type to Automatic.

Synchronizing information on file serving nodes and the configuration database

To maintain access to a file system, file serving nodes must have current information about the file system. HP recommends that you execute ibrix_health on a regular basis to monitor the health of this information. If the information becomes outdated on a file serving node
(or faster) for user networks.

• NFS, SMB, FTP, and HTTP clients can use the same user VIF. The servers providing the VIF should be configured in backup pairs, and the NICs on those servers should also be configured for failover. See "Configuring High Availability on the cluster" in the administrator guide for information about performing this configuration from the GUI.
• For Linux and Windows StoreAll clients, the servers hosting the VIF should be configured in backup pairs. However, StoreAll clients do not support backup NICs. Instead, StoreAll clients should connect to the parent bond of the user VIF or to a different VIF.
• Ensure that your parent bonds (for example, bond0) have a defined route:

1. Check for the default Linux OS route gateway for each parent interface bond that was defined during the HP StoreAll installation by entering the following command at the command prompt:
   # route
   The output from the command is similar to the following:

   Kernel IP routing table
   Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
   15.226.48.0     *               255.255.255.0   U     0      0      0
   172.16.0.0      *               255.255.248.0   U     0      0      0
   169.254.0.0     *               255.255.0.0     U     0      0      0
   default         15.226.48.1     0.0.0.0         UG    0      0      0

   The default destination is the default gateway route for Linux. The default destination, which was defined during the HP StoreAll installation, had the operating system default gateway defined, but not for StoreAll.
2. Display network interfaces controlled by StoreAll by entering th
server ib69s2 and assign a standby NIC on server ib69s1. You can also specify a physical interface such as eth4 and create a standby NIC on the backup server for it.

The NICs panel on the GUI shows the NICs on the selected server. In the following example, there are four NICs on server ib69s1: bond0 (the active cluster FM IP), bond0:0 (the management IP VIF; this server is hosting the active FM), bond0:1 (the NIC created in this example), and bond0:2 (a standby NIC for an active NIC on server ib69s2).

60 Configuring failover

(The Servers panel shows ib69s1 and ib69s2 Up with HA on. The NICs panel for ib69s1 lists bond0:2 as an inactive standby NIC, bond0:1 as an active user NIC with standby server ib69s2, and the bond0 cluster NICs as Up, LinkUp.)

The NICs panel for ib69s2, the backup server, shows that bond0:1 is an inactive (standby) NIC and bond0:2 is an active NIC.
r message occurs because the file system exceeds <n> TB. If this occurs, HP recommends that before you perform operations such as unmounting a file system or stopping services on a file serving node (using the ibrix_server command), you first disable the hp-snmp-agents service on each server:

service hp-snmp-agents stop

After remounting the file system or restarting services on the file serving node, restart the hp-snmp-agents service on each server:

service hp-snmp-agents start

Prerequisites

The required components for supporting StoreAll systems are preinstalled on the file serving nodes. You must install HP Insight Remote Support on a separate Windows system termed the Central Management Server (CMS):

•   HP Systems Insight Manager (HP SIM). This software manages HP systems and is the easiest and least expensive way to maximize system uptime and health.

•   Insight Remote Support Advanced (IRSA). This version is integrated with HP Systems Insight Manager (SIM). It provides comprehensive remote monitoring, notification/advisories, dispatch, and proactive service support. IRSA and HP SIM together are referred to as the CMS.

•   The Phone Home configuration does not support backup or standby NICs that are used for NIC failover. If backup NICs are currently configured, remove the backup NICs from all nodes before configuring Phone Home. After a successful Phone Home configuration, you can reconfigure the backup NICs.
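The stop/maintenance/start sequence above can be wrapped in a small helper so the agents are always restarted after the maintenance action. This is a self-contained sketch: `run_cmd` is a stub that only prints the commands, and the `ibrix_umount` invocation is just an example of a maintenance action; on a real node you would run the `service` commands directly.

```shell
# Sketch: bracket a maintenance action with hp-snmp-agents stop/start.
# run_cmd is a print-only stub so the flow can be demonstrated safely.
run_cmd() { echo "would run: $*"; }

with_agents_stopped() {
  run_cmd service hp-snmp-agents stop    # disable agents first
  "$@"                                   # perform the maintenance action
  run_cmd service hp-snmp-agents start   # always restart the agents
}

with_agents_stopped run_cmd ibrix_umount -f myfs
```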
r upgrades from StoreAll software 5.6.x to the 6.1 release. Complete the following steps:

1.  Obtain the latest HP StoreAll 6.1 ISO image from the StoreAll software dropbox. Contact HP Support to register for the release and obtain access to the dropbox.

2.  Mount the ISO image and copy the entire directory structure to the /root/ibrix directory on the disk running the OS.

3.  Change directory to /root/ibrix on the disk running the OS, and then run chmod -R 777 on the entire directory structure.

4.  Run the following upgrade script:

    ./auto_ibrixupgrade

    The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes. The Fusion Manager is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the Fusion Manager is installed in passive mode on that server.

5.  Upgrade Linux StoreAll clients. See "Upgrading Linux StoreAll clients" (page 18).

6.  If you received a new license from HP, install it as described in the "Licensing" chapter in this guide.

After the upgrade

Complete the following steps:

1.  Run the following command to rediscover physical volumes:

    ibrix_pv -a

2.  Apply any custom tuning parameters, such as mount options.

3.  Remount all file systems.
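The staging steps (mount the ISO, copy, set permissions, run the upgrade script) can be collected into a short dry-run sequence that prints the commands rather than executing them. The ISO file name and the /mnt/iso mount point are assumptions for illustration, not names from this guide:

```shell
# Dry-run sketch of the staging steps; prints the commands, does not run them.
# The ISO name and /mnt/iso mount point are assumed for illustration.
iso="StoreAll-6.1.iso"
cmds="mount -o loop $iso /mnt/iso
cp -a /mnt/iso/. /root/ibrix/
chmod -R 777 /root/ibrix
cd /root/ibrix && ./auto_ibrixupgrade"
printf '%s\n' "$cmds"
```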
245. ration  Each server must be able to access this media directly  not through a network  as    the network configuration is part of the saved configuration  HP recommends that you use a USB  stick or DVD        NOTE  Be sure to read all instructions before starting the upgrade procedure        To determine which node is hosting the agile management console configuration  run the ibrix_ fm   i command     Preparing for the upgrade    Complete the following steps    1  Ensure that all nodes are up and running    2  On the active management console node  disable automated failover on all file serving nodes    lt ibrixhome gt  bin ibrix server  m  U    3  Run the following command to verify that automated failover is off  In the output  the HA column  should display of         lt ibrixhome gt  bin ibrix server  1    4  On the active management console node  stop the NFS and SMB services on all file serving  nodes to prevent NFS and SMB clients from timing out      lt ibrixhome gt  bin ibrix server  s  t cifs  c stop   lt ibrixhome gt  bin ibrix _server  s  t nfs  c stop  Verify that all likewise services are down on all file serving nodes   ps  ef   grep likewise   Use kill  9 to kill any likewise services that are still running     164 Cascading Upgrades    If file systems are mounted from a Windows StoreAll client  unmount the file systems using the  Windows client GUI     Unmount all StoreAll file systems      lt ibrixhome gt  bin ibrix umount  f  lt fsname gt     Saving the nod
...operations on a laser device other than those specified in these instructions (translated from Italian).

•   Refer repairs of the unit exclusively to HP authorized service technicians.

Japanese laser notice

[The Japanese notice is garbled in this copy. It carries the same Class 1 laser product warning, per US FDA regulations and IEC 60825-1, as the other language versions.]

Spanish laser notice (translated)

WARNING: This device could contain a laser classified as a Class 1 laser product in accordance with US FDA regulations and IEC 60825-1. The product does not emit hazardous laser radiation.

The use of controls, adjustments, or procedures other than those specified here or in the laser product's installation guide can produce hazardous radiation exposure. To avoid the risk of exposure to hazardous radiation:

•   Do not try to open the module cover. There are no user-serviceable components inside.

•   Do not perform any control, adjustment, or procedure on the laser device other than those specified here.

•   Allow only HP authorized service technicians to repair the unit.

Recycling notices

English recycling notice

Disposal of waste equipme
references, and allocation policies that have been set on their new host group, either restart StoreAll software services on the clients or execute the following commands locally:

•   ibrix_lwmount -a to force the client to pick up mounts or allocation policies

•   ibrix_lwhost -a to force the client to pick up host tunings

To delete a host group using the CLI:

ibrix_hostgroup -d -g GROUPNAME

Other host group operations

Additional host group operations are described in the following locations:

•   Creating or deleting a mountpoint, and mounting or unmounting a file system: see "Creating and mounting file systems" in the HP StoreAll Storage File System User Guide.

•   Changing host tuning parameters: see "Tuning file serving nodes and StoreAll clients" (page 110).

•   Preferring a network interface: see "Preferring network interfaces" (page 123).

•   Setting allocation policy: see "Using file allocation" in the HP StoreAll Storage File System User Guide.

9 Monitoring cluster operations

This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health.

Monitoring 9300/9320 hardware

The GUI displays status, firmware versions, and device information for the servers, virtual chassis, and system storage included in 9300 and 9320 systems.

Monitoring servers

To view information about the server and chassis included in your system:

1.  Select Servers from the
[GUI screenshot: the Ibrix Collect panel, listing collected data sets with their Name, Description, Initiator, Date Initiated, Type (Crash or Manual), and Size.]

3.  The data is stored locally on each node in a compressed archive file <nodename>_<filename>_<timestamp>.tgz under /local/ibrixcollect.

    Enter the name of the .tgz file that contains the collected data. The default location for this .tgz file is on the active Fusion Manager node at /local/ibrixcollect/archive.

    [GUI screenshot: the save dialog box, which prompts for the name of the zip file that will contain the collected logs and states; the file is saved under /local/ibrixcollect/archive.]

4.  Click Okay.

To collect logs and command results using the CLI, use the following command:

ibrix_collect -c -n NAME

NOTE: Only one manual collection of data is allowed at a time.

NOTE: When a node restores from a system crash, the vmcore under /var/crash/<timestamp>
r previous StoreAll installation on this node, the installer is in /root/ibrix.

3.  Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

4.  Change to the installer directory if necessary and execute the following command:

    ./ibrixupgrade -f

    The upgrade automatically stops services and restarts them when the process is complete.

5.  When the upgrade is complete, verify that the StoreAll software services are running on the node:

    /etc/init.d/ibrix_server status

    The output is similar to the following. If the IAD service is not running on your system, contact HP Support.

    IBRIX Filesystem Drivers loaded
    ibrcud is running (pid 23325)
    IBRIX IAD Server (pid 23368) running

6.  Verify that the ibrix and ipfs services are running:

    lsmod | grep ibrix
    ibrix 2323332 0 (unused)

    lsmod | grep ipfs
    ipfs1 102592 0 (unused)

    If either grep command returns empty, contact HP Support.

7.  From the management console, verify that the new version of StoreAll software FS/IAS is installed on the file serving node:

    <ibrixhome>/bin/ibrix_version -l -S

8.  If the upgrade was successful, fail back the file serving node:

    <ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME

9.  Repeat steps 1 through
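The module check in step 6 can be scripted against saved `lsmod` output. The listing below reuses the example module sizes from this section; it is a sketch, not a StoreAll tool:

```shell
# Check saved lsmod output for the ibrix and ipfs kernel modules.
# Module names and sizes are taken from the example output in this section.
modules='ibrix 2323332 0 (unused)
ipfs1 102592 0 (unused)'

for m in ibrix ipfs; do
  if printf '%s\n' "$modules" | grep -q "^$m"; then
    echo "$m: loaded"
  else
    echo "$m: MISSING - contact HP Support"
  fi
done
```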
rix_ndmp on
chkconfig ibrix_fusionmanager on

4.  Unmount the file systems and continue with the upgrade procedure.

Upgrading the StoreAll software to the 5.6 release

This section describes how to upgrade to the latest StoreAll software release. The management console and all file serving nodes must be upgraded to the new release at the same time.

Upgrades to the StoreAll software 5.6 release are supported for systems currently running StoreAll software 5.5.x. If your system is running an earlier release, first upgrade to the 5.5 release, and then upgrade to 5.6. The upgrade procedure upgrades the operating system to Red Hat Enterprise Linux 5.5.

IMPORTANT:

•   Ensure that the NFS exports option subtree_check is the default export option for every NFS export. See "Common issue across all upgrades from StoreAll 5.x" (page 154) for more information.

•   Do not start new remote replication jobs while a cluster upgrade is in progress. If replication jobs were running before the upgrade started, the jobs will continue to run without problems after the upgrade completes.

The upgrade to StoreAll software 5.6 is supported only as an offline upgrade. Because it requires an upgrade of the kernel, the local disk must be reformatted. Clients will experience a short interruption to administrative and file system access while the system is upgraded.

There are two upgrade procedures available, depending on the current installation. If you have a
rsion, the alternate partition:

/usr/local/ibrix/setup/boot_info -r

•   If the public network interface is down and inaccessible for any node, power cycle that node.

NOTE: Each node stores its ibrixupgrade log file in /tmp.

Manual upgrade

Check the following:

•   If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.

•   If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs for more detailed information.

To retry the copy of the configuration, use the following command:

/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s

Offline upgrade fails because iLO firmware is out of date

If the iLO2 firmware is out of date on a node, the auto_ibrixupgrade script will fail. The /usr/local/ibrix/setup/logs/auto_ibrixupgrade log reports the failure and describes how to update the firmware.

After updating the firmware, run the following command on the node to complete the StoreAll software upgrade:

/local/ibrix/ibrixupgrade -f

Node is not registered with the cluster network

Nodes hosting the agile Fusion Manager must be registered with the cluster network. If the ibrix_fm command reports that the IP address for a node is on the user network, you will need to reassign the IP address to the cluster network. For example, the following commands report that nod
rt StoreAll software services on the clients, reboot the clients, or execute ibrix_lwmount -a or ibrix_lwhost -a. When contacted, the Fusion Manager informs the clients about commands that were executed on host groups to which they belong. The clients then use this information to perform the operation.

You can also use host groups to perform different operations on different sets of clients. To do this, create a host group tree that includes the necessary host groups. You can then assign the clients manually, or the Fusion Manager can automatically perform the assignment when you register a StoreAll client, based on the client's cluster subnet. To use automatic assignment, create a domain rule that specifies the cluster subnet for the host group.

Creating a host group tree

The clients host group is the root element of the host group tree. Each host group in a tree can have only one parent, but a parent can have multiple children. In a host group tree, operations performed on lower-level nodes take precedence over operations performed on higher-level nodes. This means that you can effectively establish global client settings that you can override for specific clients.

For example, suppose that you want all clients to be able to mount file system ifs1 and to implement a set of host tunings denoted as Tuning 1, but you want to override these global settings for certain host groups. To do this, mount ifs1 on the clients host group, ifs2 on host group A
rted backup programs; do not run other applications directly on the nodes.

Setting up the system

Installation steps

An HP service specialist sets up the system at your site, including the following tasks:

•   Remove the product from the shipping cartons that you have placed in the location where the product will be installed, confirm the contents of each carton against the list of included items, check for any physical damage to the exterior of the product, and connect the product to the power and network provided by you.

•   Review your server, network, and storage environment relevant to the HP Enterprise NAS product implementation to validate that prerequisites have been met.

•   Validate that your file system performance, availability, and manageability requirements have not changed since the service planning phase. Finalize the HP Enterprise NAS product implementation plan and software configuration.

•   Implement the documented and agreed-upon configuration based on the information you provided on the pre-delivery checklist.

•   Document configuration details.

Additional configuration steps

When your system is up and running, you can continue configuring the cluster and file systems. The Management Console GUI and CLI are used to perform most operations. (Some features described here may be configured for you as part of the system installation.)

Cluster. Configure the following as needed:

•   Firewall ports. See "Configuring ports for a fir
ry file. After this succeeds, the restore operation overwrites the existing file, if it is present in the same destination directory, with the temporary file. When the hard quota limit for the directory tree has been exceeded, NDMP cannot create a temporary file and the restore operation fails.

Configuring NDMP parameters on the cluster

Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster. To configure the parameters on the GUI, select Cluster Configuration from the Navigator, and then select NDMP Backup. The NDMP Configuration Summary shows the default values for the parameters. Click Modify to configure the parameters for your cluster on the Configure NDMP dialog box. See the online help for a description of each field.

[Configure NDMP dialog box, showing the default values: Enable NDMP Sessions (Yes), Minimum Port Number (1025), Maximum Port Number (65535), Listener Port Number (10000), Username (ndmp), Password (ndmp), Log Level (0), TCP Window Size in Bytes (163840), Concurrent Sessions (128), and a list of DMA IP Addresses.]

To configure NDMP parameters from the CLI, use the following command:

ibrix_ndmpconfig -c [-d IP1,IP2,IP3,...] [-m MINPORT] [-x MAXPORT] [-n LISTENPORT] [-u USERNAME] [-p PASSWORD] [-e {0=disable|1=enable}] [-v {0-10}] [-w BYTES] [-z NUMSESSIONS]
s the following information:

    ◦   Level
    ◦   Time
    ◦   Event

•   Hardware. The Hardware panel displays the following information:

    ◦   The name of the hardware component.
    ◦   The information gathered in regards to that hardware component.

See "Monitoring hardware components" (page 88) for detailed information about the Hardware panel.

Monitoring hardware components

The Management Console provides information about the server hardware and its components. The 9300/9320 can be grouped virtually, which the Management Console interprets as a virtual chassis.

To monitor these components from the GUI:

1.  Click Servers from the upper Navigator tree.

2.  Click Hardware from the lower Navigator tree for information about the chassis that contains the server selected on the Servers panel, as shown in the following image.

    [GUI screenshot: the Hardware panel, listing Name ch_500c0ff11ach7000, Chassis type 9320, and Serial Number 500c0ff11ach7000.]

Obtaining server details

The Management Console provides detailed information for each server in the chassis. To obtain summary information for a server, select the Server node under the Hardware node.

The following overview information is provided for each server:

•   Status
•   Type
•   Name
•   UUID
•   Serial number
•   Model
•   Firmware version
•   Message
•   Diagnostic Message
s to Dumping, InFailover.

3.  The failed node continues with the failover, changing state to Dumping, FailedOver.

4.  After the core dump is created, the failed node reboots and its state changes to Up, FailedOver.

IMPORTANT: Complete the steps in "Prerequisites for setting up the crash capture" (page 68) before setting up the crash capture.

Prerequisites for setting up the crash capture

The following parameters must be configured in the ROM-Based Setup Utility (RBSU) before a crash can be captured automatically on a file serving node in a failed condition:

1.  Start RBSU: reboot the server, and then press the F9 key.

2.  Highlight the System Options option in the main menu, and then press the Enter key. Highlight the Virtual Serial Port option (see the following figure), and then press the Enter key. Select the COM1 port, and then press the Enter key.

    [RBSU screen: ROM-Based Setup Utility, Version 3.88. <arrow keys> Changes Configuration Selection; <Enter> Saves Selection; <ESC> to Cancel.]

3.  Highlight the BIOS Serial Console & EMS option in the main menu, and then press the Enter key. Highlight the BIOS Serial Console Port option, and then press the Enter key. Select the COM1 port, and then press the Enter key.

4.  Highlight the BIOS Serial Console Baud Rate option, and then press the Enter key. Select the 115200 serial baud rate.

5.  Highl
s were running before the upgrade started, the jobs will continue to run without problems after the upgrade completes.

•   If you are upgrading from a StoreAll 5.x release, ensure that the NFS exports option subtree_check is the default export option for every NFS export. See "Common issue across all upgrades from StoreAll 5.x" (page 154) for more information.

NOTE: If you are upgrading from a StoreAll 5.x release, any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.

Upgrades can be run either online or offline:

•   Online upgrades. This procedure upgrades the software while file systems remain mounted. Before upgrading a file serving node, you will need to fail the node over to its backup node, allowing file system access to continue. This procedure cannot be used for major upgrades, but is appropriate for minor and maintenance upgrades.

•   Offline upgrades. This procedure requires that file systems be unmounted on the node and that services be stopped. (Each file serving node may need to be rebooted if NFS or SMB causes the unmount operation to fail.) You can then perform the upgrade. Clients experience a short interruption to file system access while each file serving node is upgraded.

You can use an automatic or a manual procedure to perform an offline upgrade. Online upgrades must
seconds for the failover to complete, and then run the following command on the node that was the target for the failover:

<ibrixhome>/bin/ibrix_fm -i

The command should report that the agile management console is now Active on this node.

On the node with the active agile management console, move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.

On the node with the active agile management console, expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.

Change to the installer directory if necessary and run the upgrade:

./ibrixupgrade -f

The installer upgrades both the management console software and the file serving node software on this node.

Verify the status of the management console:

/etc/init.d/ibrix_fusionmanager status

The status command confirms whether the correct services are running. Output will be similar to the following:

Fusion Manager Daemon (pid 18748) running

Also run the following command, which should report that the console is
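The wait-then-verify step above can be expressed as a small polling loop. In this sketch, `fm_status` is a stand-in stub for `<ibrixhome>/bin/ibrix_fm -i` (the real command from this guide) so the loop can run self-contained; on a cluster you would call the real command instead:

```shell
# Sketch: poll until the management console reports Active after a failover.
# fm_status is a stub standing in for <ibrixhome>/bin/ibrix_fm -i.
fm_status() { echo "Fusion Manager: Active"; }

for attempt in 1 2 3; do
  if fm_status | grep -q "Active"; then
    echo "management console active (attempt $attempt)"
    break
  fi
  sleep 60   # give the failover more time before retrying
done
```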
Monitoring hardware components..........................................................88
Monitoring storage and storage components...............................................92
Managing (illegible entry)..............................................................93
Monitoring the status of file serving nodes.............................................93
Monitoring events.......................................................................94
(illegible entry).......................................................................94
Removing events from the events database................................................95
Monitoring cluster health...............................................................95
Health checks...........................................................................96
Health check reports....................................................................96
Viewing logs............................................................................98
Viewing operating statistics for file serving nodes.....................................98
10 Using the Statistics tool...........................................................100
Installing and configuring the Statistics tool.........................................100
Installing the Statistics tool.........................................................100
Enabling collection and synchronization................................................100
Upgrading the Statistics tool from StoreAll software 6.0...............................101
Using the Historical Reports GUI.......................................................101
(illegible entry)......................................................................102
(illegible entry)......................................................................103
Maintaining the Statistics tool........................................................103
Space requirements.....................................................................103
Updating the Statistics tool...........................................................104
Changing the Statistics tool configuration.............................................104
Fusion Manager failover and the Statistics tool configuration..........................104
Checking the status of Statistics tool processes.......................................105
Controlling Statistics tool processes..................................................105
Troubleshooting........................................................................105
(illegible entry)......................................................................106
Uninstalling the Statistics tool.......................................................106
11 (illegible chapter title)...........................................................107
Shutting down the system...............................................................107
Shutt
segment is being evacuated. Running this utility while the file system is active may result in data inconsistency or loss.

To evacuate a segment, complete the following steps:

1.  Identify the segment residing on the physical volume to be removed. Select Storage from the Navigator on the GUI. Note the file system and segment number on the affected physical volume.

2.  Locate other segments on the file system that can accommodate the data being evacuated from the affected segment. Select the file system on the GUI and then select Segments from the lower Navigator. If segments with adequate space are not available, add segments to the file system.

3.  Evacuate the segment. Select the file system on the GUI, select Segments from the lower Navigator, and then click Rebalance/Evacuate on the Segments panel. When the Segment Rebalance and Evacuation Wizard opens, select Evacuate as the mode.

[GUI screenshot: the Select Mode step of the Segment Rebalance and Evacuation Wizard. The wizard text reads: "This wizard will guide you in starting a segment data rebalancer or evacuation task. Rebalancing segment data involves redistributing files among segments in a filesystem to balance utilization and server workload. Evacuation of segment data allows the segment to be reallocated to another filesystem or retired for maintenance. Evacuation removes the data entirely and rebalances it among remaining segments. Note the ability to rebalance data evenly depends on the avera
series.

Using HP MSA Disk Arrays

•   HP 2000 G2 Modular Smart Array Reference Guide
•   HP 2000 G2 Modular Smart Array CLI Reference Guide
•   HP P2000 G3 MSA System CLI Reference Guide
•   Online help for the HP Storage Management Utility (SMU) and Command Line Interface (CLI)

To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select storage > Disk Storage Systems > MSA Disk Arrays > HP 2000sa G2 Modular Smart Array or HP P2000 G3 MSA Array Systems.

Obtaining spare parts

For the latest spare parts information, go to http://partsurfer.hp.com. Enter your product SKU number to view a list of parts. If you do not have the SKU number, click the Hierarchy tab and navigate to your product to view a list of SKUs.

HP websites

For additional information, see the following HP websites:

•   http://www.hp.com/go/StoreAll
•   http://www.hp.com
•   http://www.hp.com/go/storage
•   http://www.hp.com/service_locator
•   http://www.hp.com/support/manuals
•   http://www.hp.com/support/downloads
•   http://www.hp.com/storage/whitepapers

Rack stability

Rack stability protects personnel and equipment.

WARNING: To reduce the risk of personal injury or damage to equipment:

•   Extend leveling jacks to the floor.
•   Ensure that the full weight of the rack rests on the leveling jacks.
•   Install stabilizing feet on the rack.
•   In multiple-rack installations, fasten ra
ses and to simplify recovery of files from accidental deletion. Users can access the file system or directory as it appeared at the instant of the snapshot.

•   Block Snapshots. This feature uses the array capabilities to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system.

•   File allocation. Use this feature to specify the manner in which segments are selected for storing new files and directories.

•   Data tiering. Use this feature to move files to specific tiers based on file attributes.

For more information about these file system features, see the HP StoreAll Storage File System User Guide.

Localization support

Red Hat Enterprise Linux 5 uses the UTF-8 (8-bit Unicode Transformation Format) encoding for supported locales. This allows you to create, edit, and view documents written in different locales using UTF-8. StoreAll software supports modifying the /etc/sysconfig/i18n configuration file for your locale. The following example sets the LANG and SUPPORTED variables for multiple character sets:

LANG="ko_KR.utf8"
SUPPORTED="en_US.utf-8:en_US:en:ko_KR.utf-8:ko_KR:ko:zh_CN.utf-8:zh_CN:zh"
SYSFONT="lat0-sun16"
SYSFONTACM="iso15"

Management interfaces

Cluster operations are managed through the StoreAll Fusion Manager, which provi
should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Acknowledgments

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.

Warranty

WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website: http://www.hp.com/go/storagewarranty

Revision History

Edition  Date            Software Version  Description
1        December 2009   5.3.1             Initial release of the 9300 Storage Gateway and 9320 Network Storage System administration guides
2        April 2010      5.4               Added network management and support ticket
3        August 2010     5.4.1             Added management console backup, migration to an agile management console configuration, software upgrade procedures, and system recovery procedures
4        August 2010     5.4.1             Revised upgrade procedure
5        December 2010   5.5               Added information about NDMP backups and configuring virtual interfaces, and updated cluster procedures
6        March 2011      5.5               Updated segment evacuation information
7        April 2011      5.6               Revised upgrade procedure and updated server information
8        September 2011  6.0               Added or revised information about agile management console, NTP servers, Statistics tool, Ibrix Collect, event notification, upgrades
9        June 2012       6.1               Combined the 9300 and 9320 administration guides; added or revised information
sion indicators do not match, contact HP Support.

6.  Verify the health of the cluster:

    <ibrixhome>/bin/ibrix_health -l

    The output should show Passed.

Agile upgrade for clusters with an agile management console configuration

Use these procedures if your cluster has an agile management console configuration. The StoreAll software 5.4.x to 5.5 upgrade can be performed either online or offline. Future releases may require offline upgrades.

NOTE: Be sure to read all instructions before starting the upgrade procedure.

Agile online upgrade

Perform the agile online upgrade in the following order:

•   File serving node hosting the active management console
•   File serving node hosting the passive management console
•   Remaining file serving nodes and StoreAll clients

Upgrading the file serving nodes hosting the management console

Complete the following steps:

1.  On the node hosting the active management console, force a backup of the management console configuration:

    <ibrixhome>/bin/ibrix_fm -B

    The output is stored at /usr/local/ibrix/tmp/fmbackup.zip. Be sure to save this file in a location outside of the cluster.

2.  On the active management console node, disable automated failover on all file serving nodes:

    <ibrixhome>/bin/ibrix_server -m -U

3.  Verify that automated failover is off:

    <ibrixhome>/bin/ibrix_server -l

    In the output, the HA column should display off.
also configures a power source such as an iLO on each server. The Fusion Manager uses the power source to power down the server during a failover.

On the GUI, select Servers from the Navigator.

[Servers panel listing each server's Status, Name, State, CPU (%), Net (MB/s), Disk (MB/s), Backup, and HA columns, with the Navigator showing Dashboard, Cluster Configuration, Filesystems, Snapshots, Servers, and File Shares]

Click High Availability to start the wizard. Typically, backup servers are configured and server HA is enabled when your system is installed, and the Server HA Pair dialog box shows the backup pair configuration for the server selected on the Servers panel.

If necessary, you can configure the backup pair for the server. The wizard identifies the servers in the cluster that see the same storage as the selected server. Choose the appropriate server from the list.

The wizard also attempts to locate the IP addresses of the iLOs on each server. If it cannot locate an IP address, you will need to enter the address on the dialog box. When you have completed the information, click Enable HA Monitoring and Auto Failover for both servers.

Configuring High Availability on the cluster 55

Server HA Pair / NIC HA Setup

Server High Availability works by designating server pairs as backups of each
sociated bitmasks that identify which sub-identifiers are significant to the view's definition. Using the bitmasks, individual OID subtrees can be included in or excluded from the view.

An instance of a managed object belongs to a view if:
• The OID of the instance has at least as many sub-identifiers as the OID subtree in the view.
• Each sub-identifier in the instance and the subtree match when the bitmask of the corresponding sub-identifier is nonzero.

The Fusion Manager automatically creates the excludeAll view that blocks access to all OIDs. This view cannot be deleted; it is the default read and write view if one is not specified for a group with the ibrix_snmpgroup command. The catch-all OID and mask are:
OID = .1
Mask = .1

Consider these examples, where instance .1.3.6.1.2.1.1 matches, instance .1.3.6.1.4.1 matches, and instance .1.2.6.1.2.1 does not match:
OID = .1.3.6.1.2.1
Mask = .1.1.1.1.0.1

To add a pairing of an OID subtree value and a mask value to a new or existing view, use the following format:
ibrix_snmpview -a -v VIEWNAME [-t {include|exclude}] -o OID_SUBTREE [-m MASK_BITS]
The subtree is added in the named view. For example, to add the StoreAll software private MIB to the view named hp, enter:
ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1

Configuring groups and users

74

A group defines the access control policy on managed objects for
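The view-matching rule described above (instance length and masked sub-identifier comparison) can be sketched as a small helper. This is a hypothetical illustration, not part of the StoreAll software; the function name and the mask value are assumptions chosen to reproduce the match/no-match examples in the text.

```python
# Hypothetical sketch of the VACM-style view matching rule: an instance OID
# belongs to a view subtree when it has at least as many sub-identifiers, and
# every sub-identifier whose mask bit is nonzero matches the subtree.
def oid_matches(instance: str, subtree: str, mask: str) -> bool:
    inst = [int(x) for x in instance.strip(".").split(".")]
    tree = [int(x) for x in subtree.strip(".").split(".")]
    bits = [int(x) for x in mask.strip(".").split(".")]
    if len(inst) < len(tree):
        return False  # instance must be at least as long as the subtree
    # Only positions with a nonzero mask bit are compared.
    return all(b == 0 or i == t for i, t, b in zip(inst, tree, bits))

# With subtree .1.3.6.1.2.1 and mask .1.1.1.1.0.1 (fifth sub-identifier masked out):
print(oid_matches(".1.3.6.1.2.1.1", ".1.3.6.1.2.1", ".1.1.1.1.0.1"))  # True
print(oid_matches(".1.3.6.1.4.1",   ".1.3.6.1.2.1", ".1.1.1.1.0.1"))  # True
print(oid_matches(".1.2.6.1.2.1",   ".1.3.6.1.2.1", ".1.1.1.1.0.1"))  # False
```

A zero mask bit acts as a wildcard for that position, which is why .1.3.6.1.4.1 still matches the .1.3.6.1.2.1 subtree in this example.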
static-sensitive devices. This type of damage could reduce the life expectancy of the device.

Preventing electrostatic discharge
To prevent electrostatic damage, observe the following precautions:
• Avoid hand contact by transporting and storing products in static-safe containers.
• Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.
• Place parts on a grounded surface before removing them from their containers.
• Avoid touching pins, leads, or circuitry.
• Always be properly grounded when touching a static-sensitive component or assembly.

Grounding methods
There are several methods for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
• Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on conductive floors or dissipating floor mats.
• Use conductive field service tools.
• Use a portable field service kit with a folding static-dissipating work mat.

If you do not have any of the suggested equipment for proper grounding, have an HP authorized reseller install the part.

NOTE: For more information on static electricity or
system components, see "System component and cabling diagrams for 9320 systems" (page 183).
For a complete list of system components, see the HP StoreAll Storage System QuickSpecs, which are available at:
http://www.hp.com/go/StoreAll

HP StoreAll software features
HP StoreAll software is a scale-out, network-attached storage solution including a parallel file system for clusters, an integrated volume manager, high-availability features such as automatic failover of multiple components, and a centralized management interface. StoreAll software can scale to thousands of nodes.

Based on a segmented file system architecture, StoreAll software integrates I/O and storage systems into a single clustered environment that can be shared across multiple applications and managed from a central Fusion Manager.

9300 Storage Gateway 25

StoreAll software is designed to operate with high-performance computing applications that require high I/O bandwidth, high IOPS throughput, and scalable configurations.

Some of the key features and benefits are as follows:
• Scalable configuration. You can add servers to scale performance and add storage devices to scale capacity.
• Single namespace. All directories and files are contained in the same namespace.
• Multiple environments. Operates in both the SAN and DAS environments.
• High availability. The high-availability software protects servers.
• Tuning capability. The system can be tuned for large or small block I
ibrix_server -n -h SRCHOST -A DESTHOST/IFNAME
ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME
Execute this command once for each destination host that the file serving node or StoreAll client should contact using the specified network interface (IFNAME). For example, to prefer network interface eth3 for traffic from file serving node s1.hp.com to file serving node s2.hp.com:
ibrix_server -n -h s1.hp.com -A s2.hp.com/eth3

Preferring a network interface for a Windows StoreAll client
If multiple user network interfaces are configured on the cluster, you will need to select the preferred interface for this client. On the Windows StoreAll client GUI, specify the interface on the Tune Host tab, as in the following example:

[Ibrix Client window with Status, Registration, Mount, Umount, Tune Host, and Active Directory Settings tabs, showing the Host and Host Interface Name fields]

Preferring a network interface for a host group
You can prefer an interface for multiple StoreAll clients at one time by specifying a host group. To prefer a user network interface for all StoreAll clients, specify the clients host group. After preferring a network interface for a host group, you can locally override the preference on individual StoreAll clients with the command ibrix_lwhost.
To prefer a network interface for a host group, use the following command:

124 Maintaining the system

ibrix_hostgroup -n -g HOSTGROUP -A DESTHOST/IFNAME
The
270. t Naming Service    WWN World Wide Name  A unique identifier assigned to a Fibre Channel device    WWNN World wide node name  A globally unique 64 bit identifier assigned to each Fibre Channel node  process    WWPN World wide port name  A unique 64 bit address used in a FC storage network to identify each    device in a FC network     210 Glossary       Index    Symbols   etc syscontig i18n file  28  9300 system  components  25  configuration  27  features  25  management interfaces  28  shut down  107  software  25  start  108  9320 system  components  25  configuration  27  features  25  management interfaces  28  shutdown  107  software  25  start  108    A  agile Fusion Manager  53  AutoPass  128    B    backups  file systems  77  Fusion Manager configuration  77  NDMP applications  77   battery replacement notices  206    C  CLI  32  clients  access virtual interfaces  50  cluster  events  monitor  94  health checks  95  license key  128  license  view  128  log files  98  operating statistics  98  version numbers  view  140  cluster interface  change IP address  125  change network  125  defined  122  components  9300 diagrams  180  9320 diagrams  183  contacting HP  15   core dump  68    D    Disposal of waste equipment  European Union  202  document    related information  151  documentation   HP website  151   providing feedback on  153    E    email event notification  70   events  cluster  add SNMPv3 users and groups  74  configure email notification  70  configure
the amount of cluster storage space that is currently free or in use.
• File systems: The current health status of the file systems in the cluster. The overview reports the number of file systems in each state: healthy, experiencing a warning, experiencing an alert, or unknown.
• Segment Servers: The current health status of the file serving nodes in the cluster. The overview reports the number of nodes in each state: healthy, experiencing a warning, experiencing an alert, or unknown.
• Services: Whether the specified file system services are currently running. An icon indicates whether one or more tasks are running or no tasks are running.

30 Getting started

• Statistics: Historical performance graphs for the following items:
  • Network I/O (MB/s)
  • Disk I/O (MB/s)
  • CPU usage (%)
  • Memory usage (%)
On each graph, the X-axis represents time and the Y-axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y-axis, and to show or hide resource usage distribution for CPU and memory.
• Recent Events: The most recent cluster events. Use the Recent Events menu to select the type of events to display.

You can also access certain menu items directly from the Cluster Overview. Mouse over the Capacity, Filesystems or Segment Server indicators to see the available options.

Navigator
The Navigator appears on the left side of the window and displays the cluster hierarchy. You
ibrix_collect_additional_data directory, which contains the output of the add-on script:
[root@host2 archive]# cd host2/logs/add_on_script/local/ibrixcollect/ibrix_collect_additional_data
In this instance, host2 is the name of the host.
7. View the contents of the <hostname>/logs/add_on_script/local/ibrixcollect/ibrix_collect_additional_data directory:
[root@host2 ibrix_collect_additional_data]# ls -l
The command displays the following output:
total 4
-rw-r--r-- 1 root root 2636 Dec 20 12:39 63_AddOnTest.out
In this instance, 63_AddOnTest.out displays the output of the add-on script.

Viewing data collection information
To view data collection history from the CLI, use the following command:

Collecting information for HP Support with Ibrix Collect 139

ibrix_collect -l
To view data collection details such as date (of creation), size, description, state, and initiator, use the following command:
ibrix_collect -v -n <Name>

Adding/deleting commands or logs in the XML file
To add or change the logs that are collected or the commands that are executed during data collection, you can modify the Ibrix Collect XML files that are stored in the directory /usr/local/ibrix/ibrixcollect.
The commands executed and the logs collected during data collection are maintained in the following files under the /usr/local/ibrix/ibrixcollect directory:
• fm_summary.xml: Commands pertaining to the Fusion Manag
the agile management console:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
If the install of the new image succeeds, but the configuration restore fails and you need to revert the server to the previous install, execute boot_info -r and then reboot the machine. This step causes the server to boot from the old version (the alternate partition).
If the public network interface is down and inaccessible for any node, power cycle that node.

Manual upgrade
Check the following:
• If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.
• If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information.
• To retry the copy of configuration, use the command appropriate for your server:
  • A file serving node:
  /usr/local/ibrix/autocfg/bin/ibrixapp upgrade -s
  • An agile node (a file serving node hosting the agile management console):
  /usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s

Upgrading the StoreAll software to the 5.5 release
This section describes how to upgrade to the StoreAll software 5.5 release. The management console and all file serving nodes must be upgraded to the new release at the same time.

IMPORTANT:
• Do not start new remote replication jobs while a cluster upgrade is in progress. If replication job
the following command:
ibrix_collect -C -r NUMBER
2. To configure emails containing a summary of the collected information of each node to be sent automatically to your desktop after every data collection event:
a. Select Cluster Configuration, and then select Ibrix Collect.
b. Click Modify.
c. Under Email Settings, enable or disable sending the cluster configuration by email by checking or unchecking the appropriate box.
d. Fill in the remaining required fields for the cluster configuration and click Okay.
To set up email settings to send cluster configurations using the CLI, use the following command:
ibrix_collect -C -m <Yes|No> -s <SMTP_server> -f <From> -t <To>

136 Troubleshooting

NOTE: More than one email ID can be specified for the -t option, separated by a semicolon. The "From" and "To" addresses for this SMTP server are Ibrix Collect-specific.

Obtaining custom logging from ibrix_collect add-on scripts
You can create add-on scripts that capture custom StoreAll and operating system commands and logs. To activate an add-on script, place it in the specified location, and the add-on script will run when the ibrix_collect command is executed. Output of these add-on scripts is packaged into IbrixCollect tar files.

Table 3 ibrix_collect add-on scripts
Step | Description | Where to find more information
1 | Create an add-on script. | "Creating an add-on script"
the following based on the Proposed Action and Severity:

Status in Proposed Action column | Status in Severity column | Go to
UPGRADE | MANDATORY | Step 3
UPGRADE | RECOMMENDED | Step 3 is optional. However, it is recommended to perform step 3 for system stability and to avoid any known issues.
NONE or DOWNGRADE | MANDATORY | Step 4
NONE or DOWNGRADE | RECOMMENDED | Step 4 is optional. However, it is recommended to perform step 4 for system stability and to avoid any known issues.

3. Perform the flash operation by entering the following command and then go to step 5:
hpsp_fmt -flash -c <component name>
The following output shows a successful flash operation (the operation may take 10 minutes):
Flash succeeded for Integrated Lights Out (iLO) using CP016462.scexe
Reboot Required: No
4. Perform the flash operation by entering the following command and then go to step 5:
hpsp_fmt -flash -c <component name> --force
5. If the components require a reboot on flash, fail over the FSN for continuous operation as described in the following steps.

NOTE: Although the following steps are based on a two-node cluster, all steps can be used in multi-node clusters.

Steps for upgrading the firmware 131

a. Determine whether the node to be flashed is the active Fusion Manager by entering the following command:
thority having jurisdiction over your facility wiring and installation requirements.

Device warnings and precautions

WARNING: To reduce the risk of electric shock or damage to the equipment:
• Allow the product to cool before removing covers and touching internal components.
• Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
• Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
• Disconnect power from the device by unplugging the power cord from either the electrical outlet or the device.
• Do not use conductive tools that could bridge live parts.
• Remove all watches, rings, or loose jewelry when working in hot-plug areas of an energized device.
• Install the device in a controlled-access location where only qualified personnel have access to the device.
• Power off the equipment and disconnect power to all AC power cords before removing any access covers for non-hot-pluggable areas.
• Do not replace non-hot-pluggable components while power is applied to the product. Power off the device and then disconnect all AC power cords.
• Do not exceed the level of repair specified in the procedures in the product documentation. All troubleshooting and repair procedures are detailed to allow only subassembly- or module-level repair. Because of the complexity of the individual boards and subassemblies, do not attempt to make rep
tible, install the update with the vendor RPM and reboot the system. The StoreAll client software is then automatically updated with the new kernel, and StoreAll client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client.

NOTE: To use the verify_client command, the StoreAll client software must be installed.

Upgrading the StoreAll software to the 6.3 release

Upgrading Windows StoreAll clients
Complete the following steps on each client:
1. Remove the old Windows StoreAll client software using the Add or Remove Programs utility in the Control Panel.
2. Copy the Windows StoreAll client MSI file for the upgrade to the machine.
3. Launch the Windows Installer and follow the instructions to complete the upgrade.
4. Register the Windows StoreAll client again with the cluster and check the option to Start Service after Registration.
5. Check Administrative Tools > Services to verify that the StoreAll client service is started.
6. Launch the Windows StoreAll client. On the Active Directory Settings tab, click Update to retrieve the current Active Directory settings.
7. Mount file systems using the StoreAll Windows client GUI.

NOTE: If you are using Remote Desktop to perform an upgrade, you must log out and log back in to see the drive mounted.

Upgrading pre-6.3 Express Query enabled file systems
The internal database schema format of Express Query enabled
to the 6.3 release.

2 Product description
This guide provides information about configuring, monitoring, and maintaining HP StoreAll 9300 Storage Gateways and 9320 Storage Systems.

IMPORTANT: It is important to keep regular backups of the cluster configuration.

9300 Storage Gateway
The 9300 Storage Gateway is a flexible, scale-out solution that brings gateway file services to HP MSA, EVA, P4000, or third-party arrays or SANs. The system provides the following features:
• Segmented, scalable file system under a single namespace
• NFS, SMB (Server Message Block), FTP, and HTTP support for accessing file system data
• Centralized CLI and GUI cluster management
• Policy management
• Continuous remote replication

9320 Storage System
The 9320 Storage System is a highly available, scale-out storage solution for file data workloads. The system combines HP StoreAll File Serving Software with HP server and storage hardware to create an expansible cluster of file serving nodes. The system provides the following features:
• Segmented, scalable file system under a single namespace
• NFS, SMB, FTP, and HTTP support for accessing file system data
• Centralized CLI and GUI cluster management
• Policy management
• Continuous remote replication
• Dual redundant paths to all storage components
• Gigabytes-per-second throughput

System Components
For 9300 system components, see "Component diagrams for 9300 systems" (page 180).
For 9320
StoreAll software 5.5 system that was installed through the QR procedure, you can use the automatic upgrade procedure. If you used an upgrade procedure to install your StoreAll software 5.5 system, you must use the manual procedure. To determine if your system was installed using the QR procedure, run the df command. If you see separate file systems mounted on /local, /stage, and /alt, your system was quick restored and you can use the automated upgrade procedure. If you do not see these mount points, proceed with the manual upgrade process.
• Automatic upgrades: This process uses separate partitioned space on the local disk to save node-specific configuration information. After each node is upgraded, its configuration is automatically reapplied.
• Manual upgrades: Before each server upgrade, this process requires that you back up the node-specific configuration information from the server onto an external device. After the server is upgraded, you will need to copy and restore the node-specific configuration information manually.

Upgrading the StoreAll software to the 5.6 release 163

The upgrade takes approximately 45 minutes for 9320 systems with a standard configuration.

NOTE: If you are upgrading from a StoreAll 5.x release, any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.

Automatic upgrades
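The df check described above (a quick-restored system shows separate file systems mounted on /local, /stage, and /alt) can be sketched as a small helper. This is a hypothetical illustration, not an HP-provided tool; the function name and the sample df output are assumptions.

```python
# Hypothetical sketch: inspect df output for the /local, /stage, and /alt
# mount points that indicate a system installed via the QR procedure.
def was_quick_restored(df_output: str) -> bool:
    # The last whitespace-separated field of each df row is the mount point;
    # skip the header line.
    mounts = {line.split()[-1]
              for line in df_output.strip().splitlines()[1:] if line.split()}
    return {"/local", "/stage", "/alt"}.issubset(mounts)

sample = """Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10000 5000 5000 50% /
/dev/sda2 10000 5000 5000 50% /local
/dev/sda3 10000 5000 5000 50% /stage
/dev/sda5 10000 5000 5000 50% /alt
"""
print(was_quick_restored(sample))  # True -> automated upgrade is available
```

In practice the input would come from running df on the node, for example `subprocess.check_output(["df", "-P"], text=True)`; if the result is False, use the manual upgrade process.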
output; the HA column should display off:
<ibrixhome>/bin/ibrix_server -l
3. Stop the NFS and SMB services on all file serving nodes to prevent NFS and SMB clients from timing out:
<ibrixhome>/bin/ibrix_server -s -t cifs -c stop
<ibrixhome>/bin/ibrix_server -s -t nfs -c stop
Verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill -9 to kill any likewise services that are still running.
4. From the management console, unmount all StoreAll file systems:
<ibrixhome>/bin/ibrix_umount -f <fsname>

Upgrading the management console
Complete the following steps on the management console:
1. Force a backup of the configuration:
<ibrixhome>/bin/ibrix_fm -B
The output is stored at /usr/local/ibrix/tmp/fmbackup.zip. Be sure to save this file in a location outside of the cluster.
2. Move the <installer_dir>/ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.
3. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
4. Change to the installer directory if necess
outros lixos domiciliares. Ao invés disso, deve proteger a saúde humana e o meio ambiente levando o seu equipamento para descarte em um ponto de recolha destinado à reciclagem de resíduos de equipamentos eléctricos e electrónicos. Para obter mais informações, contacte o seu serviço de tratamento de resíduos domésticos.

204 Regulatory compliance notices

Romanian recycling notice
Casarea echipamentului uzat de către utilizatorii casnici din Uniunea Europeană
Acest simbol înseamnă să nu se arunce produsul cu alte deșeuri menajere. În schimb, trebuie să protejați sănătatea umană și mediul predând echipamentul uzat la un punct de colectare desemnat pentru reciclarea echipamentelor electrice și electronice uzate. Pentru informații suplimentare, vă rugăm să contactați serviciul de eliminare a deșeurilor menajere local.

Slovak recycling notice
Likvidácia vyradených zariadení používateľmi v domácnostiach v Európskej únii
Tento symbol znamená, že tento produkt sa nemá likvidovať s ostatným domovým odpadom. Namiesto toho by ste mali chrániť ľudské zdravie a životné prostredie odovzdaním odpadového zariadenia na zbernom mieste, ktoré je určené na recykláciu odpadových elektrických a elektronických zariadení. Ďalšie informácie získate od spoločnosti zaoberajúcej sa likvidáciou domového odpadu.

Spanish recycling notice
Eliminación de los equipos que ya no se utilizan en entornos do
returns an address-failed exception:
ibrix_event -u -n EMAILADDRESS

Setting up email notification of cluster events 71

Viewing email notification settings
The ibrix_event -L command provides comprehensive information about email settings and configured notifications:
ibrix_event -L
Email Notification : Enabled
SMTP Server : mail.hp.com
From : FM@hp.com
Reply To : MIS@hp.com
EVENT LEVEL TYPE DESTINATION
asyncrep.completed ALERT EMAIL admin@hp.com
asyncrep.failed ALERT EMAIL admin@hp.com

Setting up SNMP notifications
The StoreAll software supports SNMP (Simple Network Management Protocol) V1, V2, and V3.
Whereas SNMPv2 security was enforced by use of community password strings, V3 introduces the USM and VACM. Discussion of these models is beyond the scope of this document. Refer to RFCs 3414 and 3415 at http://www.ietf.org for more information. Note the following:
• In the SNMPv3 environment, every message contains a user name. The function of the USM is to authenticate users and ensure message privacy through message encryption and decryption. Both authentication and privacy, and their passwords, are optional and will use default settings where security is less of a concern.
• With users validated, the VACM determines which managed objects these users are allowed to access. The VACM includes an access scheme to control user access to managed objects, context matching to define which objects can be accessed, and MIB v
status down: Server <servername> down.
When the node is up, rescan Phone Home to add the node to the configuration. See "Updating the Phone Home configuration" (page 45).

Fusion Manager IP is discovered as "Unknown"
Verify that the read community string entered in HP SIM matches the Phone Home read community string.
Also run snmpwalk on the VIF IP and verify the information:
# snmpwalk -v 1 -c <read community string> <FM VIF IP> .1.3.6.1.4.1.18997

Discovered device is reported as unknown on CMS
Run the following commands on the file serving node to determine whether the Insight Remote Support services are running:
# service snmpd status
# service hpsmhd status
# service hp-snmp-agents status
If the services are not running, start them:
# service snmpd start
# service hpsmhd start
# service hp-snmp-agents start

Alerts are not reaching the CMS
If nodes are configured and the system is discovered properly but alerts are not reaching the CMS, verify that a trapif entry exists in the cma.conf configuration file on the file serving nodes.

Device Entitlement tab does not show GREEN
If the Entitlement tab does not show GREEN, verify the Customer Entered serial number and part number of the device.

SIM Discovery
On SIM discovery, use the option Discover a Group of Systems for any device discovery.

Configuring HP Insight Remote Support on StoreAll systems 47

4 Configuring virtual int
software while file systems remain mounted. Before upgrading a file serving node, you will need to fail the node over to its backup node, allowing file system access to continue. This procedure cannot be used for major upgrades, but is appropriate for minor and maintenance upgrades.
• Offline upgrades: This procedure requires that you first unmount file systems and stop services. (Each file serving node may need to be rebooted if NFS or SMB causes the unmount operation to fail.) You can then perform the upgrade. Clients will experience a short interruption to file system access while each file serving node is upgraded.

Standard upgrade for clusters with a dedicated Management Server machine or blade
Use these procedures if your cluster has a dedicated Management Server machine or blade hosting the management console software. The StoreAll software 5.4.x to 5.5 upgrade can be performed either online or offline. Future releases may require offline upgrades.

NOTE: Be sure to read all instructions before starting the upgrade procedure.

Standard online upgrade
The management console must be upgraded first. You can then upgrade file serving nodes and StoreAll clients in any order.

Upgrading the management console
Complete the following steps on the Management Server machine or blade:
1. Disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
2. Verify that automated failover is off:
<
Network Best Practices Guide.

Configuring link state monitoring for iSCSI network interfaces
Do not configure link state monitoring for user network interfaces or VIFs that will be used for SMB or NFS. Link state monitoring is supported only for use with iSCSI storage network interfaces, such as those provided with 9300 Gateway systems.
To configure link state monitoring on a 9300 system, use the following command:
ibrix_nic -N -h HOST -A IFNAME

Configuring VLAN tagging 51

To determine whether link state monitoring is enabled on an iSCSI interface, run the following command:
ibrix_nic -l
Next, check the LINKMON column in the output. The value yes means that link state monitoring is enabled; no means that it is not enabled.

52 Configuring virtual interfaces for client access

5 Configuring failover
This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs.

Agile management consoles
The agile Fusion Manager maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. The agile Fusion Manager is installed on all file serving nodes when the cluster is installed. The Fusion Manager is active on one node, and is passive on the other nodes. This is called an agile Fusion Manager configuration.

Agile Fusion Manager modes
An agile Fusion Manager can be in one of the following modes:
• active
System component diagrams 187

Cabling diagrams

Cluster network cabling diagram
[Diagram: Cluster Network Switch 1 and Cluster Network Switch 2 joined by a 10Gb CX4 interconnect, connected to the Cluster Management Server, two File Serving Nodes, and two MSA2312sa controller enclosures; a second MSA controller pair is present on all 450GB SAS and all 96-disk models]

188 System component and cabling diagrams for 9320 systems

SATA option cabling
[Diagram 17562]
Line descriptions:
• SAS I/O path: Controller A
• SAS I/O path: Controller B

Cabling diagrams 189

SAS option cabling
[Diagram 17563]
Line descriptions:
• SAS I/O path: Array 1, Controller A
• SAS I/O path: Array 1, Controller B
• SAS I/O path: Array 2, Controller A
• SAS I/O path: Array 2, Controller B

Drive enclosure cabling
[Diagram 17561]
Item descriptions:
1. SAS controller in 9300c controller enclosure
2. I/O modules in four 9300cx drive enclosures

Cabling diagrams 191

D Warnings and precautions

Electrostatic discharge information
To prevent damage to the system, be aware of the precautions you need to follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor could damage system boards or other
…Guide.

The segment evacuation process fails if a segment contains chunk files bigger than 3.64 TB; you need to move these chunk files manually. The evacuation process generates a log reporting the chunk files on the segment that were not moved. The log file is saved in the management console log directory (the default is /usr/local/ibrix/log) and is named Rebalance_<jobID>_<FS-ID>.info, for example, Rebalance_29_ibfs1.info.

Run the inum2name command to identify the symbolic name of the chunk file:

# inum2name --fsname ibfs 500000017
ibfs:sliced_dir/file3.bin

After obtaining the name of the file, use a command such as cp to move the file manually. Then run the segment evacuation process again.

The analyzer log lists the chunks that were left on segments. Following is an example of the log:

2012-03-13 11:57:35:0332834 <INFO> 1090169152 segment 3 not migrated: chunks 462
2012-03-13 11:57:35:0332855 <INFO> 1090169152 segment 3 not migrated: replicas 0
2012-03-13 11:57:35:0332864 <INFO> 1090169152 segment 3 not migrated: files 0
2012-03-13 11:57:35:0332870 <INFO> 1090169152 segment 3 not migrated: directories 0
2012-03-13 11:57:35:0332875 <INFO> 1090169152 segment 3 not migrated root: D
2012-03-13 11:57:35:0332880 <INFO> 1090169152 segment 3 orphan inodes 0
2012-03-13 11:57:35:0332886 <INFO> 1090169152 segment 3 chunk: inode 3099CC002-8E2124C4, poid 3099CC002-8E21…
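A quick way to find which segments still hold chunk files is to filter the analyzer log for non-zero chunk counts. The log format below follows the example in this section; the awk field positions are assumptions based on that sample line layout.

```shell
# Sketch: scan a Rebalance_<jobID>_<FS-ID>.info analyzer log and report
# segments that still hold chunk files (chunks > 0). Field positions are
# derived from the sample log lines shown in this guide.
cat > /tmp/rebalance_sample.log <<'EOF'
2012-03-13 11:57:35:0332834 <INFO> 1090169152 segment 3 not migrated: chunks 462
2012-03-13 11:57:35:0332855 <INFO> 1090169152 segment 3 not migrated: replicas 0
EOF

leftover_segments() {
  # print "segment N chunks M" for every line reporting unmigrated chunks
  awk '/not migrated: chunks/ { if ($NF > 0) print "segment", $(NF-4), "chunks", $NF }' "$1"
}

leftover_segments /tmp/rebalance_sample.log   # prints: segment 3 chunks 462
```

Each reported chunk inode can then be fed to inum2name to obtain the file name to move.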
…functions on the storage system.

• Ensure that the storage system configuration is not being reconfigured during a firmware update.

• To avoid spurious failure indications and potential system crashes, you must suspend hardware monitoring during the update process. Execute hpspmonControl --pause on both servers in the couplet before performing the flash operation, and then execute hpspmonControl --resume on both servers in the couplet after the flash operation is complete.

• Do not cycle power or restart devices during a firmware update.

To upgrade the firmware for components:

1. Run the /opt/hp/platform/bin/hpsp_fmt -fr command to verify that the firmware on this node and subsequent nodes in this cluster is correct and up to date. This command should be performed before placing the cluster back into service.

The following figure shows an example of the firmware recommendation output and corrective component upgrade flash.

IMPORTANT: For some components in StoreAll 9320/9300, a firmware update requires prerequisite measures that must be completed before the flash operation. The firmware update command hpsp_fmt --flashrec -c <component name> lists the required prerequisites and asks users to confirm that they have taken the necessary steps for the firmware update.

IMPORTANT: Upgrade the firmware in the following order:
1. Server
2. Chassis
3. Storage

2. Do…
…uning Options from the Summary panel.

The General Tunings dialog box specifies the communications protocol (TCP or UDP) and the number of admin and server threads.

[Screenshot: General Tunings dialog. Note on the dialog: All tunings (general, IAD, and module) are considered ADVANCED options and should only be modified by expert users for specific scenarios. Typically, tunings are applied by qualified HP representatives during installation. Fields: Protocol (TCP), Number of Admin Threads (10), Number of Server Threads (10).]

The IAD Tunings dialog box configures the StoreAll administrative daemon.

[Screenshot: IAD Tunings dialog, advanced Ibrix Administrative Daemon tunings with defaults in parentheses: cudServiceReleaseAttempts (300; range 1-500), cudServiceReleaseRetryInterval (2; range 1-500), tsFreezeAttempts (10; range 1-500), gratArpInterval (30), hardwareStatusInterval (30; range 1-9999), hardwareStatusOn (false), healthOn (true), healthReport (10; range 4-9999), jobControlInterval (60; range 1-99), logLevel (2; range 0-4).]

Tuning file serving nodes and StoreAll clients

The Module Tunings dialog box a…
…up pairs.
2. Identify power sources for the servers in the backup pair.
3. Configure NIC monitoring.
4. Enable automated failover.

1. Configure server backup pairs

File serving nodes are configured in backup pairs, where each server in a pair is the backup for the other. This step is typically done when the cluster is installed. The following restrictions apply:

• The same file system must be mounted on both servers in the pair, and the servers must see the same storage.

• In a SAN environment, a server and its backup must use the same storage infrastructure to access a segment's physical volumes (for example, a multiported RAID array).

For a cluster using the unified network configuration, assign backup nodes for the bond0:1 interface. For example, node1 is the backup for node2, and node3 is the backup for node4.

1. Add the VIF:

ibrix_nic -a -n bond0:2 -h node1,node2,node3,node4

2. Set up a standby server for each VIF:

ibrix_nic -b -H node1/bond0:1,node2/bond0:2
ibrix_nic -b -H node2/bond0:1,node1/bond0:2
ibrix_nic -b -H node3/bond0:1,node4/bond0:2
ibrix_nic -b -H node4/bond0:1,node3/bond0:2

2. Identify power sources

To implement automated failover, perform a forced manual failover, or remotely power a file serving node up or down, you must set up programmable power sources for the nodes and their backups. Using programmable power sources prevents a "split-brain scenario" between a failing file serving node and its backup, allow…
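For larger clusters, the per-pair standby commands can be generated mechanically. The sketch below is a dry run: it only prints the ibrix_nic invocations rather than executing them, and the bond0:1/bond0:2 interface names and command syntax mirror the examples in this section — verify them against your own configuration before running anything.

```shell
# Sketch (dry run): emit the ibrix_nic standby-server commands for a list
# of backup pairs, two commands per pair (each node backs up the other).
gen_standby_cmds() {
  # args: node names taken two at a time as backup pairs
  while [ "$#" -ge 2 ]; do
    echo "ibrix_nic -b -H $1/bond0:1,$2/bond0:2"
    echo "ibrix_nic -b -H $2/bond0:1,$1/bond0:2"
    shift 2
  done
}

gen_standby_cmds node1 node2 node3 node4
```

Reviewing the printed commands before pasting them into a shell is a simple guard against mispaired nodes.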
…used batteries.

[Translated from Dutch:] Batteries, battery packs, and accumulators must not be disposed of with normal household waste. To have them recycled or properly disposed of, use the public collection system for small chemical waste or return them to HP or an authorized HP Business or Service Partner.

Contact an authorized supplier or a Business or Service Partner for more information about replacing or properly disposing of batteries.

Battery notice (Avis relatif aux piles)

[Translated from French:] WARNING: This device may contain batteries:

• Do not attempt to recharge the batteries after removing them.
• Avoid contact with water and do not expose them to temperatures above 60°C.
• Do not attempt to disassemble, crush, or puncture the batteries.
• Do not short-circuit the battery terminals or dispose of them in fire or water.
• Replace batteries only with HP spare parts designated for this product.

Batteries, battery packs, and accumulators must not be disposed of with household waste. To have them recycled or disposed of, use the public collection systems or return them to HP, your authorized HP Partner, or authorized agents.

Contact an authorized reseller or authorized service provider to find out how…
…ure can be used to back up and recover entire StoreAll software file systems or portions of a file system. You can use any supported NDMP backup application to perform the backup and recovery operations. (In NDMP terminology, the backup application is referred to as a Data Management Application, or DMA.) The DMA is run on a management station separate from the cluster and communicates with the cluster's file serving nodes over a configurable socket port.

The NDMP backup feature supports the following:

• NDMP protocol versions 3 and 4
• Two-way NDMP operations
• Three-way NDMP operations between two network storage systems

Each file serving node functions as an NDMP Server and runs the NDMP Server daemon (ndmpd) process. When you start a backup or restore operation on the DMA, you can specify the node and tape device to be used for the operation.

Following are considerations for configuring and using the NDMP feature:

• When configuring your system for NDMP operations, attach your tape devices to a SAN and then verify that the file serving nodes to be used for backup/restore operations can see the appropriate devices.

• When performing backup operations, take snapshots of your file systems and then back up the snapshots.

• When directory tree quotas are enabled, an NDMP restore to the original location fails if the hard quota limit is exceeded. The NDMP restore operation first creates a temporary file and then restores a file to the tempora…
…usion Manager.

Installing the Statistics tool

The Statistics tool is installed automatically when the StoreAll software is installed on the file serving nodes. To install or reinstall the Statistics tool manually, use the following command:

ibrixinit -tt

Note the following:

• Installation logs are located at /tmp/stats-install.log.

• By default, installing the Statistics tool does not start the Statistics tool processes. See "Controlling Statistics tool processes" (page 105) for information about starting and stopping the processes.

• If the Fusion Manager daemon is not running during the installation, Statstool is installed as passive. When Fusion Manager acquires an active/passive state, the Statstool management console automatically changes according to the state of Fusion Manager.

Enabling collection and synchronization

To enable collection and synchronization, configure synchronization between nodes. Run the following command on the active Fusion Manager node, specifying the node names of all file serving nodes:

/usr/local/ibrix/stats/bin/stmanage setrsync <node1_name> ... <nodeN_name>

For example:

stmanage setrsync ibr-3-31-1 ibr-3-31-2 ibr-3-31-3

NOTE: Do not run the command on individual nodes. All nodes must be specified in the same command and can be specified in any order. Be sure to use node names, not IP addresses.

To test the rsync mechanism, see "Testing access" (page 105).
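Because the NOTE above requires node names rather than IP addresses, a small guard can catch a common mistake before the command is run. This is an illustrative sketch: the stmanage path matches the text, but the helper function and its IPv4 heuristic are our own additions, not part of the product.

```shell
# Sketch: refuse to build the setrsync command line if any argument looks
# like an IPv4 address (only digits and dots). The validation rule is a
# hypothetical convenience, not an HP-supplied check.
build_setrsync() {
  for node in "$@"; do
    case "$node" in
      *[!0-9.]*) ;;   # contains a letter/hyphen: plausibly a hostname, OK
      *) echo "error: '$node' looks like an IP address" >&2; return 1 ;;
    esac
  done
  echo "/usr/local/ibrix/stats/bin/stmanage setrsync $*"
}

build_setrsync ibr-3-31-1 ibr-3-31-2 ibr-3-31-3
```

The printed command line can then be reviewed and executed on the active Fusion Manager node.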
…ve Fusion Manager node, disable automated failover on all file serving nodes:

<ibrixhome>/bin/ibrix_server -m -U

Run the following command to verify that automated failover is off. In the output, the HA column should display off.

<ibrixhome>/bin/ibrix_server -l

Stop the SMB, NFS, and NDMP services on all nodes. Run the following commands on the node hosting the active Fusion Manager:

ibrix_server -s -t cifs -c stop
ibrix_server -s -t nfs -c stop
ibrix_server -s -t ndmp -c stop

If you are using SMB, verify that all likewise services are down on all file serving nodes:

ps -ef | grep likewise

Use kill -9 to stop any likewise services that are still running.

If you are using NFS, verify that all NFS processes are stopped:

ps -ef | grep nfs

If necessary, use the following command to stop NFS services:

/etc/init.d/nfs stop

Use kill -9 to stop any NFS processes that are still running.

If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:

lsof <mountpoint>

Use kill -9 to stop any processes that still have open file handles on the file systems.

Unmount each file system manually:

ibrix_umount -f FSNAME

Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See "File system unmount issues" (page 23).

Performing the upgrade

This upgrade method is supported only fo…
295. ver is  complete     Down  Server is powered down or inaccessible to the Fusion Manager  and no standby server is providing  access to the server   s segments           The STATE field also reports the status of monitored NICs and HBAs  If you have multiple HBAs  and NICs and some of them are down  the state is reported as HBAsDown or NicsDown     Monitoring cluster events    StoreAll software events are assigned to one of the following categories  based on the level of  severity     e Alerts  A disruptive event that can result in loss of access to file system data  For example  a  segment is unavailable or a server is unreachable     e Warnings  A potentially disruptive condition where file system access is not lost  but if the    situation is not addressed  it can escalate to an alert condition  Some examples are reaching  a very high server CPU utilization or nearing a quota limit     e   Information  An event that changes the cluster  such as creating a segment or mounting a file  system  but occurs under normal or nonthreatening conditions     Events are written to an events table in the configuration database as they are generated  To  maintain the size of the file  HP recommends that you periodically remove the oldest events  See     Removing events from the events database table     page 95      n    You can set up event notifications through email  see    Setting up email notification of cluster events   page 70   or SNMP traps  see    Setting up SNMP notifications  
296. w process status on a StoreAlll client  use the following command      etc init d ibrix client  start   stop   restart   status     Tuning file serving nodes and StoreAll clients    Typically  HP Support sets the tuning parameters on the file serving nodes during the cluster  installation and changes should be needed only for special situations        A CAUTION  The default values for the host tuning parameters are suitable for most cluster  environments  Because changing parameter values can alter file system performance  HP  recommends that you exercise caution before implementing any changes  or do so only under the  guidance of HP technical support        Host tuning changes are executed immediately for file serving nodes  For StoreAll clients  a tuning  intention is stored in the Fusion Manager  When StoreAll software services start on a client  the  client queries the Fusion Manager for the host tunings that it should use and then implements them   If StoreAll software services are already running on a client  you can force the client to query the  Fusion Manager by executing ibrix client or ibrix_lwhost   a on the client  or by  rebooting the client     You can locally override host tunings that have been set on StoreAll Linux clients by executing the  ibrix_lwhost command     Tuning file serving nodes on the GUI    The Modify Server s  Wizard can be used to tune one or more servers in the cluster  To open the  wizard  select Servers from the Navigator and then click T
297. ws the current  configuration     Configuring HP Insight Remote Support on StoreAll systems 37       Updated Mar  7  2012  9 56 19 AM PST    OAG  o 1        Event Status  24 hours            Filesystems      Snapshots  E  Servers    File Shares       Summary       Getting Started          Name  Cluster Name    Fusion Manager Primary IP Address    Value  ib69s1       Cluster Configuration                    E NDMP Backup Ls        i  Remote Clusters             Phone Home Setup    72 Email Name Value     Events a S  LEE e o  Phone Home    an Central Management Server IP  ZS Events Read Community String public   ll  6 File Sharing Authentication System Name  e Active Directory   System Location  S   Share Administrators     LDAP System Contact  G   LDAP ID Mapping Software Entitlement ID   amp  Local Users Country Code  E  Local Groups       Rescan             Click Enable to configure the settings on the Phone Home Settings dialog box  Skip the Software  Entitlement ID field  it is not currently used       Central Management Server IP     Read Community String   Write Community String   System Name    System Location   System Contact   Software Entitlement ID   Choose Country     Ch Required Value          The time required to enable Phone Home depends on the number of devices in the cluster  with  larger clusters requiring more time     To configure Phone Home settings from the CLI  use the following command     38 Getting started    ibrix pbonebome  c  i  lt IP Address of th
…ibrix_event -m on|off -s SMTP -f FROM -r REPLY-TO -t SUBJECT

The server must be able to receive and send email and must recognize the From and Reply-to addresses. Be sure to specify valid email addresses, especially for the SMTP server. If an address is not valid, the SMTP server will reject the email.

The following command configures email settings to use the mail.hp.com SMTP server and turns on notifications:

ibrix_event -m on -s mail.hp.com -f FM@hp.com -r MIS@hp.com -t Cluster1_Notification

NOTE: The state of the email notification process has no effect on the display of cluster events in the GUI.

Dissociating events and email addresses

To remove the association between events and email addresses, use the following command:

ibrix_event -d [-e ALERT|WARN|INFO|EVENTLIST] -m EMAILLIST

For example, to dissociate event notifications for admin@hp.com:

ibrix_event -d -m admin@hp.com

To turn off all Alert notifications for admin@hp.com:

ibrix_event -d -e ALERT -m admin@hp.com

To turn off the server_registered and filesystem_created notifications for admin1@hp.com and admin2@hp.com:

ibrix_event -d -e server_registered,filesystem_created -m admin1@hp.com,admin2@hp.com

Testing email addresses

To test an email address with a test message, notifications must be turned on. If the address is valid, the command signals success and sends an email containing the settings to the recipient. If the address is not valid, the command re…
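Since an invalid address causes the SMTP server to reject mail, it can help to sanity-check the comma-separated EMAILLIST before passing it to ibrix_event. The list format follows the examples in this section; the validation helper and its simple "something@something.something" rule are illustrative additions, not part of the product.

```shell
# Sketch: minimal sanity check of a comma-separated address list before
# passing it to "ibrix_event -d -m EMAILLIST". The pattern check is a
# hypothetical convenience, far looser than full RFC address syntax.
valid_emaillist() {
  # $1 = comma-separated addresses, e.g. admin1@hp.com,admin2@hp.com
  echo "$1" | tr ',' '\n' | while read -r addr; do
    case "$addr" in
      ?*@?*.?*) ;;                          # looks like user@host.domain
      *) echo "bad address: $addr"; exit 1 ;;
    esac
  done
}

valid_emaillist "admin1@hp.com,admin2@hp.com" && echo "list OK"
```

A non-zero return flags the first malformed entry so you can fix it before reconfiguring notifications.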
299. x_fm  m passive  A  9  Re enable Segment Server Failover on each node     ibrix_server  m  h  lt node gt     If your cluster includes G6 servers  check the iLO2 firmware version  This issue does not  affect G7 servers  The firmware must be at version 2 05 for HA to function properly  If your  servers have an earlier version of the iLO2 firmware  run the CP014256 scexe script as  described in the following steps     1  Make sure the  local ibrix folder is empty prior to copying the contents of pkgfull   When you upgrade the StoreAll software later in this chapter  this folder must contain  only   rpm packages listed in the build manifest for the upgrade or the upgrade will  fail     2  Mount the ISO image and copy the entire directory structure to the  local ibrix  directory     The following is an example of the mount command     mount  o loop   local pkg ibrix pkgfull FS 6 3 72 IAS 6 3 72 x86 64 signed iso   mnt  lt storeall gt     In this example   lt storeal1 gt can have any name   The following is an example of the copy command   cp  R  mnt storeall    local ibrix  3  Execute the firmware binary at the following location    local ibrix distrib firmware CP014256 scexe    Step  completed        Make sure StoreAll is running the latest firmware  For information on how to find the version  of firmware that StoreAll is running  see the Administrator Guide for your release        If you are using 1GBe with mode 6  consider switching to mode 4  See the HP StoreAll  Storage Best 
…xpand the tarball in /root; the installer is in /root/ibrix.

Change to the installer directory if necessary and run the upgrade:

./ibrixupgrade -f

The installer upgrades both the management console software and the file serving node software on the node.

Verify the status of the management console:

/etc/init.d/ibrix_fusionmanager status

The status command confirms whether the correct services are running. Output will be similar to the following:

Fusion Manager Daemon (pid 18748) running...

Also run the following command, which should report that the console is passive:

<ibrixhome>/bin/ibrix_fm -i

Check /usr/local/ibrix/log/fusionserver.log for errors.

If the upgrade was successful, fail back the node. Run the following command on the node with the active agile management console:

<ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME

24. Verify that the agile management console software and the file serving node software are now upgraded on the two nodes hosting the agile management console:

<ibrixhome>/bin/ibrix_version -l -S

Following is some sample output:

Fusion Manager version: 5.5.XXX

HOST NAME  FILE SYSTEM       IAD/IAS  IAD/FS   OS         KERNEL VERSION    ARCH
ib50-86    5.5.205(9000-5.5) 5.5.XXX  5.5.XXX  GNU/Linux  2.6.18-128.el5    x86_64
ib50-87    5.5.205(9000-5.5) 5.5.XXX  5.5.XXX  GNU/Linux  2.6.18-128.el5    x86_64

You can now upgrade any remaining file serving nodes.

Upgrading remaining file serving nod…
…y switching: Turn on HBA monitoring for all ports that you want to monitor for failure.

• For dual-port HBAs with built-in standby switching and single-port HBAs that have been set up as standby pairs in a software operation: Identify the standby pairs of ports to the configuration database and then turn on HBA monitoring for all paired ports. If monitoring is turned on for just one port in a standby pair and that port fails, the Fusion Manager will fail over the server even though the HBA has automatically switched traffic to the surviving port. When monitoring is turned on for both ports, the Fusion Manager initiates failover only when both ports in a pair fail.

When both HBA monitoring and automated failover for file serving nodes are configured, the Fusion Manager will fail over a server in two situations:

• Both ports in a monitored set of standby-paired ports fail. Because all standby pairs were identified in the configuration database, the Fusion Manager knows that failover is required only when both ports fail.

• A monitored single-port HBA fails. Because no standby has been identified for the failed port, the Fusion Manager knows to initiate failover immediately.

Discovering HBAs

You must discover HBAs before you set up HBA monitoring, when you replace an HBA, and when you add a new HBA to the cluster. Discovery adds the WWPN for the port to the configuration database.

ibrix_hba -a [-h HOSTLIST]

Adding standby-paired HBA ports…
302. y to  operate the equipment     Cables    When provided  connections to this device must be made with shielded cables with metallic RFI EMI  connector hoods in order to maintain compliance with FCC Rules and Regulations     Canadian notice  Avis Canadien     Class A equipment    This Class A digital apparatus meets all requirements of the Canadian Interference Causing  Equipment Regulations     Cet appareil num  rique de la class A respecte toutes les exigences du R  glement sur le mat  riel  brouilleur du Canada     Class B equipment    This Class B digital apparatus meets all requirements of the Canadian Interference Causing  Equipment Regulations     Cet appareil num  rique de la class B respecte toutes les exigences du R  glement sur le mat  riel  brouilleur du Canada     European Union notice  This product complies with the following EU directives   e   Low Voltage Directive 2006 95 EC  e EMC Directive 2004 108 EC    Compliance with these directives implies conformity to applicable harmonized European standards   European Norms  which are listed on the EU Declaration of Conformity issued by Hewlett Packard  for this product or product family     This compliance is indicated by the following conformity marking placed on the product     This marking is valid for non Telecom products and EU    harmonized Telecom products  e g   Bluetooth         Certificates can be obtained from http   www hp com go certificates   Hewlett Packard GmbH  HQ TRE  Herrenberger Strasse 140  7103
…ystems on the cluster nodes:

ibrix_umount -f <fs_name>

To unmount file systems from the GUI, select Filesystems > unmount.

8. Verify that all file systems are unmounted:

ibrix_fs -l

If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown.

9. Shut down all StoreAll Server services and verify the operation:

# pdsh -a "/etc/init.d/ibrix_server stop" | dshbak
# pdsh -a "/etc/init.d/ibrix_server status" | dshbak

10. Wait for the Fusion Manager to report that all file serving nodes are down:

ibrix_server -l

11. Shut down all nodes other than the node hosting the active Fusion Manager:

pdsh -w HOSTNAME shutdown -t now now

For example:

# pdsh -w x850s3 shutdown -t now now
# pdsh -w x850s2 shutdown -t now now

12. Shut down the node hosting the active agile Fusion Manager:

shutdown -t now now

13. Use ping to verify that the nodes are down. For example:

# ping x850s2
PING x850s2.13domain.13lab.com (12.12.80.102) 56(84) bytes of data.
From x850s1.13domain.13lab.com (12.12.82.101) icmp_seq=2 Destination Host Unreachable

If you are unable to shut down a node cleanly, use the following command to power the node off using the iLO interface:

# ibrix_server -P off -h HOSTNAME

14. Shut down the Fusion Manager services and verify:

# /etc/init.d/ibrix_fusionmanager stop
# /etc/init.d/ibrix_fusionmanager status

15. Shut down the…
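Step 11 above must skip the node hosting the active Fusion Manager. A dry-run generator makes that exclusion explicit and lets you review the exact pdsh commands before running them. The node names are illustrative, and the command syntax mirrors the examples in this procedure.

```shell
# Sketch (dry run): print the per-node pdsh shutdown commands, skipping
# the node that hosts the active Fusion Manager (shut that one down last).
gen_shutdown_cmds() {
  active_fm="$1"; shift          # first arg: active Fusion Manager node
  for node in "$@"; do           # remaining args: all cluster nodes
    [ "$node" = "$active_fm" ] && continue
    echo "pdsh -w $node shutdown -t now now"
  done
}

gen_shutdown_cmds x850s1 x850s1 x850s2 x850s3
```

After the printed commands have been run and verified with ping, the active Fusion Manager node itself can be shut down as in step 12.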
    