HP LeftHand Storage Solutions
Contents
Figure 76. Selecting the SAN/iQ software network interface

4. Select an IP address from the list of Manager IP Addresses.
5. Click Communication Tasks and select Select SAN/iQ Address.
6. Select an Ethernet port for this address.
7. Click OK. This storage node now connects to the IP address through the Ethernet port you selected.

Updating the list of manager IP addresses

Update the list of manager IP addresses to ensure that a manager running on this storage node is communicating correctly with all managers in the management group.

Requirements

Each time you update the list of managers, you must reconfigure application servers that use the management group to which this storage node belongs. Update the list only if you have reason to believe that there is a problem with the communication between the manager on this storage node and the other managers in the group.

1. In the navigation window, select a storage node and log in.
2. Open the tree and select the TCP/IP Network category.
3. Select the Communication tab.

Figure 77. Viewing the list of manager IP addresses
Figure 114. Deleting multiple snapshots from the Volumes and Snapshots node

Scripting snapshots

Application-based scripting is available for taking snapshots. Using application-based scripts allows automatic snapshots of a volume. For detailed information, see Chapter 16 on page 287, and see the CLIQ User Manual (found in the Documentation directory under the CMC Program Files) for information about the SAN/iQ command-line interface.

Rolling back a volume to a snapshot or clone point

Rolling back a volume to a snapshot or a clone point replaces the original volume with a read/write copy of the selected snapshot. Rolling back a volume to a snapshot deletes any newer snapshots that may be present, so you have some options to preserve the data in those snapshots. Instead of rolling back, use a SmartClone volume to create a new volume from the snapshot.
Figure 149. Example comparing two clusters

Load comparison of two volumes example

This example shows the total throughput for a cluster and the total throughput of each volume in that cluster. You can see that the Log volume generates most of the cluster's throughput.

Figure 150. Example comparing two volumes

Accessing and understanding the Performance Monitor window

The Performance Monitor is available as a tree node below each cluster. To display the Performance Monitor window:
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster.
Planning for SAN improvements ... 300
Network utilization to determine if NIC bonding could improve performance example ... 301
Load comparison of two clusters example ... 301
Load comparison of two volumes example ... 302
Accessing and understanding the Performance Monitor window ... 303
Performance Monitor toolbar ... 304
Performance monitoring graph ... 305
Performance monitoring table ... 306
Understanding the performance statistics ... 306
Monitoring and comparing multiple clusters ... 308
Performance monitoring and analysis concepts ... 309
Workload ... 309
Access type ... 309
Access size ...
1. Enter the desired number in the Quantity field and click Update Table.
Figure 128. Creating multiple SmartClone volumes

1. List of SmartClone volumes created after clicking Update Table.
Figure 129. Creating multiple SmartClone volumes

8. If you want to modify any individual characteristic, do it in the list before you click OK to create the SmartClone volumes. For example, you might want to change the assigned server of some of the SmartClone volumes; in the list, you can change individual volumes' server assignments.
9. Click OK to create the volumes. The new SmartClone volumes appear in the navigation window under the volume folder.
2. In the CMC, add the server connection to the management group and configure iSCSI access for that server. See Adding server connections to management groups on page 290.
3. In the CMC, assign volumes to the server. See Assigning server connections access to volumes on page 292.
4. In the iSCSI initiator on the server, log on to the volume. See Completing the iSCSI Initiator and disk setup on page 294. (A Linux command-line example appears at the end of this section.)
5. On the server, configure the volume using disk management tools. See Completing the iSCSI Initiator and disk setup on page 294.

Former terminology (release 7.0 and earlier)

Before release 8.0, you controlled server access to volumes using authentication groups and volume lists. Starting with release 8.0, you work with server and volume connections. With release 8.0 and later, you add each server to a management group and assign server connections to volumes or snapshots. You can make the assignment from either the volume or the server.
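As referenced in step 4 above, a Linux server running the open-iscsi initiator would typically discover and log on to its assigned volume with commands along these lines; the portal address and target IQN shown here are hypothetical placeholders:

    # Discover iSCSI targets presented at the cluster virtual IP (placeholder address)
    iscsiadm -m discovery -t sendtargets -p 10.0.61.20:3260
    # Log on to the discovered target (the IQN shown is illustrative)
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:exchange:37:exstore1 --login

On Windows, the same step is performed with the Microsoft iSCSI Initiator, as described on page 294.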
Clone point ... 272
Shared snapshot ... 274
Creating SmartClone volumes ... 276
To create a SmartClone volume ... 276
Viewing SmartClone volumes ... 279
Map view ... 279
Using views ... 280
Manipulating the Map View ... 280
Viewing clone points, volumes, and snapshots ... 282
Viewing utilization of clone points and SmartClone volumes ... 282
Editing SmartClone volumes ... 283
To edit the SmartClone volumes ... 284
Deleting SmartClone volumes ... 284
Deleting the clone point ... 285
Deleting multiple SmartClone volumes ...
NOTE: Only NICs that are in the Active or Passive (Ready) state can be designated as the communication interface. You cannot make a disabled NIC the communication interface.

When you initially set up a storage node using the Configuration Interface, the first interface that you configure becomes the interface used for SAN/iQ software communication.

To select a different communication interface:
1. In the navigation window, select the storage node and log in.
2. Open the tree and select the TCP/IP Network category.
3. Select the Communication tab to bring that window to the front.
1. Log in to the storage node.
2. Select the TCP/IP category from the tree.
3. On the TCP/IP tab, select both NICs to bond.
4. Click TCP/IP Tasks and select New Bond.
5. Select a bond type from the drop-down list.
6. Enter an IP address for the bond.
7. Enter the subnet mask.
8. (Optional) Enter the default gateway.
9. Click OK.

NOTE: The storage node drops off the network while the bonding takes place. The changes may take 2 to 3 minutes, during which time you cannot find or access the storage node.

10. Click OK to confirm the TCP/IP changes. A message opens, prompting you to search for the bonded storage node on the network.

Figure 69. Searching for the bonded storage node on the network

11. Search for the storage node by host name, IP address, or subnet mask.

NOTE: Because it can take a few minutes for the storage node to re-initialize, the search may fail the first time. If the search fails, wait a minute or two and choose Try Again on the Network Search Failed message.
Figure 139. List of SmartClone volumes in the cluster

Use Shift+Click to select the SmartClone volumes to delete.
Right-click and select Delete Volumes and Snapshots. A confirmation message opens.
When you are certain that you have stopped applications and logged off any iSCSI sessions, check the box to confirm the deletion and click Delete. It may take a few minutes to delete the volumes and snapshots from the SAN.

16 Working with scripting

Scripting in the SAN/iQ software through release 7.0 was accomplished with the java commandline.CommandLine scripting tool. In SAN/iQ software release 8.0, the java commandline.CommandLine scripting tool is replaced by SAN/iQ CLIQ, the HP LeftHand Storage Solution command-line interface (CLI). The CLI takes advantage of the new SAN/iQ API, which provides comprehensive coverage of SAN/iQ software functionality to support scripting, integration, and automation. The java commandline.CommandLine scripting tool will be supported after the 8.0 release to allow time for converting existing scripts from java commandline.CommandLine to the new CLI syntax.

Scripting documentation
• The Command-Line Interface User Manual is available from the HP LeftHand Networks website, and it is installed with the CLI.
• A SAN/iQ 8.0 Readme is available that describes the changes from java commandline.CommandLine to the new CLI syntax. Sample scripts using the CLI are also available.
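To give a feel for the new syntax, a CLIQ call is a single command that takes key=value parameters. The following is a sketch only; the command name, parameters, address, and credentials are illustrative and should be verified against the CLI User Manual:

    # Illustrative CLIQ invocation (verify names in the Command-Line
    # Interface User Manual for your release)
    cliq getVolumeInfo volumeName=ExStore1 login=10.0.61.16 userName=admin passWord=secret

A scheduled task (cron or the Windows Task Scheduler) can run such commands to automate routine operations.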
• Magnify: creates a magnification area, like a magnifying glass, that you can move over sections of the Map View. Note that the Magnify tool toggles on and off: you must click the icon to use it, and you must click the icon again to turn it off.
• Zoom to Fit: returns the Map View to its default size and view.
• Select to Zoom: allows you to select an area of the Map View and zoom in on just that area.
• Rotate: turns the Map View 90 degrees at a time.
• Click and drag: you can left-click in the Map View window and drag the map around the window.

Figure 134. Using the Magnify tool with the Map View tree

Viewing clone points, volumes, and snapshots

The navigation window view of SmartClone volumes, clone points, and snapshots includes highlighting that shows the relationship between related items. For example, in Figure 135, the clone point is selected in the tree. The clone point supports the 7 C_class training class SmartClone volumes, so it is displayed under those 7 volumes. The highlight shows the relationship of the clone point to the original volume, plus the 7 SmartClone volumes created from the original volume.
To avoid potential connectivity and performance problems with other devices on your network, keep the frame size at the default setting. The frame size on the storage node should correspond to the frame size on Windows and Linux application servers. If you decide to change the frame size, set the same frame size on all storage nodes on the network, and set compatible frame sizes on all clients that access the storage nodes. Consult with your network administrator for recommended storage node frame sizes and the corresponding frame sizes in bytes for Windows and Linux clients in your environment.

Jumbo frames

Frame sizes that are greater than 1500 bytes are called jumbo frames. Jumbo frames must be supported and configured on each Windows or Linux client accessing the storage node, and also on each network switch between the storage node and the Windows or Linux clients. Jumbo frames can co-exist with 1500-byte frames on the same subnet only if the following conditions are met:
• Every device downstream of the storage node on the subnet must support jumbo frames.
• If you are using 802.1q virtual LANs, jumbo frames and non-jumbo frames must be segregated into separate VLANs.
A client-side check for end-to-end jumbo frame support is sketched at the end of this section.

NOTE: The frame size for a bonded logical interface must be equal to the frame size of the NICs in the bond.

Editing the NIC frame size

To edit the frame size:
1. In the navigation window, select a storage node and log in.
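As mentioned above, jumbo frame support can be verified end to end from a Linux client with standard tools before you change the storage node setting; the interface name and target address here are placeholders:

    # Set a 9000-byte MTU on the client NIC (placeholder interface name)
    ip link set eth0 mtu 9000
    # Send a non-fragmentable 9000-byte frame to the storage node:
    # 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000
    ping -M do -s 8972 10.0.24.56

If the ping reports that the message is too long, a device in the path is not passing jumbo frames.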
Configuring the Failover Manager using the VI Client

After you have installed the Failover Manager files on the ESX Server, you configure the Failover Manager using the VI Client.

Add the Failover Manager to inventory
1. In the Inventory panel, select the VMware ESX Server.
2. In the Information panel, select the Configuration tab.
3. In the Hardware section, select Storage (SCSI, SAN, and NFS).
4. In the Storage section, right-click the datastore icon and select Browse Datastore. The Datastore Browser opens.
5. Right-click the FailoverMgr.vmx file and select Add to Inventory.
6. In the Add to Inventory wizard, enter a name for the new Failover Manager and click Next.
7. Select the inventory location in which to place the Failover Manager in the Add to Inventory wizard.
8. Verify the information and click Finish.
9. Close the Datastore Browser.

Select a network connection
1. In the Inventory panel, select the Failover Manager.
2. In the Information panel, select the Summary tab. In the Commands section, select Edit Settings. The Virtual Machine Properties window opens.
3. On the Hardware tab, select Network Adapter 1.
4. Select the appropriate network connection from the Network label list on the right.
5. Click OK to exit the Virtual Machine Properties window.

Power on the Failover Manager and configure the IP address and host name
1. In the Inventory panel, select the new Failover Manager and power it on.
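As a side note, the registration and power-on can also be scripted from the ESX 3.x service console. This is a hedged sketch; the datastore path is hypothetical, and the exact tooling depends on your ESX version:

    # Register the Failover Manager virtual machine (path is illustrative)
    vmware-cmd -s register /vmfs/volumes/datastore1/FOM/FailoverMgr.vmx
    # Power it on
    vmware-cmd /vmfs/volumes/datastore1/FOM/FailoverMgr.vmx start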
Troubleshooting the Failover Manager on VMware Server or Player

Two issues may occur when running a Failover Manager:
• The Failover Manager may not automatically restart upon a reboot. The default Startup/Shutdown Options settings may have accidentally changed.
• You cannot find the Failover Manager in the CMC, nor ping it on the network. The default configuration may bridge to an incorrect host adapter.
Use the following instructions to correct the VMware Server settings for either of these issues.

Fix startup/shutdown options
1. Open the VMware Server Console.
2. Select the Failover Manager virtual machine in the Inventory list.
3. Power off the Failover Manager.
4. From the menu, select VM > Settings, or right-click the Failover Manager and select Settings. The Virtual Machine Settings window opens.
5. Select the Options tab and select Startup/Shutdown.
6. On the right side, in the Virtual machine account section, select an account for the virtual machine, such as Local system account.
7. On the right side, in the Startup/Shutdown Options section, select the following options:
   • On host startup: Power on virtual machine
   • On host shutdown: Power off virtual machine
8. When you are done, click OK to save your changes.
9. Power on the Failover Manager.

Fix network settings to find Failover Manager
1. Determine which interface on the Windows host server is configured ...
When you log in to one storage node in a management group, you are logged in to all storage nodes in that group.

Choosing which storage node to log in to

You can control which of the storage nodes in a management group you log in to:
1. When the Log in to Node window opens, click Cancel. A message opens, asking if you want to log in to a different storage node.
2. Click OK. The Log in to Node window opens with a different storage node listed.
3. If that is the storage node you want, go ahead and log in. If you want to log in to a different storage node, repeat Step 1 and Step 2 until you see the storage node you want.

Logging out of a management group

Logging out of a management group prevents unauthorized access to that management group and the storage nodes in that group.
1. In the navigation window, select a management group to log out of.
2. Click Management Group Tasks on the Details tab and select Log Out of Management Group.

Management group maintenance tasks

When you have an established management group, you may need to perform maintenance activities on the group:
• Starting and stopping managers, on page 181
• Editing a management group, on page 182
• Backing up a management group configuration, on page 183
• Restoring a management group, on page 184
• Safely shutting down a management group, on page 184
• Start the management group back up, on page 185
• Removing ...
When you have enabled flow control on both NICs and then you bond those NICs, the NIC flow control column shows the physical NICs as enabled and bond0 as disabled. However, flow control is enabled and working in this case.

Using a DNS server

The storage node can use a DNS server to resolve host names. For example, if you enter a host name to specify an NTP time server, the storage node will use DNS to resolve the host name to its IP address. For example, the time server in Boulder, Colorado has a host name of time.nist.gov; DNS resolves this host name to its IP address of 192.43.244.18. (A command-line check of such a lookup appears at the end of this section.)

DNS and DHCP

If you configure the storage node to use DHCP to obtain an IP address, and if the DHCP server is configured to provide the IP addresses of the DNS servers, then a maximum of three DNS servers will automatically be added to the storage node. These DNS servers are listed as IP addresses in the storage node configuration window, in the TCP/IP Network category on the DNS tab. You can remove these DNS servers, but the storage node will not be able to resolve host names until you enter a new DNS server.

DNS and static IP addresses

If you assigned a static IP address to the storage node and you want the storage node to recognize host names, you must manually add a DNS server to the Network DNS tab.

NOTE: If you initially set up the storage node to use DHCP and then change the configuration to use a static IP address, the DNS server ...
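As noted above for the time.nist.gov example, you can confirm what a DNS server returns for a host name from any Linux or UNIX host with the standard resolver tools:

    # Ask the default DNS server to resolve the NTP time server's name
    nslookup time.nist.gov
    # or, equivalently
    dig +short time.nist.gov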
• The units of measurement
• Whether the variable is permanent (permanent variables cannot be removed from active reporting)
• Whether you can change the frequency with which the measurements are taken
• The default frequency of measurements
• The default action that occurs if the measured value of the variable reaches a threshold

Table 28. List of monitored variables

Variable | Units | Permanent variable | Specify freq. | Default freq. | Default action/threshold
BBU Capacity (NSM 160) | Status | Yes | Yes | 1 hour |
BBU Capacity Test Overdue (NSM 160) | Status | Yes | Yes | 1 hour |
Boot Device Status (NSM 160, NSM 260, NSM 4150) | Status | No | Yes | 1 minute |
CPU Utilization | Percent | No | Yes | 1 minute | None
Cache Status | Status | Yes | Yes | 1 minute | CMC alert if changes
Cluster Utilization | Percent | Yes | Yes | 15 minutes | CMC alert if the value exceeds 90; CMC alert if the value exceeds 95
Cluster Virtual IP Status | Status (Normal, Faulty) | No | Yes | 1 hour | CMC alert if not Normal
Drive Health | Status | Yes | Yes | 1 minute | CMC alert if change or critical
Drive Status (NSM 160: 1 through 4; NSM 260: 1 through 12; DL 380: 0 through 5; DL 320s: 1 through 12; Dell 2950: 0 through 5; NSM 2060: 0 through 5; NSM 4150: 0 through 14; HP LeftHand P4500: 1 through 12; HP LeftHand P4300: 1 through 8) | Status | No | Yes | 1 minute |
Failover Manager configuration ... 190
To install the Failover Manager ... 191
Uninstalling the Failover Manager for VMware Server or Player ... 194
Troubleshooting the Failover Manager on VMware Server or Player ... 194
Fix startup/shutdown options ... 194
Fix network settings to find Failover Manager ... 195
Using the Failover Manager on VMware ESX Server ... 195
Installing the Failover Manager on VMware ESX Server ... 195
Using the HP LeftHand Management DVD ... 195
Using the HP LeftHand Networks web site download ... 196
For ESX 3.5 or ESXi 3.5 ... 196
For ESX Server 3.0.1 or 3.0.2 ... 196
Configuring the Failover Manager using the VI Client ...
To shut down the management group, click Shut Down Group.

Figure 8. Confirming storage node power off

Depending on the configuration of the management group and volumes, your volumes and snapshots can remain available.

Upgrading the SAN/iQ software on the storage node

When you upgrade the SAN/iQ software on a storage node, the version number changes. Check the current software version by selecting a storage node in the navigation window and viewing the Details tab window.

Prerequisites

Stop any applications that are accessing volumes that reside on the storage node you are upgrading, and log off all related iSCSI sessions. To view a list of available upgrades, select Check for Upgrades from the Help menu.

Copying the upgrade files from the web site

Upgrade the SAN/iQ software on the storage node when an upgrade or a patch is released. The SAN/iQ software upgrade installation takes about 10 to 15 minutes (it may be longer on certain platforms), including the storage node reboot.

NOTE: For those models that contain two boot flash cards, both boot flash cards must be in place to upgrade the SAN/iQ software. See Checking status of dedicated boot devices on page 51.

Upgrading the storage node

Install upgrades on storage nodes individually, which is recommended. If you are upgrading multiple storage nodes that are not in a management group, you can upgrade ...
• An interface can only be in one bond.
• Record the configuration information of each interface before you create the bond. Then, if you delete the bond, you can return to the original configuration if desired.
• When you delete an Active-Passive bond, the preferred interface assumes the IP address and configuration of the deleted logical interface.
• When you delete a Link Aggregation Dynamic Mode or an Adaptive Load Balancing bond, one of the interfaces retains the IP address of the deleted logical interface. The IP address of the other interface is set to 0.0.0.0.
• Ensure that the bond has a static IP address for the logical bond interface. The default values for the IP address, subnet mask, and default gateway are those of one of the physical interfaces.
• Verify on the Communication tab that the SAN/iQ interface is communicating with the bonded interface.

CAUTION: To ensure that the bond works correctly, you should configure it as follows:
• Create the bond on the storage node before you add it to a management group.
• Verify that the bond is created.
If you create the bond on the storage node after it is in a management group, and if it does not work correctly, you might:
• Lose the storage node from the network.
• Lose quorum in the management group for a while.
See Deleting a NIC bond on page 344 for information about deleting NIC bonds using the Configuration Interface.

Creating the bond
2. Click Management Group Tasks on the Details tab and select Edit Management Group.
3. Change the local bandwidth priority using the slider. A default setting of 4, at the Application Access end of the slider, is more appropriate for everyday situations where many servers are busy with the volume. A setting of 40, at the Data Rebuild end of the slider, is most commonly used for quick data migration or copies when rebuilding or moving damaged volumes.
4. Click OK. The new rate displays on the Details tab in the management group tab window.

Backing up a management group configuration

Use Backup Configuration of Management Group to save one or both of the following configuration files:
• Back up the configuration: creates a binary file (.bin) of the management group configuration.
• Save the configuration description: creates a text file (.txt) listing the configuration characteristics of the management group.
The binary file enables you to automatically recreate a management group with the same configuration. Use the text file for support information. Your support representative will help you restore this backup.

NOTE: Backing up the management group configuration does not save the configuration information for the individual storage nodes in that management group, nor the data. To back up storage node configurations, see Backing up the storage node configuration file on page 46.

Backing up a management group ...
2. Start xterm as follows:

    xterm

3. In the xterm window, start minicom as follows:

    minicom -c on -l NSM

Opening the Configuration Interface from the terminal emulation session
1. Press Enter when the terminal emulation session is established.
2. Type start and press Enter at the log in prompt.
3. When the session is connected to the storage node, the Configuration Interface window opens.

Logging in to the Configuration Interface

Once you have established a connection to the storage node, log in to the Configuration Interface.

Table 76. Logging in depends on where the storage node is

If the storage node is in the Available Nodes pool: press Enter to log in. The Configuration Interface main menu opens.

If the storage node is in a management group:
1. Press Enter to log in. The Configuration Interface Login window opens.
2. Type the user name and password of the administrative user created for the management group.
3. Tab to Login and press Enter. The Configuration Interface main menu opens.

NOTE: This user is viewable in the CMC under the management group Administration category.

Configuring administrative users

Use the Configuration Interface to add new administrative users or to change administrative passwords. You can only change the password for the administrative user that you used to log in to the Configuration Interface.
Dell 2950
    capacity, RAID5, 58
    disk arrangement in disk setup, 77
    disk status, 77
    RAID levels and default configuration, 55
    RAID rebuild rate, 69
    RAID10 initial setup, 61
    RAID5 initial setup, 63
descriptions
    changing for clusters, 211
    changing for volumes, 241
Details tab, storage nodes, 49
Details, viewing for statistics, 312
DHCP
    using, 91
    warnings when using, 92
diagnostics
    hardware, 144
    list of diagnostic tests, 145
    viewing reports, 145
disabled network interface, configuring, 107
disabling
    network interfaces, 106
    SNMP agent, 132
    SNMP traps, 134
Disassociating Management Groups, see the Remote Copy User's Guide
disaster recovery
    best practice, 202
    starting virtual manager, 205
    using a virtual manager, 200
disk
    arrangement in platforms, 74
    disk setup report, 72
    managing, 72
    managing in storage node, 72
    powering off through the CMC, 83, 84
    powering on through the CMC, 83, 85
    replacement, 83, 85, 86
    replacement checklist, 82
    replacing in RAID1/10 and 5/50, 83
    replacing in replicated cluster, 216
    replacing in storage node, 80
    using Repair Storage Node when replacing
    VSA, recreating, 77
disk drive, see disk
disk report, 73
disk setup
    Dell 2950, 77
    DL 380, 75
    DL320s, 76
    HP LeftHand P4300, 80
    HP LeftHand P4500, 79
    NSM 2060, 77
    NSM 4150, 78
    report, 73
    tab, 73
disk space, usage, 233
disk status
    Dell 2950, 77
    DL320s, 76
    DL380, 75
    HP LeftHand P4500, 79
    IBM x3650, 77
    NSM 160, 74
    NSM 2060, 77
    NSM 260, 7...
3. On the TCP/IP tab, select the bond interface or physical bond that you want to delete.
4. Click TCP/IP Tasks and select Delete Bond. Because the IP address changes, the Search Network window opens.

Figure 74. Searching for the unbonded storage node on the network

5. Search for the storage node by host name, IP address, or subnet mask.

NOTE: Because it can take a few minutes for the storage node to re-initialize, the search may fail the first time. If the search fails, wait a minute or two and choose Try Again on the Network Search Failed message.

You can also use the Configuration Interface to delete a NIC bond. See Deleting a NIC bond on page 344.

Verify communication setting after deleting a bond
1. Select a storage node and open the tree below it.
2. Select the TCP/IP Network category.
3. Click the Communication tab.
1. Log in to the storage node.
2. Select Storage Node Tasks on the Details tab and select Power Off or Reboot.
3. Select Power Off. The button changes to Power Off.
4. In the minutes field, type the number of minutes before the powering off should begin. Enter any whole number greater than or equal to 0. If you enter 0, the storage node powers off shortly after you complete Step 5.

NOTE: If you enter 0 for the value when powering off, you cannot cancel the action. Any value greater than 0 allows you to cancel before the power off actually takes place.

5. Click Power Off. A confirmation window opens, describing the implications of powering off:

"Powering off this storage node prepares it for maintenance or relocation. For data integrity and availability, it is recommended that only one storage node be off at any one time. If you intend to power off all the storage nodes in the management group, this can be safely accomplished by shutting down the management group. Shutting down the management group places it in maintenance mode and automatically powers off each storage node in the management group. Maintenance mode offers the highest degree of data protection when servicing storage nodes in the management group. Before shutting down the management group, verify that:
• Servers are not connected to volumes or snapshots.
• Volumes and snapshots are not restriping.
To power off this storage node, click Power Off Node."
user
    adding a group to a user, 124
    administrative, 123
    administrative default user, 123
    changing a user name, 124
    deleting administrative, 124
    editing, 124
    password, 124
utilization of clone points and SmartClone volumes, 282

V
variables, monitored
    adding, 136
    downloading log file for, 143, 144
    editing, 137
    list of permanent, 138
    removing, 138
    viewing summary of, 141
verifying, NIC bond, 102
VI Client, recreating disk for VSA, 77
viewing
    statistics details, 312
    disk report, 73
    disk setup report, 72
    monitored variable summary, 141
    clone points, volumes, and snapshots, 282
    SmartClone volumes, 279
    RAID setup report, 59
virtual IP address, 335
    and iSCSI, 335
    changing, iSCSI, 212
    configuring for iSCSI, 210
    gateway session when using load balancing, 336
    host storage node, 335
    removing, iSCSI volume, 212
virtual machine, 173
virtual manager
    adding, 204
    benefits of, 201
    configurations for using, 200
    configuring, 204
    function of, 200
    overview, 200
    removing, 207
    starting to recover quorum, 205
    stopping, 207
virtual RAID, data safety and availability, 69
virtual storage node
    data replication, 67
    RAID device, 60
VMware ESX Server, 60
VMware Server, 173
volume, availability, 214
volume sets
    creating application-managed snapshots for, 249
    deleting application-managed snapshots for, 261
volume size, best practice for setting, 222
Volume ...
A third schedule would run monthly and keep 4 copies.

Planning how many snapshots

For information about the recommended maximum number of volumes and snapshots that can be created in a management group, see Configuration summary overview on page 174 and Chapter 9 on page 171. (A worked sizing example appears at the end of this section.)

Creating a snapshot

Create a snapshot to preserve a version of a volume at a specific point in time. For information about snapshot characteristics, see Guide for snapshots on page 246.

1. Log in to the management group that contains the volume for which you want to create a new snapshot.
2. Right-click the volume and select New Snapshot.
3. If you want to use VSS to quiesce the application before creating the snapshot, select the Application-Managed Snapshot option. This option quiesces VSS-aware applications on the server before SAN/iQ creates the snapshot, and it requires the use of the VSS Provider; for more information, see Requirements for application managed snapshots on page 248. If the VSS Provider is not installed, SAN/iQ will let you create a point-in-time consistent snapshot not using VSS. The system fills in the Description and Servers fields automatically.
4. Type a name for the snapshot.
5. (Optional) Enter a description of the snapshot.
6. (Optional) Assign a server to the snapshot.
7. Click OK when you are finished.

NOTE: In the navigation window, snapshots are listed below the volume in descending date order.
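As the worked sizing example referenced above (the volume size and change rate here are hypothetical; use the daily change rates measured for your own applications): snapshot space consumed is roughly the volume size multiplied by the daily change rate and by the number of retained copies. A 500 GB volume with a 10% daily change rate, on a daily schedule retaining 7 copies, would therefore consume on the order of 500 GB x 0.10 x 7 = 350 GB of snapshot space, in addition to the space for the volume itself.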
Bond NICs to ensure continuous network access or to improve bandwidth.

The VSA has only one network interface and does not support changing the following items:
• NIC bonding
• NIC flow control
• Frame size
• TCP interface speed or duplex

Network best practices
• Isolate the SAN, including CMC traffic, on a separate network. If the SAN must run on a public network, use a VPN to secure data and CMC traffic.
• Configure all the network characteristics on a storage node before creating a management group, or before adding the storage node to a management group and cluster.
• Use static IP addresses, or reserved addresses if using DHCP.
• Configure storage node settings for speed and duplex, frame size, and flow control BEFORE bonding NICs and before putting the storage node into a management group and cluster.
• If adding a second IP address to a storage node, the second IP address must be on a separate subnet. If the two IP addresses are on the same subnet, they must be bonded.

Changing network configurations

Changing the network configuration of a storage node may affect connectivity with the network and application servers. Consequently, we recommend that you configure network characteristics on individual storage nodes before creating a management group or adding them to existing clusters. If you do need to change the network characteristics of a storage node while it is in a cluster, be sure to follow our recommended best practices.

Best practices
Figure 115. Rolling back a volume

Choosing a roll back strategy

You have three choices for continuing from this message window.

Continue with standard roll back

The following steps result in the original volume, with its original name, returned to the state of the rolled-back snapshot.
1. Click OK to continue. The volume rolls back to the snapshot, deleting any newer snapshots. The rolled-back snapshot remains intact underneath the volume and retains the data. Any data that had been added to the volume since the snapshot was created is deleted.
2. If you rolled back an application-managed snapshot, use diskpart.exe to change the resulting volume's attributes (an example session appears at the end of this section). For more information, see Making an application managed snapshot available on page 250.
3. Reconnect iSCSI sessions to the volume and restart the applications.

Create a new SmartClone volume from the snapshot

Instead of continuing with a standard roll back, you can create a new SmartClone volume, with a new name, from the selected snapshot. This choice preserves any newer snapshots and any new data in the original volume.
1. Click New SmartClone Volume.
2. Enter a name and configure the additional settings for the new volume.
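As referenced in the standard roll back steps above, a diskpart.exe session for clearing the read-only attribute looks like the following. The volume number is hypothetical, so list the volumes first and substitute your own; depending on the snapshot, other attributes (such as hidden or shadowcopy) may also need to be cleared, as described on page 250:

    C:\> diskpart
    DISKPART> list volume
    DISKPART> select volume 2
    DISKPART> attributes volume clear readonly
    DISKPART> exit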
Configuration summary overview

The Configuration Summary provides an easy-to-use reference for managing the size and optimum configuration of your SAN. The first time you create a management group, a Configuration Summary table is created that resides immediately below the Getting Started Launch Pad in the navigation window. Subsequent management groups are added to this Configuration Summary, shown in Figure 82. For each management group, the Configuration Summary displays an overview of the volumes, snapshots, and storage nodes in that management group. The summary roll-ups display configuration information and guide you to optimal configurations for volumes and snapshots, iSCSI sessions, and the number of storage nodes in the management group and in each cluster.

Summary roll-up

The summary roll-up provided on the Configuration Summary panel is organized by management group. Within each management group is listed the total number of volumes and snapshots, storage nodes, and iSCSI sessions contained in the management group.

Figure 82. Configuration Summary is created with the first management group
Selected details of the Hardware report for the NSM 160, NSM 260, and VSA ... 152
Selected details of the Hardware report for DL 380, DL 320s (NSM 2120), HP LeftHand P4500, and HP LeftHand P4300 ... 157
Selected details of the hardware report for IBM x3650 ... 162
Selected details of the hardware report for Dell 2950, NSM 2060, and NSM 4150 ... 163
Managers and quorum ... 172
Default number of managers added when a management group is created ... 174
Management group requirements ... 178
Guide to local bandwidth priority settings ... 183
Troubleshooting for ESX Server installation ... 199
Requirements for using a virtual manager ... 201
Recommended SAN configurations for provisioning storage ... 222
Volume provisioning methods ... 222
Setting a replication level for a volume ... 223
1. Select the management group.
2. Select the Registration tab.
3. Click Registration Tasks and select Feature Registration from the menu.
4. Select the Scripting Evaluation tab.
5. Clear the check box.
6. Click OK.

Table 72 describes additional steps to safely back out of the scripting evaluation.

Table 72. Safely backing out of scripting evaluation

Feature being evaluated: Remote copy volumes and snapshots
Steps to back out:
• Back out of any remote copy operation.
• Delete any scripts.
• Delete any primary or remote snapshots created by the scripts, as indicated by viewing Created By on the snapshot Details tab.

NOTE: Turning off the scripting evaluation ensures that no scripts continue to push the 30-day evaluation clock.

Registering advanced features

When registering storage nodes for advanced features, you must have your license entitlement certificate and submit the appropriate storage node feature key(s) to purchase the license key(s). You then receive the license key(s) and apply them to the storage node(s).

Using license keys

License keys are assigned to individual storage nodes. License keys can be added to storage nodes either when they are in the Available Nodes pool or after they are in a management group. One license key is issued per storage node, and that key licenses all the advanced features requested for that storage node. Therefore, you register each storage node for which you want to use advanced features.
Map View display tools ... 281
Requirements for changing SmartClone volume characteristics ... 283
Overview of configuring server access to volumes ... 289
Entering CHAP information in a new server ... 291
Server connection permission levels ... 293
Performance Monitor table columns ... 306
Performance Monitor statistics ... 307
Descriptions of advanced features ... 318
Safely backing out of Remote Copy evaluation ... 319
Safely backing out of scripting evaluation ... 320
Replacing the ghost storage node with the repaired storage node ... 332
Configuring CHAP ... 338
Create cluster and assign a VIP ... 179
Create a volume and finish creating management group ... 179
Adding a storage node to an existing management group ... 180
Logging in to a management group ... 180
Choosing which storage node to log in to ... 180
Logging out of a management group ... 181
Management group maintenance tasks ... 181
Starting and stopping managers ... 181
Starting additional managers ... 181
Stopping managers ... 181
Editing a management group ... 182
Setting or changing the local bandwidth priority ... 183
Set or change local bandwidth priority ... 183
Backing up a management group configuration ... 183
... or a two-node configuration. You can also use a virtual manager to maintain quorum during storage node maintenance procedures, such as firmware upgrades.

Disaster recovery using a virtual manager

The virtual manager functions as an on-demand manager in a disaster recovery situation. As an on-demand manager, it can be used to regain quorum and maintain access to data.

Management group across two sites with shared data

Using a virtual manager allows one site to continue operating if the other site fails. The virtual manager provides the ability to regain quorum in the operating site if one site becomes unavailable, or in one selected site if communication between the sites is lost. Such capability is necessary if volumes in the management group reside on storage nodes in both locations.

Management group in a single location with two storage nodes

If you create a management group with only two storage nodes, that management group is not a fault-tolerant configuration. Using one manager provides no fault tolerance. Using two managers also provides no fault tolerance, due to loss of quorum if one manager becomes unavailable (see Managers and quorum on page 172 for more information). Running two managers and adding a virtual manager to this management group provides the capability of regaining quorum if one manager becomes unavailable: with three managers total, two still constitute a quorum, so the surviving regular manager plus the started virtual manager can keep the group running.

Storage node maintenance using a virtual manager

A virtual manager can also be used to maintain quorum during storage node maintenance procedures.
the storage node is returned to active duty and resynced with the data that has changed since its quarantine. Volumes that depend on this storage node will then show Resyncing on the volume Details tab.

Storage Server Inoperable

The Inoperable status indicates that the storage node is unable to repair the slow I/Os, which may indicate a potential hardware problem. Volumes that depend on this storage node are unavailable. For information about how to determine volume availability, see the section Determining volume and snapshot availability on page 51. Rebooting the storage node may return the status to Normal.

Auto Performance Protection and the VSA

The VSA will not report the Overloaded status, because there is no way to determine what may be affecting I/O on the underlying hardware. However, the VSA can accurately report when I/Os are not completing, and can return the Inoperable status.

Auto Performance Protection and other clusters

Auto Performance Protection operating on a storage node in one cluster will not affect performance for other clusters in the management group.

Checking storage node status

You can easily identify whether Auto Performance Protection is active on a storage node in a cluster with performance issues:
1. Select the affected storage node in the navigation window. The storage node icon will be blinking in the navigation tree.
2. Check the Status line on the Details tab.
volumes. Note that the clone point contains data that is shared by the SmartClone volumes; the volumes themselves do not contain separate copies of that data. That is why the volumes' Utilization graphs show 0. In the example below, the clone point is at 90% of its 5 GB capacity with the C_class training class desktop configuration. The 5 SmartClone volumes shared out for the 5 individual users contain no data at the time they are created for use by those 5 individuals. Only as each user writes data to his or her individual volume, through the file share system mounted on that volume, do those volumes fill up on the SAN. Figure 136 shows the utilization graph of the clone point on the Details tab.
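Put another way, in this example the clone point holds roughly 0.90 x 5 GB = 4.5 GB of shared data, while each of the 5 SmartClone volumes reports 0 GB of its own until its user begins writing new data.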
1. In the navigation window, select a storage node and log in.
2. Open the tree under the storage node and select the SNMP category.

Enabling SNMP agents

Most storage nodes allow enabling and disabling SNMP agents. On the DL 380 and the DL 320s (NSM 2120), SNMP is always enabled.

NOTE: To query the LEFTHAND-NETWORKS-NSM-CLUSTER-MIB, you must have a manager running on the storage node servicing the SNMP request.

Adding an SNMP agent includes these tasks:
• Enabling the SNMP agent
• Adding a community string

The community string acts as an authentication password. It identifies hosts that are allowed read-only access to the SNMP data. The community "public" typically denotes a read-only community. This string is entered into an SNMP management tool when attempting to access the system. (A sample query using a community string appears at the end of this section.)

Community strings for DL 380 and DL 320s (NSM 2120)

Both the DL 380 and DL 320s (NSM 2120) have two reserved community strings that cannot be modified or removed:
• sanmon is a read-write community.
• public is a read-only community.
If you use HP Systems Insight Manager with the DL 380 or DL 320s (NSM 2120), the user name and password for logging in to the system are sanmon/sanmon.

Adding access control for SNMP clients

You can either enter a specific IP address (and the IP netmask as None) to allow a specific host to access SNMP, or you can specify the network address with its netmask value so that all hosts matching that address can access SNMP.
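As referenced above, you can verify read-only access from a management host with a standard net-snmp query; the IP address is a placeholder for your storage node:

    # Walk the system subtree using the read-only community string
    snmpwalk -v 2c -c public 10.0.61.16 system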
Table 28. List of monitored variables (continued)

Variable | Units | Permanent variable | Specify freq. | Default freq. | Default action/threshold
Fan Status | Status | Yes | Yes | 1 minute | CMC alert if not Normal
LogPart Utilization | Percent | Yes | Yes | 2 minutes | CMC alert if the value exceeds 80; CMC alert if the value exceeds 95
Management Group Maintenance Mode | True/False | Yes | Yes | 15 minutes | CMC alert if True
Memory Utilization | Percent | No | Yes | 1 minute | CMC alert if the value exceeds 90
Network Interface Status | Status | No | Yes | 1 minute | CMC alert if NIC status changes
Power Supply Status | Status | No | Yes | 1 minute | CMC alert if status changes
RAID Status | Status | Yes | Yes | 15 seconds | CMC alert if status changes
Remote Copy Complete | True/False | No | Yes | 15 minutes | CMC alert when True
Remote Copy Failover | True/False | No | Yes | 15 minutes | CMC alert when True
Remote Copy Status | Status | No | Yes | 15 minutes | CMC alert if not Normal
Remote Management Group Status | Up/Down | No | Yes | 1 minute |
SAN/iQ Memory Requirement | Status | Yes | No | 1 minute |
Snapshot Schedule Status | Status | No | Yes | 1 minute | CMC alert if snapshot status is not Normal
Storage Server Latency | Milliseconds | Yes | No | 1 minute | CMC alert if over 60 seconds
Storage Server Status | Up/Down | No | Yes | 1 minute | CMC alert if not Up
Temperature Status | Status | Yes | No | | CMC alert if warning level reached; CMC alert and shutdown if critical level reached. See the alert or the Hardware ...
Managers and quorum ... 172
Regular managers and specialized managers ... 172
Failover managers ... 173
Virtual managers ... 173
Creating a management group and default managers ... 173
Configuration summary overview ... 174
Summary roll-up ... 174
Configuration guidance ... 174
Best practices ... 175
Reading the configuration summary ... 175
Creating a management group ... 177
Guide to creating management groups ... 178
Getting there ... 178
Creating a new management group ... 178
Create management group and add storage nodes ... 179
Add administrative user ... 179
Set management group time ... 179
2. Open the tree and select the TCP/IP Network category.
3. Select the TCP Status tab.
4. Select the interface you want to edit.
5. Click TCP Status Tasks, and then select Edit.
6. Select Set To in the Frame Size section.
7. Enter a value between 1500 and 9000 bytes in the Set To field.
8. Click OK. A series of status messages display. Then the changed setting displays in the TCP status report.

NOTE: You can also use the Configuration Interface to edit the frame size.

Changing NIC flow control

You can enable flow control on the NICs to prevent data transmission overruns that result in packets being dropped. With flow control enabled, network packets that would otherwise be dropped will not have to be re-transmitted.

NOTE: The VSA does not support changing flow control settings.

Requirements
• These settings must be configured before creating NIC bonds.
• All NICs should have (or must have, if they are bonded) the same flow control settings.
• Flow control cannot be changed when the port is disabled.

Enabling NIC flow control

To enable NIC flow control:
1. In the navigation window, select a storage node and log in.
2. Open the tree and select the TCP/IP Network category.
3. Select the TCP Status tab.
4. Select the interface you want to edit.
5. Click TCP Status Tasks, and then select Edit.
6. Select On to enable the flow control on the NIC.
7. Click OK.
8. Repeat Step 4 through Step 7 for the NICs you want to enable.
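For reference, on a Linux application server the corresponding NIC pause-frame (flow control) settings can be inspected and changed with ethtool; the interface name is a placeholder:

    # Show the current pause settings for the client NIC
    ethtool -a eth0
    # Enable receive and transmit flow control
    ethtool -A eth0 rx on tx on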
Storage node availability and volume access by replication level and priority setting ... 225
Information on the Use Summary tab ... 228
Information on the Volume Use tab ... 230
Information on the Node Use tab ... 232
Common native file systems ... 233
Characteristics for new volumes ... 238
Requirements for changing volume characteristics ... 240
Snapshot characteristics ... 246
Common applications' daily change rates ... 246
Requirements for scheduling snapshots ... 254
Characteristics for creating a schedule to snapshot a volume ... 255
Terms used for SmartClone features ... 264
Characteristics for new SmartClone volumes ... 267
Characteristics of SmartClone volumes ... 272
How it works: clone points ... 273
How it works: shared snapshots ... 275
[Figure 142: Performance Monitor counters for the cluster, including Queue Depth Total]
Figure 142 Example showing overview of cluster activity

Workload characterization example

This example lets you analyze the workload generated by a server (ExchServer-1), including IOPS (reads, writes, and total) and the average I/O size.

[Figure 143: Performance Monitor counters for ExchServer-1 on the Denver cluster: IOPS Total about 726 IO/s, Throughput Total about 5,947,594 B/s, Queue Depth Total about 31, Average I/O Size 8,192 B/IO]
Figure 143 Example showing a volume's type of workload

Fault isolation example

This example shows that the Denver-1 storage node (dotted line, pegged at the top of the graph) has a much higher I/O read latency than the Denver-3 storage node. Such a large difference may be due to a RAID rebuild on Denver-1. To improve the latency, you can lower the rebuild rate.

[Figure: Performance Monitor graph of I/O read latency for the Denver cluster]
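A useful cross-check when reading these examples: throughput, IOPS, and average I/O size are related by throughput = IOPS x average I/O size. In Figure 143, for instance, roughly 726 IO/s x 8,192 bytes per I/O is about 5,947,000 B/s, which matches the reported throughput. If the three counters ever disagree badly, the sample interval or the selected statistics are probably not what you intended.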
Requirements for configuring CHAP ........ 338
iSCSI and CHAP terminology ........ 338
Sample iSCSI configurations ........ 339
Best practices ........ 340
About HP LeftHand DSM for MPIO ........ 340
23 Using the Configuration Interface ........ 341
Connecting to the Configuration Interface ........ 341
Establishing a terminal emulation session on a Windows system ........ 341
Establishing a terminal emulation session on a Linux/Unix system ........ 341
Opening the Configuration Interface from the terminal emulation session ........ 342
Logging in to the Configuration Interface ........ 342
Configuring administrative users ........ 343
Configuring a network connection ........ 343
Deleting a NIC bond
NOTE: If you are replicating volumes across a cluster, configuring the storage node for RAID1/10 consumes half the capacity of the storage node. Configuring the storage node for RAID5/50 provides redundancy within each storage node while allowing most of the disk capacity to be used for data storage. RAID6 provides greater redundancy on a single storage node, but consumes more disk space than RAID5.

Table 10 summarizes the differences in data availability and safety of the different RAID levels on stand-alone storage nodes versus those RAID levels with replicated volumes in a cluster.

Table 10 Data availability and safety in RAID configurations

Configuration / Safety and availability during disk failure / Data availability if the entire storage node fails or if the network connection to the storage node is lost
• Stand-alone storage nodes, RAID0: No / No
• Stand-alone storage nodes, RAID1/10 or RAID10 + spare: Yes; in any configuration, one disk per mirrored pair can fail / No
• Stand-alone storage nodes, RAID5, RAID5 + spare, or RAID50: Yes, for one disk per array / No
• Stand-alone storage nodes, RAID6: Yes, for two disks per array / No
• Replicated volumes on clustered storage nodes, RAID0: Yes; however, if any disk in the storage node fails, the entire storage node must be copied
File Tasks and select Save to File. Select a location for the file. This file will never be more than one megabyte.

Saving the alert history of a specific variable

To save the history of a specific variable on a specific storage node, save a copy of the log file for that variable. This copy is a text file whose file name is the same as the variable.
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Alerts.
3. Select the Alert Setup tab.
4. Highlight the variable for which you want to save the log file. This selects it. Use CTRL+click to select several variables; a separate file is created for each.
5. In the Alert Setup Task pull-down menu, select Save Log Files. A Save window opens.
6. Choose a location for the file.
7. Click Save. The file is saved to the location you specified. Use a file manager window and a text editor to check it.

Using hardware information reports

The Hardware category, found in the tree under every storage node, includes multiple types of information and reporting capabilities. Review a Hardware report of system statistics, hardware, and configuration information. Use the Hardware category to:
• Run hardware diagnostics. See Running diagnostic reports on page 144.
• View storage node hardware information in real time. See Using the hardware information report on page 150.
• View and save a storage node's log files. See
IP and netmask combination can access SNMP. Additional agents and traps can be added and modified as on other storage nodes.

NOTE: Use the CMC ping feature to verify IP addresses while configuring access control. See Pinging an IP address on page 91.

Enabling an SNMP agent
1. Log in to the storage node and expand the tree.
2. Select the SNMP category from the tree.
3. On the SNMP General tab window, click SNMP General Tasks and select Edit SNMP Settings.
4. Select the Enabled radio button to activate the SNMP Agent fields.
5. Enter the Community String.
6. (Optional) Enter System Location information for the storage node. For example, this information may include the address, building name, room number, and so on.
7. (Optional) Enter System Contact information. Normally this will be network administrator information, such as the email address or phone number of the person you would contact if you could not connect to SNMP clients.

Adding an SNMP client

In the Access Control section, click Add to add an SNMP client that you can use to view SNMP. You can add SNMP clients by specifying either IP addresses or host names.

By IP address:
1. Select By Address and type the IP address.
2. Select an IP Netmask from the list. Select Single Host if adding only one SNMP client.
3. Click OK. The IP address and netmask entry appear in the Access Control list.
4. Click OK in the Edit SNMP Settings window.

By host name:
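Whichever way you add the client, you can verify SNMP access from that host once the agent is enabled. A minimal check using the net-snmp command-line tools, assuming SNMP v2c, a community string of public, and a storage node at 10.0.60.32 (all three values are examples, not product defaults):

    snmpwalk -v 2c -c public 10.0.60.32 system

This walks the standard system subtree; a response that echoes the system location and contact you entered above confirms that both the agent and the access control entry are working. No response usually means the querying host is not in the Access Control list.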
• MS Windows performs a write when the snapshot is mounted via iSCSI.
• Microsoft Volume Shadow Copy Service (VSS) and other backup programs write to the snapshot when backing it up.

The amount of temporary space initially provisioned on the SAN is minimal. However, if you do write data to the snapshot, it goes to the temporary space, which then grows as necessary to accommodate the amount of data written. You can see how much temporary space is being used for a snapshot on the Volume Use tab in the Cluster tab window.

Managing snapshot temporary space

You can manage the temporary space in two ways: delete it, or convert it to a volume.

Delete the space to free up space on the cluster. The additional temporary space is deleted when the snapshot is deleted. If you need to free up the extra space before the snapshot is deleted, you can do so manually in the CMC or through your snapshot scripts. The next time an application or operating system accesses the snapshot, a new, empty temporary space is created. For instructions to delete snapshot temporary space, see Delete the temporary space on page 253.

Convert temporary space to a volume if you have written data to a mounted snapshot and you need to permanently save or access that data. That volume will contain the original snapshot data plus any additional data written after the snapshot was mounted. For instructions, see Convert the temporary space on page 253.
NSM 160 as RAID5 + a spare, thus using most of the available storage and enhancing the reliability of the cluster.

Setting RAID rebuild rate

Choose the rate at which the RAID configuration rebuilds if a disk is replaced. The RAID rebuild rate is set as a priority against other operating system tasks, except on the NSM 260 platform, for which the rebuild rate is a percentage of the throughput of the RAID card.

NOTE: You cannot set the RAID rebuild rate on a VSA, since there is nothing to rebuild.

General guidelines
• Setting the rate high is good for rebuilding RAID quickly and protecting data; however, it slows down user access.
• Setting the rate low allows users quicker access to data during the rebuild, but slows the rebuild.

CAUTION: In the IBM x3650, the RAID rebuild rate currently cannot be changed from high. This setting may affect the SAN if RAID10 or RAID5 needs to be rebuilt.

Set the RAID rebuild rate
1. In the navigation window, log in to a storage node and select the Storage category.
2. On the RAID Setup tab, click RAID Setup Tasks and select the RAID Rebuild Rate Priority choice. The RAID Rebuild Rate Priority window opens. This window differs between platforms, as described above.
3. Change the rebuild settings as desired.
4. Click OK. The settings are then ready when and if a RAID rebuild takes place.

Reconfiguring RAID

Reconfiguring RAID on a storage node or a VSA destroys
1. On the Configuration Interface main menu, tab to General Settings and press Enter.
2. To add an administrative user, tab to Add Administrator and press Enter. Then enter the new user's name and password, confirm the password, tab to Ok, and press Enter. To change the password for the user that you are currently logged in as, tab to Change Password and press Enter. Then enter the new password, confirm it, tab to Ok, and press Enter.
3. On the General window, tab to Done and press Enter.

Configuring a network connection

The storage node comes with two Ethernet interfaces. Table 77 lists where the interfaces are labeled and what the labels say.

Table 77 Identifying Ethernet interfaces on the storage node

Where labeled / What the label says
• TCP/IP Network configuration category in the CMC (Name column, TCP/IP tab): eth0, eth1, or Motherboard:Port0, Motherboard:Port1
• TCP Status tab: G4-Motherboard:Port1, G4-Motherboard:Port2, or Motherboard:Port1, Motherboard:Port2
• Configuration Interface: Intel Gigabit Ethernet or Broadcom Gigabit Ethernet; Eth0, Eth1
• Label on the back of the storage node: Eth0, Eth1, or a graphical port symbol

Once you have established a connection to the storage node using a terminal emulation program, you can configure an interface connection using the Configuration Interface.
1. On the Configuration Interface main menu, tab to Network TCP/IP Settings and
[Table fragment: target secret and initiator secret requirements for each CHAP level; entries not required are marked N/A]

NOTE: The initiator node name and secrets set in the SAN iQ CMC must match what you enter in the server's iSCSI initiator exactly.

Sample iSCSI configurations

Figure 171 illustrates the configuration for a single host authentication with CHAP not required, using the Microsoft iSCSI initiator.

[Figure 171: the initiator's General tab, showing the initiator node name (for example, iqn.1991-05.com.microsoft:lgallagh.corp.lefthand-networks.com) with controls to rename the node, set a CHAP secret for target authentication, and configure IPSec tunnel mode addresses. A CMC New Server window fragment is also shown, with the options "Allow access via iSCSI" and "Enable load balancing"; enabling load balancing on non-compliant initiators can compromise volume availability, and for load balancing to function correctly the cluster virtual IP must be configured.]
Figure 171 Viewing the MS iSCSI initiator to copy the initiator node name

Figure 172 illustrates the configuration for a single host authentication with 1-way CHAP required.

[Figure 172: the initiator's log-on settings, General tab, Connect by using, Local]
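The figures above use the Microsoft iSCSI initiator. For a Linux application server running the open-iscsi initiator, the equivalent 1-way CHAP settings are stored per node record. A sketch of the commands, in which the target IQN, the VIP 10.0.60.100, the user name, and the secret are all placeholder values for illustration (not values from this manual):

    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mg:5:vol1 -p 10.0.60.100 -o update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mg:5:vol1 -p 10.0.60.100 -o update -n node.session.auth.username -v exchserver1
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mg:5:vol1 -p 10.0.60.100 -o update -n node.session.auth.password -v examplesecret12

As with the Microsoft initiator, the secret entered here must exactly match the target secret configured in the CMC, or the log-in to the volume fails.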
SmartClone volume that shares them. The selected shared snapshot is highlighted in the navigation window under both of the volumes with which it is shared. Shared snapshots can be deleted.

Creating SmartClone volumes

You create SmartClone volumes from existing volumes or snapshots. When you create a SmartClone volume from another volume, you first take a snapshot of the original volume. When you create a SmartClone volume from a snapshot, you do not take another snapshot.

To create a SmartClone volume

When you create SmartClone volumes, you set the characteristics for the entire group, or edit them individually.

[Figure 127: the New SmartClone Volumes window, showing the original volume setup (management group, source snapshot, provisioning, server, and permission) above a Quantity field (maximum of 25) and a table for naming and editing the individual SmartClone volumes]
1. Set characteristics for multiples here
2. Edit individual clones here
Figure 127 Setting characteristics for SmartClone volumes

For details about the characteristics of SmartClone volumes, see Defining SmartClone volume characteristics on page 267.
1. Log in to the management group in which you want to create a SmartClone volume.
2. Select the volume or snapshot from which to create a SmartClone volume. From the main menu, you can select Tasks
Status label in the tab window shows the failure.
1. If the storage node is running a manager, stop the manager. See Stopping managers on page 181.
2. Right-click the storage node and select Repair Storage Node.
3. From the Repair Storage Node window, select the item that describes the problem you want to solve. Click More for more detail about each selection.
   • Repair a disk problem: If the storage node has a bad disk, be sure to read Replacing a disk on page 80 before you begin the process.
   • Storage node problem: Select this choice if you have verified that the storage node must be removed from the management group to fix the problem. For more information about using Repair Storage Node with a disk replacement, see Replacing disks on page 328.
   • Not sure: This choice allows you to confirm whether the storage node has a disk problem by taking you directly to the Disk Setup window, so that you can verify disk status. As with the first choice, be sure to plan carefully for a disk replacement.
4. Click OK. The storage node leaves the management group and moves to the Available Nodes pool. A placeholder, or "ghost," storage node remains in the cluster. It is labeled with the IP address instead of the host name, and with a special icon.
5. Replace the disk in the storage node and perform any other physical repairs. Depending on the model, you may need to power on the disk and reconfigure RAID
[Figure 140: the navigation window showing a server connection above Storage Nodes (3): Denver-1, Denver-2, Denver-3, and Volumes (2): ExStore1, ExStore2]
1. Server connection in the navigation window, with one server
2. Volumes and Snapshots tab shows two assigned volumes that the server can access
Figure 140 Server assignments in the navigation window and the Volumes and Snapshots tab

Adding server connections to management groups

Add each server connection that needs access to a volume to the management group where the volume exists. After you add a server connection to a management group, you can assign the server connection to one or more volumes or snapshots. For more information, see Assigning server connections access to volumes on page 292.

Prerequisites
• Each server must have an iSCSI initiator installed.
• You know where to find the initiator node name in the iSCSI initiator. See iSCSI and CHAP terminology on page 338.

1. In the navigation window, log in to the management group.
2. Click Management Group Tasks and select New Server.
3. Enter a name and an optional description for the server connection. The server connection name is case sensitive. It cannot be changed later unless you delete and recreate the connection.
4. Select the check box to allow access via iSCSI.
5. If you plan to use iSCSI load balancing, click the link in the window to review the list of compliant iSCSI initiators. Scroll down to see the entire list. If y
VI Client ........ 196
Add Failover Manager to inventory ........ 196
Select a network connection ........ 197
Power on the Failover Manager and configure the IP address and host name ........ 197
Finishing up with the VI Client ........ 198
Uninstalling the Failover Manager from VMware ESX Server ........ 199
Troubleshooting the Failover Manager on ESX Server ........ 199
Virtual manager overview ........ 200
When to use a virtual manager ........ 200
Disaster recovery using a virtual manager ........ 200
Management group across two sites with shared data ........ 200
Management group in a single location with two storage nodes ........ 200
Storage node maintenance using a virtual manager ........ 201
Benefits of a virtual manager
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Hardware.
3. Select the Log Files tab.
4. Click Log File Tasks and select Add Remote Log Destination.
5. In the Log Type drop-down list, select the log you want to direct to a remote computer. The Log Type list contains only logs that support syslog.
6. In the Destination field, type the IP address or host name of the computer that will receive the logs. For a Windows operating system, find the name of the remote computer in Control Panel > System Properties > Computer Name.
7. Click OK. The remote log displays in the Remote logs list on the Log Files window.

Configuring the remote log target computer

Configure syslog on the remote log target computer. Refer to the syslog product documentation for information about configuring syslog.

NOTE: The string in parentheses next to the remote log name on the Log Files tab includes the facility and level information that you will configure in syslog. For example, in the log file name "auth error (auth.warning)", the facility is auth and the level is warning.

Editing remote log targets

You can select a different log file or change the target computer for a remote log.
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Hardware.
3. Select the Log Files tab.
4. Select the log in the Remote logs list.
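On a Linux target, the syslog side of this setup is a one-line selector plus enabling remote reception. A sketch for rsyslog, assuming the auth.warning example above; the destination file path and the conventional syslog UDP port 514 are illustrative choices, not product requirements:

    # /etc/rsyslog.conf -- accept remote syslog messages over UDP
    $ModLoad imudp
    $UDPServerRun 514
    # write the storage node's auth facility, warning level and above, to a file
    auth.warning    /var/log/storage-node-auth.log

Restart the syslog daemon after editing, and make sure the target's firewall permits UDP port 514 from the storage nodes; otherwise the logs are silently dropped.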
[Figure 73: the TCP Status tab showing a bond0 logical failover interface (Active, Preferred: Default) and its two physical NICs, one Passive (Failed) and one Active at 1000 Mb/s full duplex]
1. Neither interface is preferred
Figure 73 Viewing the status of a link aggregation dynamic mode bond

NOTE: If the bonded NIC experiences rapid, sequential Ethernet failures, the CMC may display the storage node as failed (flashing red), and access to data on that storage node fails. However, as soon as the Ethernet connection is reestablished, the storage node and the CMC display the correct information.

Deleting a NIC bond

When you delete an Active-Passive bond, the preferred interface assumes the IP address and configuration of the deleted logical interface. The other NIC is disabled, and its IP address is set to 0.0.0.0. When you delete either a Link Aggregation Dynamic Mode or an Adaptive Load Balancing bond, one of the active interfaces in the bond retains the IP address of the deleted logical interface. The other NIC is disabled, and its IP address is set to 0.0.0.0.
1. Log in to the storage node and expand the tree.
2. Select the TCP/IP category from the tree.
Making an application-managed snapshot available ........ 250
Making an application-managed snapshot available on a stand-alone server ........ 251
Making an application-managed snapshot available on a server in a Microsoft cluster ........ 252
Managing snapshot temporary space ........ 253
Convert the temporary space ........ 253
Delete the temporary space ........ 253
Creating a schedule to snapshot a volume ........ 254
Best practices for scheduling snapshots of volumes ........ 254
Creating schedules to snapshot a volume ........ 255
Editing scheduled snapshots ........ 255
Pausing and resuming scheduled snapshots ........ 256
Pause a schedule ........ 256
Resume a schedule ........ 256
Deleting schedules to snapshot a volume ........ 256
Scripting snapshots
as though they were running on the bare iron.

Virtual manager: A manager that is added to a management group but is not started on a storage node until it is needed to regain quorum.

Volume: A logical entity that is made up of storage on one or more storage nodes. It can be used as raw data storage, or it can be formatted with a file system and used by a host or file server.

Volume set: Two or more volumes used by an application. For example, you may set up Exchange to use two volumes to support a StorageGroup: one for mailbox data and one for logs. Those two volumes make a volume set.

Volume lists: For release 7.0 and earlier, provide the link between designated volumes and the authentication groups that can access those volumes. Not used in release 8.0 and later.

Volume size: The size of the virtual device communicated to the operating system and the applications.

VSS: Volume Shadow Copy Service. The HP LeftHand P4000 VSS Provider is the hardware provider that supports the Volume Shadow Copy Service on the HP LeftHand Storage Solution.

See Temporary space.

Index

Symbols
1000BASE-T interface, 90
30-day evaluation for add-on applications, 317
3650. See IBM x3650, 55

A
access control, SNMP, 130
Access Rights. See Permission levels, 293
Access Volume wizard, servers, 39
accessing volumes from servers, 289
activating dedicated boot devices, 54
active interface
  in active-passive bond, 95
  in adaptive load balancing bond, 99
  in link aggregation dynamic mode bond, 97
available on the HP LeftHand Networks website.

17 Controlling server access to volumes

Application servers (servers; also called clients or hosts) access storage volumes on the SAN using the iSCSI protocol. You set up each server that needs to connect to volumes in a management group in the SAN iQ software. We refer to this setup as a server connection.

You can set up servers to connect to volumes in three ways. All three ways use the virtual IP (VIP) for discovery and to log in to the volume from the server's iSCSI initiator:
• iSCSI with VIP and load balancing: use the load balancing option when you set up a server connection in the SAN iQ software to balance connections to the SAN.
• HP LeftHand DSM for MPIO: if used, automatically establishes multiple connections to the SAN.
• iSCSI with VIP only.

NOTE: Before setting up a server connection, make sure you are familiar with the iSCSI information in Chapter 22 on page 335.

Setting up server connections to volumes requires the general tasks outlined below.

Table 65 Overview of configuring server access to volumes

Do this / For more information
1. Ensure that an iSCSI initiator is installed on the server. If you are using the HP LeftHand DSM for MPIO, ensure that Microsoft MPIO and the SAN iQ HP LeftHand DSM for MPIO are also installed on the server. / Refer to the HP LeftHand P4000 Windows Solution Pack User Manual.
back again.

Deleting a snapshot

When you delete a snapshot, the data necessary to maintain volume consistency are moved up to the next snapshot, or to the volume if it is a primary volume, and the snapshot is removed from the navigation window. The temporary space associated with the snapshot is deleted.

CAUTION: Typically, you do not want to delete individual snapshots that are part of a snapshot set. (A future release will identify snapshot sets in the CMC.) For information about snapshot sets, see Requirements for application-managed snapshots on page 248. Typically, you want to keep or delete all snapshots for a volume set. If you need to roll back to a snapshot, you want to roll back each volume in the volume set to its corresponding snapshot.

Prerequisites

Stop any applications that are accessing the snapshot, and log off all related iSCSI sessions.

Delete the snapshot
1. Log in to the management group that contains the snapshot that you want to delete.
2. In the navigation window, select the snapshot that you want to delete.
3. Review the Details tab to ensure you have selected the correct snapshot.
4. Click Snapshots Tasks on the Details tab and select Delete Snapshot. A confirmation message opens.
5. Click OK.

15 SmartClone volumes

Overview of SmartClone volumes

SmartClone volumes are space-efficient copies of existing volumes or snapshots. They appear as multiple
Managing administrative users ........ 123
Default administrative user ........ 123
Adding a new administrative user ........ 123
Editing administrative users ........ 123
Changing a user's description ........ 124
Changing a user's password ........ 124
Adding group membership to a user ........ 124
Removing group membership from a user ........ 124
Deleting an administrative user ........ 124
Managing administrative groups ........ 125
Default administrative groups ........ 125
Adding administrative groups ........ 125
Editing administrative groups
date and time for the first snapshot created by this schedule. Click OK when you are finished setting the date and time.
• Select a recurrence schedule.
• Specify the retention criteria for the snapshot.
Click OK when you have finished creating the schedule. To view the schedule just created, select the Schedules tab.

Editing scheduled snapshots

You can edit everything in the scheduled snapshot window except the name.
1. In the navigation window, select the volume for which you want to edit the scheduled snapshot.
2. In the tab window, click the Schedules tab to bring it to the front.
3. Select the schedule you want to edit.
4. Click Schedule Tasks on the Details tab and select Edit Schedule.
5. Change the desired information.
6. Click OK.

Pausing and resuming scheduled snapshots

At times it may be convenient to prevent a scheduled snapshot from taking place. Use these steps to pause and then resume a snapshot schedule. When you pause a snapshot schedule, the snapshot deletions for that schedule are paused as well. When you resume the schedule, both the snapshots and the snapshot deletions resume according to the schedule.

Pause a schedule
1. In the navigation window, select the volume for which you want to pause the snapshot schedule.
2. Click the Schedules tab to bring it to the front.
3. Select the schedule you want.
4. Click Schedule Tasks on the Details tab and select Pause Schedule.
5. In the Confirm window, click OK.
done on the management group containing the remote snapshot. See the Remote Copy User Manual, Chapter 2, "Using Remote Copy," in the section about setting the remote bandwidth.

Specialized editing tasks include:
• Disassociating management groups
• Setting Group Mode to normal

After making any changes to the management group, be sure to save the configuration data of the edited management group. See Backing up a management group configuration on page 183.

Setting or changing the local bandwidth priority

After a management group has been created, edit the management group to change the local bandwidth priority. This is the maximum rate per second that a manager devotes to non-application processing, such as moving data. The default rate is 4 MB per second. You cannot set the rate below 0.25 MB/sec.

Local bandwidth priority settings

The bandwidth setting is in MB per second. Use Table 41 as a guide for setting the local bandwidth. (The throughput rating is simply MB/sec multiplied by 8; for example, the 4.00 MB/sec factory default corresponds to 32 Mbps.)

Table 41 Guide to local bandwidth priority settings

Network type / Throughput (MB/sec) / Throughput rating
• Minimum / 0.25 / 2 Mbps
• Ethernet / 1.25 / 10 Mbps
• Factory default / 4.00 / 32 Mbps
• Fast Ethernet / 12.50 / 100 Mbps
• Half Gigabit Ethernet / 62.50 / 500 Mbps
• Gigabit Ethernet / 128.00 / 1 Gbps
• Bonded Gigabit Ethernet (2) / 256.00 / 2 Gbps
• Bonded Gigabit Ethernet (4) / 512.00 / 4 Gbps

Set or change local bandwidth priority
1. In the navigation window, select a management group and log in.
[Figure 1 alert detail: an alert for storage node nsm1 (10.0.52.41), management group rth30, remote copy volume rth_9G_Sch_RS_1_Rmt]
1. Navigation window
2. Tab window
3. Alerts window
Figure 1 Viewing the three parts of the CMC

Navigation window

The left vertical pane displays the architecture of your network. The physical and logical elements of your network include:
• Management groups
• Servers
• Administration
• Sites
• Failover Managers and Virtual Managers
• Clusters, storage nodes, and their configuration categories
• Volumes, including SmartClones
• Snapshots
• Remote Copies

Tab window

For each element selected in the navigation window, the tab window on the right displays information about it. Commands related to the element are accessible from the Tasks menu on the bottom left of the tab window.

Alert window

View and delete the alerts that display there.

Performing tasks in the CMC using the menu bar

The menu bar provides access to the following task menus:
• File: Lets you exit the CMC gracefully.
• Find: Finds storage nodes on the network that can be managed through the CMC.
• Tasks: Lets you access all storage configuration tasks. The tasks in this menu are grouped by logical or physical items. Tasks are also accessible through right-click menus and from the Tasks button in the tab window.
• Help: Lets you access online
down list, select the permission each server connection should have to the volume or snapshot.
5. Click OK. You can now log on to the volume from the server's iSCSI initiator. See Completing the iSCSI Initiator and disk setup on page 294.

Assigning volumes from a server connection

You can assign one or more volumes or snapshots to any server connection. For the prerequisites, see Assigning server connections access to volumes on page 292.
1. In the navigation window, right-click the server connection you want to assign.
2. Select Assign and Unassign Volumes and Snapshots.
3. Click the Assigned check box for each volume or snapshot you want to assign to the server connection.
4. From the Permission drop-down list, select the permission the server should have.
5. Click OK. You can now connect to the volume from the server's iSCSI initiator. See Completing the iSCSI Initiator and disk setup on page 294.

Editing server connection and volume assignments

You can edit the assignment of volumes and server connections to:
• Unassign the volume or server connection
• Change the permissions

Editing server connection assignments from a volume

You can edit the assignment of one or more server connections to any volume or snapshot.

CAUTION: If you are going to unassign a server connection or restrict permissions, stop any applications from accessing the volume or snapshot, and log off the iSCSI session from
iSNS server, you may not need to add Target Portals in the Microsoft iSCSI Initiator.
1. In the iSCSI tab view, open the iSCSI Tasks menu and click Add iSNS Server. The Add iSNS Server window opens.
2. Enter the IP address of the iSNS server.
3. Click OK.
4. Click OK when you have finished. The cluster is created and displayed inside the management group.
5. Select the cluster to open the Clusters tab window.

Tracking cluster usage

The Use Summary, Volume Use, and Node Use tabs provide detailed information about provisioning of volumes and snapshots, and about space usage in the cluster. See Ongoing capacity management on page 227 for information about the data reported on these tabs.

NOTE: An overprovisioned cluster occurs when the total provisioned space of all volumes and snapshots is greater than the physical space available on the cluster. This can occur when there are snapshot schedules and/or thinly provisioned volumes associated with the cluster.

Editing a cluster

When editing a cluster, you can change the description and add or remove storage nodes. You can also edit or remove the virtual IP and iSNS servers associated with the cluster.

Prerequisite: You must log in to the management group before you can edit any clusters within that group.

Getting there
1. In the navigation window, select the cluster you want to edit.
2. Click Cluster Tasks and select Edit Cluster.

Adding a new storage node
in Table 25. Users assigned to either of these groups assume the privileges associated with that group.

Table 25 Using default administrative groups

Name of group / Management capabilities assigned to group
• Full_Administrator: Manage all functions (read/write access to all functions)
• View_Only_Administrator: View-only capability for all functions (read only)

Administrative groups can have:
• Different levels of access to the storage node, such as read/write
• Access to different management capabilities for the SAN, such as configuring network capabilities

Adding administrative groups

When you create a group, you also set the management permissions for the users assigned to that group. The default setting for a new group is Read Only for each category.
1. Log in to the management group and select the Administration node.
2. Click Administration Tasks in the tab window and select New Group.
3. Type a Group Name and an optional Description.
4. Select the permission level for each function for the group you are creating. See Table 26.

Table 26 Descriptions of group permissions

Management area / Activities controlled by this area
• Change Password: User can change other administrative users' passwords
• RAID: User can set the RAID configuration for the storage node
• Drive Hot Swap: Shut down disks, restart RAID, and hot-swap disks
• Management Groups: Create management groups
• Network: User can choose the type of network connection, set th
in the list.
• Display the volume's attributes by typing: att vol
  The volume will show that it is hidden, read-only, and shadow copy.
• Change these attributes by typing: att vol clear readonly hidden shadowcopy
• Exit diskpart by typing: exit
• Reboot the server.
• Verify that the disk is available by launching Windows Logical Disk Manager. You may need to assign a drive letter, but the disk should be online and available for use.
• If the server is running Windows 2008 or later and you promoted a remote application-managed snapshot to a primary volume, start the HP LeftHand Storage Solution CLI and clear the VSS volume flag by typing: clearvssvolumeflags volumename drive_letter, where drive_letter is the corresponding drive letter, such as G:.
• Reboot the server.

Making an application-managed snapshot available on a server in a Microsoft cluster

Use this procedure to make an application-managed snapshot available on servers that are in a Microsoft cluster.

NOTE: We recommend contacting Customer Support before performing this procedure.

1. Disconnect the iSCSI sessions.
2. Do one of the following, based on what you need to do with the application-managed snapshot:
   • Convert temporary space
   • Create a SmartClone
   • Promote a remote volume to a primary volume, using the Failover/Failback Volume Wizard and selecting the Failover the Primary Volume to the Selected Remo
interface acts as a slave. The logical master interface monitors each physical slave interface to determine whether its link to the device to which it is connected (such as a router, switch, or repeater) is up. As long as the interface link remains up, the interface status is preserved.

Table 16 NIC status in active-passive configuration

If the NIC status is / The NIC is
• Active: Currently enabled and in use
• Passive (Ready): Slave to a bond and available for failover
• Passive (Failed): Slave to a bond and no longer has a link

If the active NIC fails, or if its link is broken due to a cable failure or a failure in a local device to which the NIC cable is connected, then the status of the NIC becomes Passive (Failed), and the other NIC in the bond, if it has a status of Passive (Ready), becomes Active. This configuration remains until the failed preferred interface is brought back online. When the failed interface is brought back online, it becomes Active, and the other NIC returns to the Passive (Ready) state.

Requirements for active-passive

To configure active-passive:
• Both NICs should be enabled.
• NICs should be connected to separate switches.

Which physical interface is preferred

When the active-passive bond is created, if both NICs are plugged in, the SAN iQ software interface becomes the active interface. The other interface is Passive (Ready). For example, if Eth0 is the preferred interface, it will be
is rebuilding on the RAID Setup or Disk Setup tabs.

Replacing a disk in a hot-swap platform (NSM 160, NSM 260, DL380, DL320s [NSM 2120], Dell 2950, NSM 2060, NSM 4150, HP LeftHand P4500, HP LeftHand P4300)

Complete the checklist for replacing a disk in RAID1/10, RAID5/50, or RAID6. Then follow the appropriate procedures for the platform.

CAUTION: You must always use a new drive when replacing a disk in a Dell 2950, NSM 2060, or NSM 4150. Never reinsert the same drive, or another drive from the same Dell 2950, NSM 2060, or NSM 4150.

Replace the disk

You may remove and replace a disk from these hot-swap platforms after checking that the Safe to Remove status indicates Yes for the drive to be replaced. Physically replace the disk drive in the storage node. See the hardware documentation that came with your storage node for information about physically replacing disk drives in the storage node.

RAID rebuilding

After the disk is replaced, RAID starts rebuilding on the replaced disk. Note that there may be a delay of up to a couple of minutes before you can see that RAID is rebuilding on the RAID Setup or Disk Setup tabs.

[Figure: the RAID Setup tab showing RAID Status "Rebuilding," RAID Configuration "RAID 5 (6 disks)," RAID Rebuild Rate 100 percent, and a RAID5 device rebuilding, 7% complete]
it on using the Power On command on the Information panel. Click the Console tab and wait for the Failover Manager to boot. When the Failover Manager has finished booting, a log-in prompt opens.

[Figure 93: the VMware Infrastructure Client console showing the Failover Manager log-in prompt]
Figure 93 Logging in to the SAN iQ Configuration Interface

4. Log in and use the SAN iQ Configuration Interface to configure an IP address and host name for the Failover Manager.

Setting the IP address

Use the Configuration Interface to set the IP address for the Failover Manager.
1. Enter start and press Enter.
2. Press Enter to log in.
3. Tab to Network TCP/IP Settings and press Enter.
4. Tab to the network interface and press Enter.
5. Tab to the Hostname field, if necessary.
6. Press Backspace in the Hostname field to delete the default name, and enter your own host name. This host name displays in the CMC. It does not change the name of the original FOM .vmx file, nor does it change the name in VMware.
7. Configure the IP address in one of two ways, using DHCP or configuring the IP address manually.
   Using DHCP:
   1. Tab to the choice "Obtain IP address automatically using DHCP" and press Enter to select it.
   2. Tab to OK and press Enter. A message opens, asking you to verify the request.
   3. Tab to the ch
management group for Exchange.
• Ensure added administrative security. For example, you could give the system administrator in charge of Exchange access to the Exchange management group, but not to the Oracle management group.
• Prevent some storage resources from being used unintentionally. If a storage node is not in a management group, the management group cannot use that storage node as a storage resource. For example, all of the storage nodes in a management group can be pooled into clusters for use by volumes in that group. To prevent a new storage node from being included in this pool of storage, put it in a separate management group.
• Contain clustering managers. Within a management group, one or more of the storage nodes act as the managers that control data transfer and replication.

Requirements for creating a management group
• IP addresses for the storage nodes that go into the group.
• The type of cluster you are planning: standard or Multi-Site.
• If a Multi-Site configuration: the physical sites, with the storage nodes that go in them, already created.
• Virtual IP addresses and subnet masks for the cluster.
• (Optional) Storage requirements for the volumes.

Managers overview

Within a management group, managers are storage nodes that govern the activity of all of the storage nodes in the group. All storage nodes contain the management software, but you must designate which storage nodes run that software by starting managers on them.
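Managers maintain quorum by majority vote, which is why the number you start matters. As a worked example of the arithmetic (an illustration, not a sizing recommendation from this section): with 3 managers running, quorum is 2, so the management group stays online if any 1 manager fails; with 5 managers, quorum is 3, and 2 simultaneous failures can be tolerated. In general, n managers tolerate the failure of floor((n-1)/2) of them, which is why an odd number of managers, or a virtual manager or Failover Manager acting as a tiebreaker, is the usual arrangement.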
1. In the navigation window, select the storage node from the Available Nodes pool.
2. Select the Feature Registration tab.
3. Click Feature Registration Tasks and select Edit License Key from the menu.
4. Copy and paste the Feature Key into the Edit License Key window.

NOTE: When you paste the license key into the window, be sure there are no leading or trailing spaces in the box. Such spaces prevent the license key from being recognized.

5. Click OK. The license key appears in the Feature Registration window.

[Figure 161: the Feature Registration tab showing a storage node's license key string]
Figure 161 Storage node with a license key

Registering storage nodes in a management group

Storage nodes that are in a management group are licensed through the management group. You license the storage nodes on the Registration tab for the management group. The Registration tab displays the following information:
• The license status of all the advanced features, including the progress of the 30-day evaluation period and which advanced features are in use and not licensed.
• Version information about software components of the operating system.
• Customer information.

Submitting storage node feature keys

Submit the feature keys
node is stored in a file. If a storage node failure occurs, restore the backed-up configuration to a replacement storage node. The replacement storage node will be configured identically to the original storage node at the time it was backed up.

NOTE: You must restore the configuration to the replacement storage node BEFORE you add it to the management group and cluster.

Backing up a storage node does not save all settings

Backing up the configuration file for a storage node does not save data. Neither does it save information about the configuration of any management groups or clusters that the storage node belongs to. It also does not back up license key entries for registered features.
• To save the management group configuration, see Backing up a management group configuration on page 183.
• To preserve a record of the management group license keys, see Saving license key information on page 324.
node to a second storage node does not restore network routes that were configured on the storage node.
• If you restore multiple storage nodes from one configuration file, you must manually change the IP address on the subsequent storage nodes. For example, if you back up the configuration of a storage node with a static IP address and you then restore that configuration to a second storage node, the second storage node will have the same IP address.

Backing up the storage node configuration file

Use Back Up to save the storage node configuration file to a directory you choose.
1. In the navigation window, select a storage node.
2. Click Storage Node Tasks on the Details tab and select Back Up or Restore.
3. Click Back Up.
4. Navigate to a folder to contain the storage node configuration backup file.
5. Enter a meaningful name for the backup file, or accept the default name (Storage Node_Configuration_Backup).

NOTE: The configuration files for all storage nodes that you back up are stored in the location you choose. If you back up multiple storage nodes to the same location, be sure to give each storage node configuration file a unique and descriptive name. This makes it easier to locate the correct configuration file if you need to restore the configuration of a specific storage node.

6. Click Save.

Restoring the storage node configuration from a file

Before you add the replacement storage node to the management group and cluster
of, 312
statistics
  sample data, clearing, 312
status
  dedicated boot devices, 52
  NIC bond, 103
  RAID, 71
  safe to remove disk, 81
  storage node, 215
  storage server inoperable, 214
  storage server overloaded, 214
stopping
  managers, 181
    implications of, 182
  virtual manager, 207
storage
  adding to a cluster, 212
  configuration on storage nodes, 55
  configuring, 55
  overview, 55
storage node
  configuration categories, 43
  expanded statistics detail, 150
  logging in to an additional, 44
  removing, 87
  saving statistics to a file, 151
  statistics, 150
storage nodes
  adding first one, 37, 178
  adding to existing cluster, 211
  adding to management group, 80
  configuration file, backing up, 46
  configuration file, restoring, 46
  configuration overview, 43
  configuring, 37
  configuring multiple, 40
  default RAID configuration on, 55
  details tab, 49
  finding on network, 29, 37, 39
  ghost storage node, 216
  locating in a rack, 45
  powering off, 48
  rebooting, 48
  registering, 50
  removing from cluster, 212
  removing from management group, 87
    prerequisites for, 187
  repairing in clusters, 216
  replacing disks, 80
  status of, 215
  storage configuration of, 55
  tasks, 43
  upgrading software, 49
storage pool, 171
storage server inoperable, 214
storage server overloaded, 214
storage server status and VSA, 214
storage space
  raw space, 233
  storage provisioning, 22
Subscriber's Choice, HP, 27
synchronizing time on storage nodes, 254
System Management homepage, logging into, 130
of space that would be on the SAN if this fully provisioned volume were changed to thinly provisioned. The totals at the cluster level, shown at the bottom of the list, show the total for both saved and reclaimable space.

Provisioned space: The provisioned space is the amount of space reserved for data on the SAN. Temporary space is space used by applications and operating systems that need to write to a snapshot when they access it. (Figure 109 on page 232 shows temporary space that can be deleted or converted to a volume.)
• Fully provisioned volumes display the entire amount of allocated space in this column, which is the volume size times the replication level. For example, a 10 GB size with 2-way replication results in 20 GB of provisioned space.
• Thinly provisioned volumes allocate a fraction of the total amount of space planned. The provisioned space increases as needed, up to the maximum provisioned space or until the cluster is full.
• Snapshots are automatically thin provisioned. The provisioned space is that which is allocated when the snapshot is created. The amount of provisioned space can change as snapshots are deleted.
• The temporary space is equal to the size of the snapshot. For example, if a snapshot size equals 2 GB, the temporary space is also 2 GB.

Max provisioned space: The total amount of space that can be allocated for the volume, assuming there is enough space in the cluster.
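To make the arithmetic concrete, here is a small worked example using hypothetical numbers rather than values from this manual's figures: a fully provisioned 50 GB volume with 2-way replication reserves 50 GB x 2 = 100 GB on the cluster from the moment it is created. The same volume created thinly provisioned might initially reserve only a small allocation and grow toward the same 100 GB maximum as data is written; the difference between the full reservation and the current thin allocation at any moment is the reclaimable space that converting the volume from full to thin would return to the cluster.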
of the backup file. Complete the configuration of the replacement storage node by reconfiguring the following characteristics, as described in Manual configuration steps following the restore on page 46:
• RAID
• Network routes
• IP address

Powering off or rebooting the storage node

You can reboot or power off the storage node from the CMC. You can also set the amount of time before the process begins, to ensure that all activity to the storage node has stopped. Powering off the storage node through the CMC physically powers it off; the CMC controls the power-down process so that data is protected. Powering off an individual storage node is appropriate for servicing or moving that storage node. However, if you want to shut down more than one storage node in a management group, you should shut down the management group instead of individually powering off the storage nodes in that group. See Safely shutting down a management group on page 184.

Powering on or off, or rebooting, the NSM 4150

When powering on the NSM 4150, be sure to power on the two components in the following order:
1. Disk enclosure
2. System controller

Allow up to six minutes for the system controller to come up completely and be discovered by the CMC. If you cannot discover the NSM 4150 using the CMC after six minutes, contact Customer Support. If you do not power on the disk enclosure first, the Storage Node Details tab shows the status with N
press Enter.
2. Tab to select the network interface that you want to configure, and press Enter.
3. Enter the host name, and tab to the next section to configure the network settings.

NOTE: If you specify an IP address, the Gateway is a required field. If you do not have a gateway, enter 0.0.0.0 for the Gateway address.

4. Tab to OK and press Enter to complete the network configuration.
5. Press Enter on the confirmation window. A window opens, listing the assigned IP address.
6. Open the CMC and locate the storage node using the Find function.

Deleting a NIC bond

You can delete the following NIC bonds using the Configuration Interface:
• Active-Passive bond
• Link Aggregation Dynamic Mode bond
• Adaptive Load Balancing bond

For more information about creating and configuring NIC bonds, see Configuring network interface bonds on page 92. When you delete an Active-Passive bond, the primary interface assumes the IP address and configuration of the deleted logical interface. The other NIC is disabled, and its IP address is set to 0.0.0.0. When you delete a Link Aggregation Dynamic Mode or an Adaptive Load Balancing bond, eth0 or Motherboard:Port1 retains the IP address of the deleted logical interface. The other NIC is disabled, and its IP address is set to 0.0.0.0.
1. On the Configuration Interface main menu, tab to Network TCP/IP Settings and press Enter. The Available Network Devices window opens. The logical bond is the only
reads and writes operate at the same speed.

Size: The size of a read or write operation. As this size increases, throughput usually increases, because a disk access consists of a seek and a data transfer; with more data to transfer, the relative cost of the seek decreases. Some applications allow tuning the size of read and write buffers, but there are practical limits to this.

Pattern: Disk accesses can be sequential or random. In general, sequential accesses are faster than random accesses, because every random access usually requires a disk seek.

Queue depth: Queue depth is a measure of concurrency. If the queue depth is one (q=1), it is called serial; in serial access, disk operations are issued one after another, with only one outstanding request at any given time. In general, throughput increases with queue depth; roughly speaking, the achievable request rate is the queue depth divided by the average latency per request, so more outstanding requests keep the disks busier. Usually only database applications allow tuning of queue depth.

Changing the sample interval and time zone

You can set the sample interval to any value between 5 seconds and 60 minutes, in increments of either seconds or minutes. The time zone comes from the local computer where you are running the CMC. You can change the sample interval in the following ways:
• Using the toolbar
• In the Edit Monitoring Interval window, where you can also change the time zone

To change the sample interval from the toolbar:
1. In the navigation window, log in to the management group.
shorter operation than a restripe.

Prerequisites
• The volume must have 2-way or 3-way replication.
• The storage node must have the blinking red-and-yellow triangle in the navigation window.
• If the storage node is running a manager, stopping that manager must not break quorum.

How repair storage node works

Using Repair Storage Node to replace a failed disk includes the following steps:
• Using Repair Storage Node from the Storage Node Tasks menu to remove the storage node from the cluster
• Replacing the disk in the storage node
• Returning the storage node to the cluster

Because of the replication level, removing and returning the storage node to the cluster would normally cause the remaining storage nodes in the cluster to restripe the data twice: once when the storage node is removed from the cluster, and once when it is returned. Instead, the Repair Storage Node command creates a placeholder in the cluster, in the form of a "ghost" storage node. This ghost storage node keeps the cluster intact while you remove the storage node, replace the disk, configure RAID, and return the storage node to the cluster. The returned storage node only has to resynchronize with the other two storage nodes in the cluster.

Using the repair storage node command

When a storage node in a cluster has a disk failure, the navigation window displays the storage node and the cluster with a blinking triangle next to them in the tree. An alert appears in the alert window, and the
Snapshots overview

Snapshots are a copy of a volume for use with backup and other applications. Snapshots are one of the following types:
• Application-managed: A snapshot of a volume that is taken while the application serving that volume is quiesced. Because the application is quiesced, the data in the snapshot is consistent with the application's view of the data; that is, no data was in flight or cached waiting to be written. This type requires the use of the HP LeftHand P4000 VSS Provider (VSS Provider). For more information, see Requirements for application-managed snapshots on page 248.
• Point-in-time consistent: Snapshots that are taken at a specific point in time; however, an application writing to that volume may not be quiesced. Thus, data may be in flight or cached, and the actual data on the volume may not be consistent with the application's view of the data.

Snapshots versus backups

Backups are typically stored on different physical devices, such as tapes. Snapshots are stored in the same cluster as the volume. Therefore, snapshots protect against data deletion, but not against device or storage media failure. Use snapshots along with backups to improve your overall data backup strategy.

Prerequisites

Before you create a snapshot, you must create a management group, a cluster, and a volume to receive it. Use the Management Groups, Clusters, and Volumes wizard to create them. For information, see:
• Creating a management group
84. … 75
43 Diagram of the drive bays in the NSM 260 ......... 75
44 Viewing the Disk Setup tab in a DL380 ......... 76
45 Arrangement of drives in the DL380 ......... 76
46 Viewing the Disk Setup tab in a DL320s NSM 2120 ......... 76
47 Diagram of the drive bays in a DL320s NSM 2120 ......... 76
48 Viewing the Disk Setup tab in the IBM x3650 ......... 77
49 Arrangement of drives in the IBM x3650 ......... 77
50 Viewing the disk status of a VSA ......... 77
51 Viewing the Disk Setup tab in a Dell 2950 or NSM 2060 ......... 78
52 Drive bays with bezel on in a Dell 2950 or NSM 2060 ......... 78
53 Drive bays with bezel off in a Dell 2950 or NSM 2060 ......... 78
54 Viewing the Disk Setup tab in a NSM 4150 ......... 78
55 Drive bays with bezel on in an NSM 4150 ......... 79
56 Drive bays with bezel off in an NSM 4150 .........
85. statistic in the table.
2. To redisplay the line, select the Display check box for the statistic.
If you want to display all of the statistics from the table, right-click anywhere in the Performance Monitor window and select Display All.
Changing the color or style of a line
You can change the color and style of any line on the graph.
1. From the Performance Monitor window, select one or more statistics in the table that you want to change.
2. Right-click and select Edit Line. The Edit Line window opens.
Figure 158 Edit Line window
3. Select the color and line style options you want.
4. To see the changes and leave the window open, click Apply.
5. When you finish the changes, click OK.
Highlighting a line
You can highlight one or more lines on the graph to make them easier to distinguish.
1. From the Performance Monitor window, right-click the statistic in the table that you want to highlight and select Highlight. The line turns white.
2. To remove the highlight, right-click the statistic and select Remove Highlight.
Changing the scaling factor
The vertical axis uses a scale of 0 to 100. Graph data is automatically adjusted to fit the scale. For example, if a statistic value was larger than 100, say 4,000.0, the sys
86. storage node. For example, if RAID has gone off unexpectedly, you need Customer Support to help determine the cause and, if it is a disk failure, to identify which disk must be replaced.
Replacing disks and rebuilding data
Single-disk replacements in storage nodes where RAID is running, but may be degraded, can be accomplished by following the procedures described in Replacing disks on page 328. The following situations may require consulting with Customer Support to identify bad disks, and then following the procedures below to rebuild the data when it is replicated on the storage node:
• RAID0 (stripe): RAID is off due to a failed disk.
• RAID5/5+spare (stripe with parity) and RAID50: If multiple disks need to be replaced, those disks must be identified and replaced, and the data on the entire storage node rebuilt.
• RAID10/1+0 (mirror and stripe): Can sustain multiple disk replacements. However, Customer Support must identify whether any two disks are from the same mirror set; if so, the data on the entire storage node needs to be rebuilt.
• RAID6 (stripe with dual parity): If multiple disks need to be replaced, those disks must be identified and replaced, and the data on the entire storage node rebuilt.
Before you begin
1. Know the name and physical location of the storage node that needs the disk replacement.
2. Know the physical position of the disk in the storage node.
3. Have the replacement disk ready, and confirm that
87. tested limits for common SAN configurations and uses. Exceeding these guidelines does not necessarily cause any problems; however, your performance may not be optimal, or, in some failover and recovery situations, it may cause issues with volume availability.
Volumes and snapshots
The optimum number of combined volumes and snapshots ranges up to 1,000. If the management group contains 1,001 to 1,500 volumes and snapshots, the Configuration Summary displays orange for that line of the management group. Over 1,500 volumes and snapshots triggers a warning by turning that line red. As soon as the total number drops below the boundary, the summary bar returns to the previous indicator, either orange or green.
iSCSI sessions
The optimum number of iSCSI sessions connected to volumes in a management group ranges up to 4,000. If the management group contains 4,001 to 5,000 iSCSI sessions, the Configuration Summary displays orange for that line of the management group. Over 5,000 iSCSI sessions triggers a warning by turning that line red. As soon as the total number of iSCSI sessions drops below the boundary, the summary bar returns to the previous indicator, either orange or green.
Storage nodes in the management group
The optimum number of storage nodes in a management group ranges up to 20. If the management group contains 21 to 30 storage nodes, the Configuration Summary displays orange for that line of the management group. Over 30 storage nodes trigg
88. the date and time while going through the Management Groups, Clusters, and Volumes wizard. This ensures that all the storage nodes in the management group have the same time setting.
Getting there
1. In the network window, select a management group and log in.
2. Click the Time tab.
Refreshing the management group time
Use Refresh All to update the view of the time on all storage nodes in the management group. This view is not updated automatically, so you must refresh the view to verify that the time settings on the storage nodes are what you expect.
1. Select a management group.
2. Click the Time tab.
3. Select Time Tasks and select Refresh All. After processing, all storage nodes display the current time.
Using NTP
Network Time Protocol (NTP) servers can manage the time for the management group instead of using the local system time. NTP updates occur at 5-minute intervals. If you do not set the time zone for the management group, it uses the GMT time zone.
NOTE: When using a Windows server as an external time source for a storage node, you must configure W32Time (the Windows Time service) to also use an external time source. The storage node does not recognize the Windows server as an NTP server if W32Time is configured to use an internal hardware clock.
1. Click Time Tasks and select Add NTP Server.
2. Type the IP address of the NTP server you want to use.
3. Decide whether you want this NTP ser
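As a hedged illustration of the W32Time configuration the note above calls for (run from an elevated command prompt on the Windows server; the peer name is only an example — substitute your own external time source):

  rem Point the Windows Time service at an external NTP peer instead of the internal hardware clock
  w32tm /config /syncfromflags:manual /manualpeerlist:"0.pool.ntp.org" /reliable:yes /update
  rem Restart the service and force a resynchronization
  net stop w32time
  net start w32time
  w32tm /resync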
89. the disk drives from left to right: 0, 2, 4 on the top row and 1, 3, 5 on the bottom row, when you are looking at the front of the Dell 2950 or NSM 2060.
For the Dell 2950 or NSM 2060, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
Figure 53 Drive bays with bezel off in a Dell 2950 or NSM 2060
Viewing disk status for the NSM 4150
The disks are labeled 0 through 14 in the Disk Setup window and correspond with the disk drives from left to right when you are looking at the front of the NSM 4150.
For the NSM 4150, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
90. the evaluation period. You start the evaluation period for an advanced feature when you configure that feature in the CMC.
Table 70 Descriptions of advanced features
Advanced Feature | Provides This Functionality | And Begins the License Evaluation Period When
Multi-Node Virtualization and Clustering | Clustering multiple storage nodes to create pools of storage | You add 2 or more storage nodes to a cluster in a management group
Remote Copy | Creating secondary volumes and snapshots in remote locations | You create a remote volume in preparation for making a remote snapshot
Managed Snapshot | Creating schedules to snapshot volumes | You create a schedule to snapshot a volume
Multi-Site SAN | Multi-site clusters, which synchronously and automatically mirror data between sites | You create a cluster with multiple sites
Backing out of Remote Copy evaluation
If you decide not to purchase Remote Copy, you must delete any remote volumes and snapshots you have configured. However, you can save the data in the remote snapshots before you delete them.
1. First, back up any volumes you plan to retain.
2. Next, safely back out of the Remote Copy evaluation, as described in Table 71, according to how you want to handle the data.
Table 71 Safely backing out of Remote Copy evaluation
Fate of Data in Remote Snapshots | Steps to Back Out
Removing data from the remote target | • Delete the remote snapshots • Delete t
91. the snapshot to a server as a read/write volume and connect to it with an iSCSI initiator. Mounting a snapshot on a server adds temporary space to the snapshot. See Managing snapshot temporary space on page 253 for more detailed information about temporary space.
Mounting the snapshot on a host
You can add a server to the snapshot when it is created, or add the server later. For information about creating and using servers, see Chapter 17 on page 289.
1. If it is not already added, add the server on which you want to mount the snapshot to the management group.
2. Assign the snapshot to the server and configure the snapshot for read/write access.
3. Configure server access to the snapshot.
4. If you mount an application-managed snapshot as a volume, use diskpart.exe to change the resulting volume's attributes. For more information, see Making an application-managed snapshot available on page 250.
When you have mounted the snapshot on a host, you can do the following:
• Recover individual files or folders and restore them to an alternate location
• Use the data for creating backups
Making an application-managed snapshot available
If you do any of the following using an application-managed snapshot, you must use diskpart.exe to make the resulting volume available:
• Convert temporary space
• Create a SmartClone
• Promote a remote volume to a primary volume
• Failover/Failback Volume wizard, and selecting the Fa
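The diskpart.exe steps are not spelled out at this point in the guide; as a hedged sketch, clearing the attributes that carry over from a snapshot generally looks like the following (the disk and volume numbers are illustrative — use list disk and list volume to find yours):

  diskpart
  rem Identify and select the newly mounted disk, then clear its read-only flag
  list disk
  select disk 2
  attributes disk clear readonly
  rem Select the volume on that disk and clear the attributes inherited from the snapshot
  list volume
  select volume 5
  attributes volume clear readonly
  attributes volume clear hidden
  exit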
92. them to 2-way replication before replacing the disk. If the cluster does not have enough space for the replication, take a backup of the volumes or snapshots and then delete them from the cluster. After the disk replacement is complete, you can recreate the volumes and restore the data from the backup.
• All volumes and snapshots show a status of Normal.
• Any volumes or snapshots that are being deleted have finished deleting.
Use the instructions in the Replacing disks appendix on page 327 if you have more than one disk to replace, or if you are unsure which disk needs replacing.
Best practice checklist for single disk replacement in RAID1/10, RAID5, RAID50, and RAID6
There are no prerequisites for this case; however, HP recommends that:
• All volumes and snapshots show a status of Normal.
• Any volumes or snapshots that were being deleted have completed deletion.
• RAID status is Normal, or, if RAID is Rebuilding or Degraded, for platforms that support hot-swapping of drives, the Safe to Remove column indicates Yes (the drive can safely be replaced).
Replacing a disk in RAID0
1. Complete the checklist for single disk replacement (RAID0).
2. Manually power off the disk in the CMC (for RAID0).
Manually power off the disk in the CMC for RAID0
You first power off, in the CMC, the disk you are replacing, which causes RAID to go off.
1. In the navigation window, select the storage node in which you want to replace the disk.
2. Select the Storage category.
3. Select the Disk Setu
93. to ensure that the applications on the server start only after the iSCSI service starts and the sessions are connected.
HP LeftHand DSM for MPIO settings
If you are using HP LeftHand DSM for MPIO and your server has two NICs, select the Enable multi-path option when logging on to the volume, and log on from each NIC. For more information about HP LeftHand DSM for MPIO, refer to the HP LeftHand P4000 Windows Solution Pack User Manual.
Disk management
You must also format, configure, and label the volumes from the server, using the operating system's disk management tools.
18 Monitoring performance
The Performance Monitor provides performance statistics for iSCSI and storage node I/Os, to help you and HP LeftHand Networks support and engineering staff understand the load that the SAN is servicing.
The Performance Monitor presents real-time performance data in both tabular and graphical form, as an integrated feature in the CMC. The CMC can also log the data for short periods of time (hours or days) to get a longer view of activity. The data is also available via SNMP, so you can integrate it with your current environment or archive the data for capacity planning. See Chapter 7 on page 129.
As a real-time performance monitor, this feature helps you understand the current load on the SAN, to provide additional data to support decisions on issues such as the following:
• Configuration
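As a hedged example of the SNMP integration mentioned above, any standard SNMP tool can poll a storage node; here with the common net-snmp utility (the IP address and the public community string are illustrative — use the community string configured on your storage nodes):

  rem Walk the standard MIB-II system subtree on a storage node
  snmpwalk -v 2c -c public 10.0.60.32 system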
94. to fit the scale. For example, if a statistic value was larger than 100, say 4,000.0, the system would scale it down to 40.0, using a scaling factor of 0.01. If the statistic value is smaller than 10.0, for example 7.5, the system would scale it up to 75, using a scaling factor of 10. The Scale column of the statistics table shows the current scaling factor. If needed, you can change the scaling factor. For more information, see Changing the scaling factor on page 315.
The horizontal axis shows either the local time or Greenwich Mean Time (GMT). The default setting is the local time of the computer that is running the CMC. You can change this default to GMT; see Changing the sample interval and time zone on page 309. This time zone setting is not related to the management group time zone.
For information about controlling the look of the graph, see Changing the graph on page 313.
Performance monitoring table
The Performance Monitor table displays a row for each selected statistic.
Figure 155 Performance Monitor table
The table shows i
95. unavailable to any clients. The data is safe, and you can continue to manage the volumes and snapshots in the CMC. Also, the entire configuration can be restored to availability when a license key is purchased and applied to the storage nodes in the management group that contains the configured advanced features.
NOTE: If you know you are not going to purchase the feature, plan to remove any volumes and snapshots created by using the feature before the end of the 30-day evaluation period.
Tracking the time remaining in the evaluation period
Track the time left on your 30-day evaluation period by using either the management group Registration tab or the reminder notices that open periodically.
Viewing licensing icons
You can check the status of licensing on individual advanced features by the icons displayed. Note that the violation icon displays throughout the 30-day evaluation period.
Figure 160 Icons indicating license status for advanced features
Starting
96. window, 136
viewing and saving, 143
window for viewing, 30
manager IP addresses, updating, 117
application-managed snapshots
  converting temporary space from, 253
  creating, 248
  creating for volume sets, 249
  creating SmartClone volumes from, 260
  defined, 245
  deleting, 261
  making available, 250, 251, 252
  requirements for, 248
  rolling back from, 259
Assigning servers to volumes and snapshots, 292, 293, 294
Authentication Groups and volume lists, 289
auto discover, 29
auto performance protection, 214
  storage server inoperable, 214
  storage server overloaded, 214
  volume availability and, 214
availability of volumes and snapshots, 51, 214
Availability tab, 51
available node pool, 7
available nodes, 32
available storage nodes. See Available Nodes
B
backing out
  of Remote Copy evaluation, 318
  of scripting evaluation, 320
backing up
  management group configuration, description, 184
  management group configuration (binary), 183
  Management Group with Remote Copy Relationship, 184
  storage node configuration file, 46
backup and restore, storage node configuration files, 45
bandwidth, changing local settings, 183
benefits of virtual manager, 201
best practice
  recommended numbers for management group storage items
  configuring cluster for disaster recovery, 202
  configuring volumes with critical data, 225
  frame size, 110
  link aggregation dynamic mode, 93
  NIC bonds, 100
  provisioning storage, 222
  setting replication levels and redun
97. window, select a storage node and log in.
2. Open the tree and select the TCP IP Network category.
3. Select the Routing tab.
4. On the Routing tab, select the optional route you want to change.
5. Click Routing Tasks and select Edit Routing Information.
6. Select a route and click Edit.
7. Change the relevant information.
8. Click OK.
Deleting routing information
You can only delete optional routes you have added.
1. In the navigation window, select a storage node and log in.
2. Open the tree and select the TCP IP Network category.
3. Select the Routing tab.
4. On the Routing tab, select the optional route you want to delete.
5. Click Routing Tasks and select Edit Routing Information.
6. Select the routing information row you want to delete.
7. Click Delete.
8. Click OK on the confirmation message.
Configuring storage node communication
Use the Communication tab to configure the network interface used by the storage node to communicate with other storage nodes on the network, and to update the list of managers that the storage node can communicate with.
Selecting the interface used by the SAN iQ software
The SAN iQ software uses one network interface for communication with other storage nodes on the network. In order for clustering to work correctly, the SAN iQ software communication interface must be designated on each storage node. The interface can be:
• A single NIC that is not part of a bond
• A bonded interface consisting of 2 bonded NICs
98. with 10 SmartClone volumes, 1 clone point, and the source volume. So you edit the volume C class_1 and, on the Advanced tab, you change the cluster to SysAdm. A confirmation message window opens. This message lists all the volumes and snapshots that will have their cluster changed as a result of changing C class_1. In this case, there are 12 volumes and snapshots that will move to cluster SysAdm: the original C volume and the 10 SmartClone volumes, plus the clone point.
WARNING: Migrating volume C class_1 to cluster SysAdm may require that you modify the virtual IP (VIP) and/or the iSCSI initiator IP address information associated with this volume. After migrating, ensure that these settings are correct for your new configuration. Are you sure you want to migrate this volume? Changing the cluster association of volume C class_1 will also change the cluster association of the following volumes and their associated snapshots (if any): C, C class_2, C class_3, C class_4, C class_5, C class_6, C class_7, C class_8, C class_9, C class_10. To continue with the operation, click OK. To exit the operation without making changes, click Cancel.
Figure 122 Changing one SmartClone volume changes all associated volumes and snapshots
When you click OK on the message, the 12 volumes and snapshots move to the cluster SysAdm.
99. with the rest of the management group.
• After a management group is shut down, a subset of storage nodes is powered on. The management group remains in maintenance mode until the remaining storage nodes are powered on and re-discovered in the CMC.
• For some reason, a storage node comes up, but it is not fully functional.
Manually change the management group to normal mode
While the management group is in maintenance mode, volumes and snapshots are unavailable. You may get volumes and snapshots back online if you manually change the status from maintenance mode to normal mode, depending upon how your cluster and volumes are configured. However, manually changing from maintenance mode to normal mode causes the volumes in the management group to run in degraded mode while the group continues resynchronizing, or until all the storage nodes are up or the reason for the problem is corrected.
CAUTION: If you are not certain that manually setting the management group to normal mode will bring your data online, or if it is not imperative to gain access to your data, do not change this setting.
1. In the navigation window, select the management group and log in.
2. Click Management Group Tasks on the Details tab and select Edit Management Group.
(Edit Management Group window: Group Name, Mode, and Local Bandwidth Priority settings, e.g., Group Name: Exchange Group; Mode: Normal Mode; Bandwidth Priority: 4 MB/sec)
100. you change some settings and export data.
Button or Status | Definition
1. Performance Monitor status | Normal: Performance monitoring for the cluster is OK. Warning: The Performance Monitor is having difficulty monitoring one or more storage nodes; click the Warning text for more information (Figure 153 on page 305).
2. Add Statistics | Opens the Add Statistics window.
3. Hide Graph / Show Graph | Toggles the graph display on or off.
4. Resume Monitoring | Restarts monitoring after pausing.
5. Pause Monitoring | Temporarily stops monitoring.
6. Sample Interval | Numeric value for the data update frequency.
7. Sample Interval Units | Unit of measure for the data update frequency, either minutes or seconds.
8. Export status | N/A: No export has been requested. Sample interval and duration: If you have exported data, the sample interval and duration display (for example, "5 secs for 60 secs"). Paused: You paused an export. Stopped: You stopped an export. Warning: System could not export data; click the Warning text for more information. Error: System stopped the export because of a file I/O error; try the export again.
9. Start Export Log / Resume Export Log | Displays a window to set up exporting of data to a comma-separated values (CSV) file. Button chang
101. Figure 13 Parity distributed across RAID6
Drive failure and hot-swapping in RAID6
The DL320s NSM 2120, HP LeftHand P4500, and HP LeftHand P4300 support RAID6 and also support hot-swapping in the event of a drive failure. Hot-swapping means that you can physically remove a failed drive and insert a new one without powering down the unit. In addition to redundancy during normal operation, RAID6 further protects the RAID array against data loss during degraded mode by tolerating up to two drive failures during this vulnerable stage.
Explaining RAID devices in the RAID setup report
In the Storage category, the RAID Setup tab lists the RAID devices in the storage node and provides information about them. An example of the RAID setup report is shown in Figure 14. Information listed in the report is described in Table 9.
Figure 14 RAID10 in a Dell 2950
RAID devices by RAID type
Each RAID type creates different sets of RAID devices. Table 9 contains a description of the variety of RAID devices created by the different RAID types, as impleme
102. … 120
  without NTP, 121
setting up a RAID disk, 73
shared snapshots, 274
shutting down a management group, 185
Management Group, shutting down, 184
single disk replacement in RAID0, 83
Single Host Configuration in iSCSI, 339
sites, defined, 32
size
  changing for volumes, 242
  for snapshots, 227
  planning for snapshots, 246
  volumes, 221
  requirements for volumes, 238
slow I/O, 214
SmartClone Volumes, making application-managed snapshot available after creating, 250, 251, 252
SmartClone volumes
  assigning server access
  characteristics of (shared versus individual), 267
  clone point, 272
  creating from application-managed snapshots, 260
  definition of, 263
  deleting, 284
  deleting multiple, 285
  editing, 284
  examples for using, 264
  glossary for, 264
  overview, 263
  planning, 265
  planning naming convention, 266
  planning space requirements, 265
  requirements for changing, 283
  uses for, 265
  utilization of, 282
  viewing with Map View, 280
SMTP
  setting SMTP for alert notification, 142
  settings for alert notification, 142
  settings for alert notification, one variable, 142
  settings for alert notification, several variables, 143
Snapshots
  assigning to servers, 292, 293
  editing server assignments, 294
snapshots
  adding, 248
  adding schedules for, 255
  and upgrading software, 247
  application-managed, 245
  as opposed to Backups, 226
  changing thresholds, 250
  controlling server access to, 289
  copying a volume from, 250
  cre
103. Figure 101 Replacing the repaired storage node
3. Select the repaired storage node to exchange for the ghost storage node and click OK. The storage node returns to its original position in the cluster, and volumes in the cluster proceed to resync.
Figure 102 Repaired storage node returns to its proper place in the cluster
Deleting a cluster
Volumes and snapshots must be deleted or moved to a different cluster before you can delete the cluster. For more information, see Deleting a volume on page 243 and Deleting a snapshot on page 261.
104. Figure 148 Example showing network utilization of three storage nodes
Load comparison of two clusters example
This example illustrates the total IOPS, throughput, and queue depth of two different clusters, Denver and Boulder, letting you compare the usage of those clusters. You can also monitor one cluster in a separate window while doing other tasks in the CMC.
(Performance Monitor charts and statistics tables for the Denver and Boulder clusters)
105. Figure 12 Parity distributed across a RAID5 set using four disks
Parity allows the storage node to yield more disk capacity for data storage than RAID10 allows.
Parity and storage capacity in RAID5 or 5+spare
Parity in a RAID5 set equals the capacity of one disk in the set. Therefore, the capacity of any RAID5 set is n-1, as illustrated in Table 8.
Table 8 Storage capacity of RAID5 sets in storage nodes
Model | Number of disks in RAID5 set | Storage capacity of disks
NSM 160 | 4 disks, or 3 disks plus a spare | 3 x single disk capacity, or 2 x single disk capacity
NSM 260 | 6 disks x 2 RAID sets, or 5 disks plus a spare x 2 RAID sets | 10 x single disk capacity, or 8 x single disk capacity
DL380 | 6 disks | 5 x single disk capacity
DL320s NSM 2120 | 6 disks x 2 RAID sets | 10 x single disk capacity
IBM x3650 | 6 disks | 5 x single disk capacity
Dell 2950 | 6 disks | 5 x single disk capacity
NSM 2060 | 6 disks | 5 x single disk capacity
HP LeftHand P4300 | 8 disks | 7 x single disk capacity
NSM 4150 | 5 disks | 4 x single disk capacity
HP LeftHand P4500 | 6 disks x 2 RAID sets | 10 x single disk capacity
RAID5 and hot spare disks
RAID5 configurations that use a spare designate the remaining disk of the RAID set as a hot spare. With a hot spare disk, if any one of the disks in the RAID5 set fails, the hot spare disk is automatically added to the set and RAID starts rebuilding.
Table 8 on page 58 lists the RAID5 configurations by model and indicates which configurations supp
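A quick worked example of the n-1 rule, using an illustrative drive size: a six-disk RAID5 set built from 698.12 GB drives dedicates one disk's worth of capacity to parity, so usable capacity is 5 x 698.12 GB ≈ 3,490 GB per set.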
106. … 338
Logging in depends on where the storage node is ......... 342
Identifying ethernet interfaces on the storage node ......... 343
… ......... 347
About this guide
This guide provides information about:
• Configuring, managing, and maintaining the HP LeftHand Storage Solution
This guide encompasses hardware reporting configuration, the volume and snapshot features, and guidance for maintaining the SAN.
Related documentation
You can find more information about HP's LeftHand products from the Manuals page of the HP Business Support Center website: http://www.hp.com/support/manuals. In the Storage section, click Disk Storage Systems > LeftHand SANs, and then select your product.
HP technical support
For worldwide technical support information, see the HP support website: http://www.hp.com/support. Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates. After registering, you will receive e-mail notification of product enhancements, new driver version
107. … 344
  determining if use would improve performance, 301, 361
network interface bonds, 92
  active-passive, 94
  adaptive load balancing, 98
  best practices, 100
  communication after deleting, 105
  configuring, 100
  creating, 101
  deleting, 104
  link aggregation dynamic mode, 97
  physical and logical interface, 94
  requirements, 93
  requirements, adaptive load balancing, 98
  setting flow control and, 111
  settings, 93
  status of, 103
  verifying, 102
  VSA, 89
network interfaces, 97
  attaching Ethernet cables, 90
  bonding, 92
  configuring, 91, 107
  disabling or disconnecting, 106
  establishing, 90
  identifying, 90
  speed and duplex settings, 108
  used for SAN iQ communication, 116
  VSA, 89
Network Time Protocol. See NTP
network window. See navigation window, 3
NIC. See network interfaces, 90
NIC flow control, 111
  enabling, 111
  requirements, 111
  VSA, 89
nodes, finding on network, 29, 39
normal, 46
normal RAID status, 7
not preferred, NTP server, 120
NSM 160
  capacity, RAID5, 58
  in RAID0, 60
  in RAID10, 61
  in RAID5, 63
  mirroring, 61
  RAID levels and default configuration, 55
  RAID rebuild rate, 69
  reconfiguring RAID, 71
NSM 2060
  capacity, RAID5, 58
  disk arrangement in disk setup, 77
  disk status, 77
  RAID levels and default configuration, 55
  RAID rebuild rate, 69
  RAID10 initial setup, 61
  RAID5 initial setup, 63
NSM 260
  capacity, RAID5, 58
  devices, RAID10, 61
  disk status, 75
  locating in rack, 45
  RAID levels and default confi
108. 4, 7, 10 on the top row, and 2, 5, 8, 11 on the second row, and so on, as shown in Figure 47, when you are looking at the front of the DL320s NSM 2120.
For the DL320s NSM 2120, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
Figure 47 Diagram of the drive bays in a DL320s NSM 2120
Viewing disk status for the IBM x3650
For the IBM x3650, the disks are labeled 0 through 5 in the Disk Setup window (Figure 48) and correspond with the disk drives labe
109. … 5
  NSM 4150, 78
  VSA, 77
DL 320s, reserved community strings, 130
DL 380, reserved community strings, 130
DL320s
  capacity, RAID5, 58
  capacity in RAID6, 59
  disk setup, 76
  disk status, 76
  drive failure, 59
  hot swapping, 59
  in RAID10, 61
  parity in RAID6, 59
  RAID rebuild rate, 69
  RAID5 set, 63
  RAID6, 66
DL380
  capacity, RAID5, 58
  disk arrangement in disk setup, 75
  disk status, 75
  RAID levels and default configuration, 55
  RAID rebuild rate, 69
  RAID0, 60
  RAID10, 61
DNS
  adding domain name, 112
  and DHCP, 112
  and static IP addresses, 112
DNS server
  adding, 112
  and static IP addresses, 112
  editing IP or domain name, 113
  removing, 114
  using, 112
documentation
  HP website, 27
  providing feedback, 28
Domain Name Server. See DNS Server
domain names
  adding to DNS suffixes, 113
  editing in DNS suffixes list, 113
  removing from DNS suffixes list, 114
downloading variable log file, 143, 144
DSM, when using two NICs, 295
DSM for MPIO, 335
  how to see if using, 340
  tips for using to access volumes from servers, 340
Dynamic Host Configuration Protocol. See DHCP
E
Editing servers, 29
editing
  clusters, 211
  DNS server domain names, 113
  DNS server IP addresses, 113
  domain name in DNS suffixes list, 113
  frame size, 110
  group name, 26
  management groups, 82
  monitored variables, 137
  network interface: frame size, 109; speed and duplex, 108
  NTP server, 120
  routes, 115
  SmartClone volumes, 284
  snapshot schedules, 255
  snapsho
110. … 51
Viewing the storage configuration category for a storage node ......... 56
Example of the capacity of disk pairs in RAID10 ......... 57
Parity distributed across a RAID5 set using four disks ......... 58
Parity distributed across RAID6 ......... 59
RAID0 in a Dell 2950 ......... 60
RAID0 in an NSM 160 ......... 60
RAID0 on an NSM 260 ......... 61
RAID0 on a DL380 ......... 61
RAID0 on an IBM x3650 ......... 61
RAID10 on an NSM 160 (mirroring done at hardware level) ......... 61
RAID10 on an NSM 260 ......... 61
RAID10 on a DL380 ......... 62
RAID10 on the DL320s NSM 2120 and the HP LeftHand P4500 ......... 62
RAID1+0 in the HP LeftHand P4300 ......... 62
Initial RAID10 setup of the Dell 2950 and NSM 2060 ......... 62
Initial RAID10 setup of the NSM 4150 ......... 63
111. 127 Setting characteristics for SmartClone volumes ......... 276
128 Creating multiple SmartClone volumes ......... 278
129 Creating multiple SmartClone volumes ......... 278
130 New SmartClone volumes in Navigation Window ......... 279
131 Viewing SmartClone volumes and snapshots as a tree in the Map View ......... 280
132 Viewing the organic layout of SmartClone volumes and associated snapshots in the Map View ......... 280
133 Toolbar with display tools in the Map View window ......... 281
134 Using the Magnify tool with Map View tree ......... 281
135 Highlighting all related clone points in navigation window ......... 282
136 Clone point Details tab showing utilization graph ......... 283
137 SmartClone volume Details tab showing utilization graph ......... 283
138 Viewing volumes that depend on a clone point ......... 285
139 List of SmartClone volumes in cluster ......... 285
140 Server assignments in the navigat
112. Viewing disk status for the HP LeftHand P4500 ......... 79
Viewing disk status for the HP LeftHand P4300 ......... 80
… ......... 80
In the VSA ......... 80
Overview of replacing a disk ......... 81
Replacing disks in non-hot-swap platforms (IBM x3650) ......... 81
Replacing disks in RAID0 configurations ......... 82
Preparing for a disk replacement ......... 82
To prepare for disk replacement ......... 82
Identify physical location of storage node and disk ......... 82
Best practice checklist for single disk replacement in RAID0 ......... 82
Best practice checklist for single disk replacement in RAID1/10, RAID5, RAID50, and RAID6 ......... 83
Replacing a disk in RAID0 ......... 83
Manually power off the disk in the CMC
113. Feature | Active-passive | Adaptive load balancing | Link aggregation dynamic mode
Supported storage nodes (identical for all three bond types) | NSM 160, NSM 260, DL 380, DL 320s NSM 2120, IBM x3650, Dell 2950, NSM 2060, NSM 4150, HP LeftHand P4500, HP LeftHand P4300
Allocate a static IP address for the logical bond interface (bond0). You cannot use DHCP for the bond IP.
How active-passive works
Bonding NICs for active-passive allows you to specify a preferred interface that will be used for data transfer. This is the active interface. The other interface acts as a backup, and its status is Passive (Ready).
Physical and logical interfaces
The two NICs in the storage node are labeled as listed in Table 15. If both interfaces are bonded for failover, the logical interface is labeled bond0 and acts as the master interface. As the master interface, bond0 controls and monitors the two slave interfaces, which are the physical interfaces.
Table 15 Bonded network interfaces
Failover name | Failover description
bond0 | Logical interface acting as master
eth0 or Motherboard Port1 | Physical interface acting as slave
eth1 or Motherboard Port2 | Physical interface acting as slave
Slot Port0 (NSM 260) | Physical interface in a PCI slot. This
114. Figure 32 Initial RAID5 setup of the Dell 2950 and NSM 2060
NOTE: The initial disk setup shown above for the Dell 2950 and NSM 2060 may change over time if you have to replace disks.
RAID5 in the NSM 260
RAID5 in the NSM 260 consists of either six-disk sets (shown in Figure 33) or sets of five disks plus a spare, so that the single disk acts as a hot spare for the RAID set (shown in Figure 34). RAID50 in the NSM 4150 consists of three RAID5 sets using all 15 disks (shown in Figure 35).
Figure 33 NSM 260 RAID5 using six-disk sets
Figure 34 NSM 260 RAID5 using five disks plus a hot spare
Figure 35 Initial RAID50 setup of the NSM 4150
NOTE: The initial disk setup shown above for the NSM 4150 may change over time if you have to replace disks.
RAID6 in the DL320s NSM 2120, HP Left
115. AN iQ HP LeftHand DSM for MPIO, you can use HP LeftHand DSM for MPIO to access volumes. For more information about HP LeftHand DSM for MPIO, refer to the HP LeftHand P4000 Windows Solution Pack User Manual.
You can see if you are using HP LeftHand DSM for MPIO in the CMC by selecting a server in a management group, then clicking the Volumes and Snapshots tab. The Gateway Connection column shows multiple connections, labeled as DSM.
When accessing volumes from a server using HP LeftHand DSM for MPIO, keep in mind the following:
• SAN iQ HP LeftHand DSM for MPIO and the Microsoft MPIO must be installed on the server.
• With the above installed, servers automatically use HP LeftHand DSM for MPIO when you log on to volumes from the iSCSI initiator.
• If you have dual storage NICs in your server, you can select the Enable multi-path option when logging on to the volume, and log on from each NIC.
23 Using the Configuration Interface
The Configuration Interface is the command-line interface that uses a direct connection with the storage node. You may need to access the Configuration Interface if all network connections to the storage node are disabled. Use the Configuration Interface to perform the following tasks:
• Add storage node administrators and change passwords
• Access and configure network interfaces
• Delete a NIC bond
• Set the TCP speed and duplex, or edit the frame size
• Remove th
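To confirm from the Windows side that the multiple paths are actually established, the standard Microsoft tools can be used (a hedged sketch, not a procedure from this guide):

  rem List the iSCSI sessions currently established by the Microsoft initiator
  iscsicli SessionList
  rem On Windows Server 2008 and later, list disks claimed by MPIO and their paths
  mpclaim -s -d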
116. Glossary
CLI: Command-line interface for the SAN iQ software.
Cluster: A cluster is a grouping of storage nodes that create the storage pool from which you create volumes.
CMC: Centralized Management Console. See HP LeftHand Centralized Management Console.
Communication: The unicast communication among storage nodes and application servers.
Community String: The community string acts as an authentication password. It identifies hosts that are allowed read-only access to the SNMP data.
Configuration Summary: The Configuration Summary displays an overview of the volumes, snapshots, storage nodes, and iSCSI sessions in the HP LeftHand Storage Solution. It provides an overview of the storage network, broken out by management groups.
Date and Time: Set the date and time on the storage node if not using an external time service such as NTP.
Diagnostics: Diagnostics check the health of the storage node hardware.
Disk status: Whether the disk is: Active (on and participating in RAID); Uninitialized or Inactive (on but not participating in RAID); Off or Missing (not on); or DMA Off (disk unavailable due to faulty hardware or improperly seated in the chassis).
Frame size: The frame size specifies the size of data packets that are transferred over the network.
Full provisioning: Full provisioning reserves the same amount of space on the SAN a
Ghost NSM, Graphical legend, Hardware reports, Hostname, ID LED, iSCSI Load Balancing: …
117. D. See Replacing a disk on page 80.
6. Return the repaired storage node to the management group. The ghost storage node remains in the cluster.
NOTE: The repaired storage node will be returned to the cluster in the same place it originally occupied, to ensure that the cluster resyncs rather than restripes. See Chapter 24 on page 347 for definitions of restripe and resync.
7. (Optional) Start the manager on the repaired storage node.
To return the repaired storage node to the cluster:
1. Right-click the cluster and select Edit Cluster.
Figure 100 Exchanging the ghost storage node
2. Select the ghost storage node (the IP address in the list) and click Exchange Node.
118. Figure 118 Duplicate names on duplicate datastores in ESX Server
Server access
Plan the servers you intend to assign to the SmartClone volumes. Configure the servers before creating the volumes, and you can then assign the servers when you create the volumes. See Chapter 17 on page 289.
Defining SmartClone volume characteristics
When creating SmartClone volumes, you define the following characteristics.
Table 59 Characteristics for new SmartClone volumes
SmartClone volume characteristic | What it means
Quantity | The number of SmartClone volumes you want to create. You can create up to 25 as one operation in the CMC, and then repeat the process to create the desired number of SmartClone volumes. Use the CLI to create larger quantities of SmartClone volumes in a single operation.
SmartClone Name | The name of the SmartClone volume that is displayed in the CMC. A volume name is from 1 to 127 characters and is case sensitive. The name cannot be changed after the vo
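The duplicate-signature situation in Figure 118 is normally resolved on the ESX side. As a hedged note reflecting general ESX 3.x practice of the era (not a procedure from this guide), the standard VMware resignature option can be enabled long enough to mount the clones as distinct datastores; the adapter name is illustrative:

  # On the ESX 3.x service console: enable resignaturing, rescan, then turn it back off
  esxcfg-advcfg -s 1 /LVM/EnableResignature
  esxcfg-rescan vmhba32
  esxcfg-advcfg -s 0 /LVM/EnableResignature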
119. Figure 43 Diagram of the drive bays in the NSM 260
Viewing disk status for the DL380
For the DL380, the disks are labeled 0 through 5 in the Disk Setup window (shown in Figure 44) and correspond with the disk drives from left to right: 5, 3, 1 on the top and 4, 2, 0 on the bottom (as shown in Figure 45), when you are looking at the front of the DL380.
For the DL380, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
Figure 44 Viewing the Disk Setup tab in a DL380
Figure 45 Arrangement of drives in the DL380
Viewing disk status for the DL320s NSM 2120
The disks are labeled 1 through 12 in the Disk Setup window (shown in Figure 46) and correspond with the disk drives from left to right: 1,
120. HP LeftHand Storage Solutions user guide
Part number: ATO04-96049
First edition: September 2009
Legal and notice information
Copyright © 2009 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website: http://www.hp.com/go/storagewarranty
Microsoft, Windows, Windows XP, and Windows NT are U.S. registered trademarks of Microsoft Corporation.
Contents
About this guide ......... 27
Related documentation ......... 27
HP technical support ......... 27
Subscription service .........
121. Hand P4500
For RAID6, the physical disks are grouped into sets. RAID6 uses two sets of disks in the DL320s NSM 2120 and the HP LeftHand P4500.
Figure 36 DL320s NSM 2120 and HP LeftHand P4500 RAID6 using two six-disk sets
RAID6 in the HP LeftHand P4300
RAID6 in the HP LeftHand P4300 is striped with parity into a single array.
Figure 37 RAID6 in the P4300
Planning the RAID configuration
The RAID configuration you choose for the storage node depends on your plans for data fault tolerance, data availability, and capacity growth. If you plan to expand your network of storage nodes and create clusters, choose your RAID configuration carefully.
CAUTION: After you have configured RAID, you cannot change the RAID configuration without deleting all data on the storage node.
Data replication
Keeping multiple copies of your data can ensure that data will be safe and will remain available in the case of disk failure. There are two ways to achieve data replication:
• Configure RAID1/10, 5, 5+spare, 50, or 6 within each storage node to ensure data redundancy.
• Always replicate data volumes across clusters of storage nodes, regardless of RAID level, for added data protection and high availability.
Using RAID for data re
122. Hand Storage Solution volume.
Computer training lab
You run a computer lab for a technical training company. You routinely set up training environments for classes in programming languages, database development, web design, and other applications. The classes are anywhere from 2 days to 1 week long, and your lab can accommodate 75 students.
On your HP LeftHand Storage Solution, you maintain master desktop images for each class offering. These desktop images include all the software applications the students need for each class, in the default configuration required for the start of the class.
To prepare for an upcoming class with 50 students, you clone the 50 student desktops from the master image, without consuming additional space on the SAN. You configure the iSCSI connections, and the students are ready to start working. During the class, the only additional data added to the SAN is the trainees' class work. When the class is finished, you can roll back all 50 SmartClone volumes to the clone point and recreate the desktops.
Safely use production data for test, development, and data mining
Use SmartClone volumes to safely work with your production environment in a test and development environment, before going live with new applications or upgrades to current applications. Or, clone copies of your production data for data mining and analysis.
Test and development
Using the SmartClone process, you can instantly clo
123. Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
Figure 58 Diagram of the drive bays in a HP LeftHand P4500
Viewing disk status for the HP LeftHand P4300
The disks are labeled 1 through 8 in the Disk Setup window (Figure 59) and correspond with the disk drives from left to right: 1, 3, 5, and 7 on the top row, and 2, 4, 6, and 8 on the second row (shown in Figure 60), when you are looking at the front of the HP LeftHand P4300.
For the HP LeftHand P4300, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and wh
124. ID with replication in a cluster
Always use replication in a cluster to replicate volumes across storage nodes. The redundancy provided by RAID10, 5, 50, or 6 ensures availability at the storage node level. Replication of volumes in a cluster ensures availability at the cluster level. For example:
• Using replication, up to three copies of a volume can be created on a cluster of three storage nodes. The replicated configuration ensures that two of the three storage nodes can go offline and the volume will still be accessible.
• Configuring RAID10 on these storage nodes means that each of these three copies of the volume is stored on two disks within the storage node, for a total of six copies of each volume. For a 50 GB volume, 300 GB of disk capacity is used.
RAID5/50 uses less disk capacity than RAID1/10, so it can be combined with replication and still use capacity efficiently. One benefit of configuring RAID5/50 in storage nodes that use replication in a cluster is that if a single disk goes down, the data on that storage node can be rebuilt using RAID, instead of requiring a complete copy from another storage node in the cluster. Rebuilding the disks within a single set is faster, and creates less of a performance hit to applications accessing data, than copying data from another storage node in the cluster.
RAID6 provides similar space benefits to RAID5, with the additional protection of being able to survive the loss of up to two drives.
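Extending the capacity arithmetic above with an illustrative calculation: with 3-way replication on storage nodes configured as six-disk RAID5 sets, each copy carries a 6/5 parity overhead, so a 50 GB volume consumes roughly 3 x 50 GB x 6/5 = 180 GB of raw disk across the cluster, compared with 300 GB in the RAID10 example.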
125. In a degraded RAID50 configuration, loss of a second disk in a single RAID5 set will result in data loss. In a degraded RAID6 configuration, the loss of three drives results in data loss.
The RAID status is located at the top of the RAID Setup tab in Storage. RAID status also displays in the Details tab on the main CMC window when a storage node is selected in the navigation window.
Figure 38 Monitoring RAID status on the main CMC window
The status displays one of four RAID states:
• Normal: RAID is synchronized and r
Last refreshed: Date and time the report was refreshed.

Hostname: Hostname of the storage node.

Storage node software: Full version number for the storage node software. Also lists any patches that have been applied to the storage node.

IP address: IP address of the storage node.

Support key: The support key is used by a Technical Support representative to log in to the storage node.

NIC data: Information about NICs in the storage node: card number, manufacturer, description, MAC address, IP address, mask, gateway, and mode. Mode shows manual, auto, or disabled. Manual equals a static IP, auto equals DHCP, and disabled means the interface is disabled.

DNS data: Information about DNS, if a DNS server is being used, providing the IP address of the DNS servers.

Memory: Information about RAM in the storage node, including values for total memory and free memory in GB.

CPU: This section of the report contains details about the CPU, including the CPU model, name or manufacturer of the CPU, clock speed of the CPU, and cache size.

CPU seconds: Information about the CPU. CPU seconds shows the number of CPU secon
Figure 81 Virtual manager added to a management group

Creating a management group and default managers

When creating a management group, the wizard creates an optimal manager configuration for the number of storage nodes used to create the group. See Table 39 for the default manager configurations. After you have finished creating the management group, be certain to reconfigure managers as necessary to optimize your particular SAN configuration.

Table 39 Default number of managers added when a management group is created

Number of storage nodes    Manager configuration
1                          1 manager
2                          2 managers and a Virtual Manager
3 or more                  3 managers
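These defaults follow from how managers maintain quorum: a strict majority of the configured managers must be running, which is why odd totals, or an even number plus a virtual manager, are preferred. The arithmetic can be sketched as follows; this is an illustration of the majority rule, not a SAN iQ utility:

    @echo off
    rem Majority rule behind the default manager counts (illustrative sketch).
    set /a MANAGERS=3
    set /a QUORUM=MANAGERS/2+1
    echo With %MANAGERS% managers, %QUORUM% must be running to maintain quorum

With three managers, any single storage node can go offline and the remaining two still form a quorum.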
28 RAID5 set in the DL380 .......... 64
29 RAID5 set in the DL320s, NSM 2120, and the HP LeftHand P4500 .......... 64
30 RAID5 set in the HP LeftHand P4300 .......... 64
31 RAID5 set in an IBM x3650 .......... 64
32 Initial RAID5 setup of the Dell 2950 and NSM 2060 .......... 65
33 NSM 260 RAID5 using six-disk sets .......... 65
34 NSM 260 RAID5 using five disks plus a hot spare .......... 65
35 Initial RAID50 setup of the NSM 4150 .......... 66
36 DL320s, NSM 2120, and HP LeftHand P4500 RAID6 using two six-disk sets .......... 66
37 RAID5 in P4500 .......... 67
38 Monitoring RAID status on the main CMC window .......... 72
39 Example of columns in the Disk Setup tab .......... 73
40 Viewing the Disk Setup tab in an NSM 160 .......... 74
41 Diagram of the drive bays in the NSM 160 .......... 75
42 Viewing the Disk Setup tab in an NSM 260
• The VIP must be routable regardless of which storage node it is assigned to.
• iSCSI servers must be able to ping the VIP when it is enabled in a cluster.
• The VIP address must be different from any storage node IP address on the network.
• The VIP address must be a static IP address reserved for this purpose.
• All iSCSI initiators must be configured to connect to the VIP address for iSCSI failover to work properly.

iSNS server

An iSNS server simplifies the discovery of iSCSI targets on multiple clusters on a network. If you use an iSNS server, configure your cluster to register the iSCSI target with the iSNS server. You can use up to three iSNS servers, but none are required.

iSCSI load balancing

Use iSCSI load balancing to improve iSCSI performance and scalability by distributing iSCSI sessions for different volumes evenly across storage nodes in a cluster. iSCSI load balancing uses iSCSI Login-Redirect; only initiators that support Login-Redirect should be used.

When using a VIP and load balancing, one iSCSI session acts as the gateway session, and all I/O goes through this iSCSI session. You can determine which iSCSI session is the gateway by selecting the cluster and then clicking the iSCSI Sessions tab. The Gateway Connection column displays the IP address of the storage node hosting the load-balancing iSCSI session.

Configure iSCSI load balancing when setting up servers. See Chapter 17 on page 289 for information about configuring servers.
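Before configuring volumes, it can help to confirm the first two requirements from an application server. The commands below are a sketch for a Windows host: the VIP address 10.0.60.50 is invented for illustration, and iscsicli assumes the Microsoft iSCSI initiator is installed:

    rem Confirm the VIP answers on the network.
    ping -n 2 10.0.60.50

    rem Register the VIP (not an individual storage node IP) as the target
    rem portal, so Login-Redirect and failover behave as described above.
    iscsicli QAddTargetPortal 10.0.60.50
    iscsicli ListTargets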
the DHCP/BOOTP protocol.
4. Click OK.
5. Click OK on the confirmation message.
6. Click OK on the message notifying you of the automatic logout.

NOTE: Wait a few moments for the IP address change to take effect.

Configuring network interface bonds

Network interface bonding provides high availability, fault tolerance, load balancing, and/or bandwidth aggregation for the network interface cards in the storage node. Bonds are created by bonding physical NICs into a single logical interface. This logical interface acts as the master interface, controlling and monitoring the physical slave interfaces.

Bonding two interfaces for failover provides fault tolerance at the local hardware level for network communication. Failures of NICs, Ethernet cables, individual switch ports, and/or entire switches can be tolerated while maintaining data availability. Bonding two interfaces for aggregation provides bandwidth aggregation and localized fault tolerance. Bonding the interfaces for load balancing provides both load balancing and localized fault tolerance.

NOTE: The VSA does not support NIC bonding.

Depending on your storage node hardware, network infrastructure design, and Ethernet switch capabilities, you can bond NICs in one of three ways:

• Active-Passive. You specify a preferred NIC for the bonded logical interface to use. If the preferred NIC fails, the logical interface begins using another NIC in the bond until the preferred NIC resumes operation.
Figure 144 Example showing fault isolation

What can I learn about my volumes?

If you have questions such as these about your volumes, the Performance Monitor can help:

• Which volumes are accessed the most?
• What is the load being generated on a specific volume?

The Performance Monitor can let you see the following:

• The most active volumes
• Activity generated by a specific server

Most active volumes examples

This example shows two volumes, DB1 and Log1, and compares their total IOPS. You can see that Log1 averages about two times the IOPS of DB1. This might be helpful if you want to know which volume is busier.

Figure 145 Example showing IOPS of two volumes

This example shows
Provisioning snapshots .......... 226
Snapshots versus backups .......... 226
The effect of snapshots on cluster space .......... 226
Managing capacity using volume size and snapshots .......... 226
How snapshots are created .......... 226
Ongoing capacity management .......... 227
Number of volumes and snapshots .......... 227
Reviewing SAN capacity and usage .......... 227
Cluster use summary .......... 228
Volume use summary .......... 230
Node use summary .......... 232
Measuring disk capacity and volume size .......... 233
Block systems and file systems .......... 233
Storing file system data on a block system .......... 234
Changing the volume size on the server
• Rebuild: RAID is currently rebuilding. No action is required.
• Degraded: RAID is not functioning properly. Either a disk needs to be replaced or a replacement disk has been inserted in a drive.
• Off: Data cannot be stored on the storage node. The storage node is offline and flashes red in the network window.

Register: Register individual storage nodes to use add-on applications. Registration requires sending in the storage node serial numbers to purchase the license keys, which are then applied to the storage node.

Repair storage node: Creates a placeholder in the cluster, in the form of a ghost storage node, that keeps the cluster intact while you remove the storage node to replace a disk, or replace the storage node itself and return it to the cluster.

Replication level: Designates how many copies of data you want to keep in the cluster.

Replication priority: Designates whether data availability or redundancy is more important in your configuration.

Striping: Striped data is stored across all disks in the cluster.

Restriping: You might change the configuration of a volume; for example, change the replication level, add a storage node, or remove a storage node. Because of your change, the pages in the volume must be reorganized across the new configuration. The system can keep track of several configuration changes at once. This means you can change configurations even while a volume is in the midst of a different reconfiguration. In particular, if a reconfiguration was done by accident, you don't have to wait until it finishes
When the RAID status on the RAID Setup tab shows Normal, the disks provide fully operational data redundancy. The storage node is ready for data transfer at this point.

Monitoring RAID status

RAID is critical to the operation of the storage node. If RAID has not been configured, the storage node cannot be used. Monitor the RAID status of a storage node to ensure that it remains Normal. If the RAID status changes, a CMC alert is generated. You can configure additional alerts to go to an email address or to an SNMP trap. See Using alerts for active monitoring on page 135 for instructions to set these additional alerts.

Data transfer and RAID status

A RAID status of Normal, Rebuild, or Degraded allows data transfer. The only time data cannot be transferred to or from the storage node is if the RAID status shows Off.

Data redundancy and RAID status

In a RAID1/10 or RAID5/50 configuration, when RAID is degraded, there is no full data redundancy. Therefore, data is at risk if there is a disk failure while RAID is degraded. In RAID6, when RAID is degraded due to a single drive failure, the data is still not at risk from a second failure. However, if RAID6 is degraded due to the failure of two drives, then data would be at risk if another drive failed.

CAUTION: In a degraded RAID1/10 configuration, loss of a second disk within a pair will result in data loss. In a degraded RAID5 configuration, loss of a second disk will result in data loss.
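If you route these alerts to an SNMP trap receiver, you can first verify from a monitoring host that the storage node's SNMP agent is responding. This sketch uses the Net-SNMP command-line tools, which are not part of the CMC; the address 10.0.1.10 and the community string public are assumptions for illustration:

    rem Query the standard sysDescr.0 OID to confirm the agent answers.
    snmpget -v 2c -c public 10.0.1.10 1.3.6.1.2.1.1.1.0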
Figure 131 Viewing SmartClone volumes and snapshots as a tree in the Map View

Using views

The default view is the tree layout displayed in Figure 131. The tree layout is the most effective view for smaller, more complex hierarchies with multiple clone points, such as clones of clones or shared snapshots.

You may also display the Map View in the organic layout. The organic layout is more useful when you have a single clone point with many volumes, such as the large numbers in a virtual desktop implementation. In such a case, the tree quickly becomes difficult to view, and it is much easier to distinguish the multiple volumes in the organic layout.
In the Next Occurrence column of the Schedules tab window, this snapshot schedule is marked as paused.

6. Make a note to resume this snapshot schedule at a convenient time.

Resume a schedule

1. In the navigation window, select the volume for which you want to resume the snapshot schedule.
2. Click the Schedules tab to bring it to the front.
3. Select the schedule you want.
4. Click Schedule Tasks on the Details tab and select Resume Snapshot Schedule.
5. In the Confirm window, click OK.

In the Next Occurrence column of the tab window, this snapshot schedule shows the date and time the next snapshot will be created.

Deleting schedules to snapshot a volume

NOTE: After you delete a snapshot schedule, if you want to delete snapshots created by that schedule, you must do so manually.

1. In the navigation window, select the volume for which you want to delete the snapshot schedule.
2. Click the Schedules tab to bring it to the front.
3. Select the schedule you want to delete.
4. Click Schedule Tasks on the Details tab and select Delete Schedule.
5. To confirm the deletion, click OK. The Schedules tab refreshes without the deleted snapshot schedule.
6. (Optional) To delete snapshots related to that schedule, select the Volumes and Snapshots node, where you can delete multiple snapshots from a list.
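If a schedule created a long series of snapshots, deleting the leftovers one at a time in the CMC can be tedious, and the same cleanup can be scripted. The following is a sketch that assumes the SAN iQ command-line interface (CLIQ) is installed; the snapshot name, address, and credentials are placeholders, and you should verify the exact syntax against the CLIQ documentation for your release:

    rem Delete one snapshot left behind by a deleted schedule (sketch).
    cliq deleteSnapshot snapshotName=Backup_3 login=10.0.60.10 userName=admin passWord=secret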
Saving log files on page 167. To view and save log files to a remote computer, see Using hardware information log files on page 167.

Running diagnostic reports

Use diagnostics to check the health of the storage node hardware. Different storage nodes offer different sets of diagnostic tests.

NOTE: Running diagnostics can help you to monitor the health of the storage node or to troubleshoot hardware problems.

Getting there

1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Hardware.
3. Select from the list the diagnostic tests that you want to run. The default setting is to run all tests. If all boxes are not checked, click Diagnostic Tasks and select Check All. Clear any tests that you do not want to run. To clear all selections, click Clear All.

NOTE: Running all of the diagnostic tests will take several minutes. To shorten the time required to run tests, clear the checkboxes for any tests that you do not need.

4. Click Diagnostic Tasks and select Run Tests. A progress message displays. When the tests complete, the results of each test display in the Result column.
5. (Optional) When the tests complete, if you want to view a report of test results, click Save to File. Then select a location for the diagnostic report file and click Save. The diagnostic report is saved as a .txt file in the designated location.

View
SmartClone volumes and their snapshots depend on the clone point, so it cannot be deleted until it is no longer a clone point. A clone point ceases to be a clone point when only one SmartClone volume remains that was created from it. That is, you can delete all but one of the SmartClone volumes, and then you can delete the clone point.

1. Original volume
2. Clone point
3. SmartClone volume

Figure 124 Navigation window with clone point

In Figure 124, the original volume is C.

• Creating a SmartClone volume of C first creates a snapshot, C_SCsnap.
• After the snapshot is created, you create at least one SmartClone volume, C_class_1.

Table 61 How it works: clone point

First, a volume: C
Next, a snapshot: C_SCsnap
Next, a SmartClone volume from the snapshot: C_class_1
The snapshot becomes a clone point: C_SCsnap

Because the SmartClone volumes depend on the clone point from which they were created, the clone point appears underneath each SmartClone volume in the navigation window. While the
1. Selected clone point
2. Clone point repeated under SmartClone volumes

Figure 135 Highlighting all related clone points in the navigation window

Utilization of clone points and SmartClone volumes

Multiple SmartClone volumes share data from the clone point without requiring that data be duplicated for each SmartClone volume. On the Details tab of the clone point and of each SmartClone volume, there is a Utilization graph. Compare the Utilization graph for the clone point and then for the SmartClone volumes.
Figure 75 Verifying the interface used for SAN iQ communication

Verify that the SAN iQ communication port is correct.

Disabling a network interface

You can disable the network interfaces on the storage node. You can only disable top-level interfaces; this includes bonded interfaces and NICs that are not part of bonded interfaces. To ensure that you always have access to the storage node, do not disable the last interface. If you want to disable the last interface, first enable another interface.

CAUTION: If you disable an interface, be sure you enable another interface first. That way you always have access to the storage node. If you disable all the interfaces, you must reconfigure at least one interface using the Configuration Interface to access the storage node. See Configuring a network connection on page 343.

Disabling a network interface

1. Log in to the storage node and open the tree.
2. Select the TCP/IP Network category.
3. Select from the list on the TCP/IP tab window the interface to disable.
4. Click TCP/IP Tasks and select Edit.
5. Click Disable Interface.
6. Click OK. A confirmation message opens. If you are disabling the only interface, the message warns that
Monitoring RAID status .......... 71
Data transfer and RAID status .......... 71
Data redundancy and RAID status .......... 71
Managing disks .......... 72
Getting there .......... 73
Reading the disk report on the Disk Setup tab .......... 73
Verifying disk status .......... 74
Viewing disk status for the NSM 160 .......... 74
Viewing disk status for the NSM 260 .......... 75
Viewing disk status for the DL380 .......... 75
Viewing disk status for the DL320s NSM 2120 .......... 76
Viewing disk status for the IBM x3650 .......... 77
Viewing disk status for the VSA .......... 77
Viewing disk status for the Dell 2950 and NSM 2060 .......... 77
Viewing disk status for the NSM 4150
Table 55 Common applications' daily change rates

Application          Daily change rate
Fileshare            1-3%
Email (Exchange)     10-20%
Database             10%

NOTE: When considering the size of snapshots in the cluster, remember that the replication level of the volume is duplicated in the snapshot.

Source volumes for tape backups

Best practice: Plan to use a single snapshot and delete it when you are finished. Consider the following question in your planning:

• Is space available on the cluster to create the snapshot?

Data preservation before upgrading software

Best practice: Plan to use a single snapshot and delete it when you are finished. Consider the following question in your planning:

• Is space available on the cluster to create the snapshot?

Automated backups

Best practice: Plan to use a series of snapshots, deleting the oldest on a scheduled basis. Consider the following questions in your planning:

• Is space available on the cluster to create the snapshots?
• What is the optimum schedule and retention policy for this schedule to snapshot a volume?

See Planning snapshots on page 246 for the average daily change rates for some common applications. For example, if you are using these backups as part of a disaster recovery plan, you might schedule a daily snapshot of the volume and retain 7 copies. A second schedule would run weekly and retain 5 copies.
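Those change rates translate directly into the space a retention policy consumes. The sketch below estimates it for the daily schedule in the example above; the volume size (500 GB) and 2-way replication are assumptions chosen for illustration, not values from this guide:

    @echo off
    rem Rough snapshot space estimate: size x daily change x retained copies,
    rem doubled for 2-way replication (illustrative sketch only).
    set /a VOLUME_GB=500
    set /a CHANGE_PCT=10
    set /a DAILY_RETAINED=7
    set /a REPLICATION=2
    set /a SNAP_GB=VOLUME_GB*CHANGE_PCT/100*DAILY_RETAINED*REPLICATION
    echo Approximate space for the daily schedule: %SNAP_GB% GB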
Changing the virtual IP address

1. In the Edit Cluster window, click the iSCSI tab to bring it to the front.
2. Select the VIP you want to change.
3. Change the information in the Edit VIP and Subnet Mask window.
4. Click OK to return to the Edit Cluster window.

Removing the virtual IP address

You can only remove a VIP if there is more than one VIP assigned to the cluster.

1. In the Edit Cluster window, click the iSCSI tab.
2. Select the VIP and click Delete. A confirmation message opens.
3. Click OK to confirm the deletion.

Finishing up

1. Click OK when you are finished changing or removing the VIP.
2. Reconfigure the iSCSI initiator with the changes.
3. Reconnect to the volumes.
4. Restart the applications that use the volumes.

Changing or removing an iSNS server

If you change the IP address of an iSNS server, or remove the server, you may need to change the configuration that clients are using. Therefore, you may need to disconnect any clients before making this change.

Preparing clients

• Quiesce any applications that are accessing volumes in the cluster.
• Log off the active sessions in the iSCSI initiator for those volumes.

Editing an iSNS server

1. Select the iSNS server to change.
2. Click Edit. The Edit iSNS Server window opens.
3. Change the IP address.
4. Click OK.

Deleting an iSNS server

1. Select the iSNS server to delete.
2. Click Delete. A confirmation message opens.
3. Click OK.

Finishing up
Available Nodes pool, use the Find menu option to relocate it.

Replace the disk

In the NSM 160 or the NSM 260

When these platforms are configured in RAID0, the menu choices for powering on and powering off disks are enabled.

1. Reconfigure the storage node for RAID0 if it is not already in RAID0.
2. In the CMC, power off the disk; you may power off up to 3 disks, one at a time. See the procedures for powering off a disk in RAID0 in Manually power off the disk in the CMC for RAID0 on page 83.
3. Physically replace the disks in the storage node.
4. In the CMC, power each disk back on, one at a time. After the last disk is powered on, RAID becomes Normal.

In the DL380 or the IBM x3650

CAUTION: You must always use a new drive when replacing a disk in an IBM x3650. Never reinsert the same drive and power it on again.

1. Reconfigure the storage node for RAID0 if it is not already in RAID0.
2. Power off 1 disk. See the procedures for powering off a disk in RAID0 in Manually power off the disk in the CMC for RAID0 on page 83.
3. Physically replace the disk in the storage node.
4. Power on the disk.
5. Repeat Step 2 through Step 4 until all necessary disks are replaced.

In the DL320s (NSM 2120), Dell 2950, NSM 2060, NSM 4150, HP LeftHand P4500, and HP LeftHand P4300

For the DL320s (NSM 2120), Dell 2950, NSM 2060, NSM 4150, and HP LeftHand P4500, use the disk replacement procedures in Replacing a disk
space in the cluster:

• Fully provisioned volumes: this is the same as provisioned space, described above.
• Thin provisioned volumes: this total reflects the size of the volume times the replication level.
• Snapshots: this value is the same as snapshot provisioned space, unless there is temporary space for the snapshot. In that case, the temporary space is also reflected in this total.

Used space

The amount of space used by actual data in the volume or snapshot. Used space only decreases when you delete volumes, snapshots, or temporary space from the SAN. The total of cluster used space may decrease if volumes are deleted or moved to a different cluster. Deleting files or data from client applications does not decrease the used space. For more information, see Measuring disk capacity and volume size on page 233.

Utilization

The percentage of the Max Provisioned Space that has been written to. This value is calculated by dividing the Used Space by the Max Provisioned Space.
Best practices when changing network characteristics

• Plan to make network changes during off-peak hours to minimize the impact of those changes.
• Make changes on one storage node at a time.
• Some network changes cause the storage server to restart the SAN iQ services, making the storage node unavailable for a short time. Check the Availability tab for each storage node to see if any volumes will become unavailable if the services restart on the storage node.
• Volumes and snapshots may become temporarily unavailable while services restart. Examples include unreplicated volumes or snapshots that are causing a restripe of the data.
• After the changes are in place, verify the iSCSI sessions. You may need to update the sessions.

Getting there

1. In the navigation window, select a storage node.
2. Open the tree under the storage node and select TCP/IP Network.

The TCP/IP tab

The TCP/IP tab lists the network interfaces on the storage node. On the TCP/IP tab you bond interfaces, disable an interface, configure an IP address, and ping servers from the storage node.

Identifying the network interfaces

A storage node comes with two Ethernet interfaces. To use either interface, you must connect an Ethernet cable to either port and configure the interface in the Configuration Interface or the CMC. These ports are named and labeled on the back of the storage node. Table 13 lists the methods to identify the NICs. You can work with the NIC
Figure 172 Configuring iSCSI (shown in the MS iSCSI initiator) for a single host with CHAP

Figure 173 illustrates the configuration for a single host authentication with 2-way CHAP required.
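For 2-way (mutual) CHAP, the initiator needs its own secret in addition to the target secret entered in the initiator dialog. On a Windows host this can also be set from the command line; the secret below is a placeholder, and the command assumes the Microsoft iSCSI initiator is installed:

    rem Set the initiator CHAP secret used for mutual (2-way) CHAP.
    iscsicli CHAPSecret examplesecret123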
4. Click Communication Tasks and select Update Communications List. The list is updated with the current storage node in the management group and a list of IPs with every manager's enabled network interfaces. A window opens that displays the manager IP addresses in the management group and a reminder to reconfigure the application servers that are affected by the update.

5 Setting the date and time

The storage nodes within management groups use the date and time settings to create a time stamp when data is stored. You set the time zone and the date and time in the management group, and the storage nodes inherit those management group settings.

Using network time protocol

Configure the storage node to use a time service, either external or internal to your network.

Setting the time zone

Set the time zone for the storage node. The time zone controls the time stamp on volumes and snapshots. If you use NTP, decide what time zone you will use. You can either use GMT on all of your management groups, or you can set each management group to its local time. If you do not set the time zone for each management group, the management group uses the GMT time zone, whether or not you use NTP.

Setting date and time

Set the date and time on the management group(s) if not using an NTP time service.

Management group time

When you create a management group, you set the time zone and the date and time.
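If you plan to use an NTP server, it helps to confirm the server is reachable from your network before assigning it to the management group. A sketch from a Windows host, where time.example.com is a placeholder for your own NTP server:

    rem Query the NTP server a few times and print the measured offsets.
    w32tm /stripchart /computer:time.example.com /samples:3 /dataonly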
Repair Storage Node feature when replacing disks. Such circumstances include the following:

• RAID is off on a storage node with RAID0.
• When replacing multiple disks on a storage node with RAID5/50 or RAID6.
• When multiple disks on the same mirror set need to be replaced on a storage node with RAID10.

See the Replacing disks appendix on page 327 for further information.

Overview of replacing a disk

The correct procedure for replacing a disk in a storage node depends upon a number of factors, including the RAID configuration, the replication level of volumes and snapshots, and the number of disks you are replacing. Unless you are replacing a disk in a storage node that is not in a cluster, data must be rebuilt, either just on the replaced disk or, in the case of RAID0, on the entire storage node.

Replacing a disk in a storage node includes the following steps:

• Planning for rebuilding data on either the disk or the entire storage node (all platforms)
• Powering the disk off in the CMC (non-hot-swap platforms)
• Physically replacing the disk in the storage node (all platforms)
• Powering the disk on in the CMC (non-hot-swap platforms)
• Rebuilding RAID on the disk or on the storage node (all platforms)

See Using Repair Storage Node on page 80 for situations in which you can use this feature to save unnecessary restripes of your data.

Replacing disks in hot-swap platforms

In hot-swap platforms running RAID1/10, 5/50, or 6, you
storage node configuration on page 45
• Rebooting the storage node on page 48
• Powering off the storage node on page 48

Working with the storage node

After finding all the storage nodes on the network, you configure each storage node individually.

1. Select the storage node in the navigation window. Usually you will be logged in automatically. However, you will have to log in manually for any storage nodes running a software version earlier than release 7.0.
2. If you do need to manually log in, the Log In window opens. Type a user name and password.
3. Click Log In.

Logging in to and out of storage nodes

You must log in to a management group to perform any tasks in that group. Logging in to the management group automatically logs you in to the storage nodes in that group. You can log out of individual storage nodes in the management group and log back in to them individually.

Automatic log in

Once you have logged in to a management group, additional log ins are automatic if the same user names and passwords are assigned. If management groups have different user names or passwords, then the automatic log in fails. In that case, you must log in manually:

1. Type the correct user name and password.
2. Click Log In.

Logging out of a storage node

1. Select a storage node in the navigation window.
2. Right-click and select Log Out.

NOTE: If you are logged in to multiple storage nodes, you must log out of each storage
management group 78
adding snapshots 245
adding volumes 238
application-managed snapshots 248
changing SmartClone volumes 283
changing volumes 240
editing snapshots 255
Failover Manager 89
network interface bonding 93
rolling back volumes 258
scheduling snapshots 254
snapshot schedules 254
system for Failover Manager on VMware Server or Player 189
using more than one Failover Manager 190
virtual manager 201
resetting
    DSM in Configuration Interface 345
    NSM to factory defaults 345
resolving host names 44
Restarting monitoring 313
restoring
    management group 84
    storage node configuration files 46
    volumes 257
Restoring defaults for the Performance Monitor 313
restriping volume 84
resuming scheduled snapshots 256
resyncing RAID 214
return the repaired storage node to the cluster 331
Rolling Back a Volume from application-managed snapshots 259
rolling back a volume 257
routing
    adding network 114
    deleting 115
    editing network 115
routing tables, managing 114

S

safe to remove status 81
Sample interval, changing for the Performance Monitor 309
SAN
    capacity of 221
    comparing the load of two clusters 301, 308
    comparing the load of two volumes 302
    current activity performance example 298
    determining if NIC bonding would improve performance 301
    fault isolation example 299
    learning about applications on the SAN 299
    learning about SAN performance 298
    monitoring performance 297
A virtual manager can also be used during maintenance to prevent loss of quorum. Adding a virtual manager to a management group enables you to start the virtual manager when you need to take a storage node offline for maintenance.

Benefits of a virtual manager

Running a virtual manager supports disaster-tolerant configurations that provide full site failover. The virtual manager ensures that, in the event of either a failure of a storage node running a manager or a communication breakdown between managers (as described in the two-site scenario), quorum can be recovered, and hence data remains accessible.

Requirements for using a virtual manager

It is critical to use a virtual manager correctly. A virtual manager is added to the management group, but not started on a storage node until the management group experiences a failure and a loss of quorum. To regain quorum, you start the virtual manager on a storage node that is operating and in the site that is operational or primary.

Table 43 Requirements for using a virtual manager

Requirement: Use a virtual manager with an even number of regular managers running on storage nodes.
What it means:

Disaster recovery scenario                Regular managers running    Total managers, including the virtual manager
2 separate sites with shared data         4                           5
2 storage nodes in a management group     2                           3

Requirement: Add a virtual manager when creating the management group.
What it means: You cannot add a virtual manager after quorum has been lost.
and requirements.

Name: The name of the snapshot created by the schedule that is displayed in the CMC. A scheduled snapshot name must be from 1 to 127 characters and is case sensitive. Snapshots created by a schedule have a default naming convention, enabled when the CMC is installed. You can change or disable this naming convention; see Setting naming conventions on page 34 for information. The name you enter in the Create Schedule to Snapshot a Volume window is used with sequential numbering. For example, if the name is Backup, the list of snapshots created by this schedule will be named Backup 1, Backup 2, Backup 3.

Description: Optional. Must be from 0 to 127 characters.

Start at: The date and time can occur in the past.

Retention: The retention criteria can be for a specified number of snapshots or for a designated period of time.

Currently, you cannot create scheduled application-managed snapshots from SAN iQ. This function will be available in a future release.

1. In the navigation window, select the volume for which you want to create a schedule for snapshots. The Volume tab window opens.
2. Click Volume Tasks on the Details tab and select New Schedule to Snapshot a Volume.
3. Type a name for the snapshots.
4. (Optional) Enter a snapshot description.
5. Click Edit to specify a start date and time. The Date and Time Configuration window opens. Use this window to set the date and time.
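Scheduled snapshots cover recurring protection; one-off snapshots can also be taken from a script, for example just before the start date of a new schedule. The following sketch assumes the SAN iQ command-line interface (CLIQ) is installed; the names, address, and credentials are placeholders, and you should verify the syntax against the CLIQ documentation for your release:

    rem Take a single manual snapshot of a volume (sketch).
    cliq createSnapshot volumeName=Backup snapshotName=Backup_manual description=pre-upgrade login=10.0.60.10 userName=admin passWord=secret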
Figure 138 Viewing volumes that depend on a clone point

Deleting multiple SmartClone volumes

Delete multiple SmartClone volumes in a single operation from the Volumes and Snapshots node of the cluster. First, you must stop any application servers that are using the volumes and log off any iSCSI sessions.

1. Select the Volumes and Snapshots node to display the list of SmartClone volumes in the cluster.
Selecting the interface used by the SAN iQ software .......... 116
Updating the list of manager IP addresses .......... 117
Requirements .......... 117

5 Setting the date and time .......... 119
Management group time .......... 119
Getting there .......... 119
Refreshing the management group time .......... 119
Using NTP .......... 120
Editing NTP servers .......... 120
Deleting an NTP server .......... 121
Delete an NTP server .......... 121
Changing the order of NTP servers .......... 121
Editing the date and time .......... 121
Editing the time zone only .......... 122

6 Administrative users and groups .......... 123
Getting there
154 Performance Monitor graph .......... 305
155 Performance Monitor table .......... 306
156 Performance statistics and where they are measured .......... 307
157 Add Statistics window .......... 311
158 Edit Line window .......... 314
159 Verifying the start of the 30-day evaluation period .......... 317
160 Icons indicating license status for advanced features .......... 318
161 Storage node with a license key .......... 321
162 Registering advanced features for a management group .......... 322
163 Selecting the feature key .......... 322
164 Entering a license key .......... 323
165 Viewing license keys .......... 323
166 Warning if volumes are not replicated .......... 329
167 Checking RAID rebuild status .......... 331
168 Finding complian
157. antoaienew i avieaaa demoanneuannenes 285 1a SGT SE PDD essa E 287 Scripting doc menfaton sessen et ne ne oon an an Minne een non cen mmm meee er re oD 287 17 Controlling server access to volumes iscscasccssssassersessrererererereeerseereees 289 Former terminology release 7 0 and earlier cccecccecceeeesneeeeeeeesneeeceeeeneeeeeeceeeeeeeeseeeeneeeeeeees 289 Adding server connections to management GrOUPS ccceeeeseeeeeeseeeeeteeeeeneeeeseeeeeeeneeeesniaeeseneeeees 290 Prerequisites crciataacessacenae santa casgatcenteteneie cahvaaubsaaes seunetigeerseaertaehar ees noese name ed mean S 290 Editing server connections ukers Visca aa AKEE EE REEE E AA E A soning tens 291 Deleting server connections sccannanasntenranatanenedocneotaearantenoraanieursioaauasdacereuimmmeermaoumaaegeaede 292 Assigning server connections access to VOLUMES cceessseeceeeeeeeeeeeeeeeeeneeeeeeesnnaeeeeeecsuaeeeeeseeaaes 292 Assigning server connections from a volume ccsescceeeeeeeeeeeeeeeesceeeensceeeesaeeeeceaeeeseeseeseaees 293 Assigning volumes from a server Connection 45 crciscanasiaiinansisncoban sacknunarnnnesavodniansinlbnianinssannlannes 293 Editing server connection and volume assignments ccccssseeeeeeeeeeeecceeeeecsneeeeeeeseeeeeeeeeees 294 Editing server connection assignments from a volume ecccceeceeeeesseeeenteeeeetteeeetteeeess 294 Editing server assignments from a server CONNECTION cccceecceeees
158. ap Tasks and select Edit SNMP Traps The Edit SNMP Traps window opens 5 Select one of the Trap Recipients and click Edit Change the IP address or hostname and click OK 7 Click OK when you are finished editing trap recipients Removing trap recipients 1 Log in to the storage node and expand the tree 133 6 Select the SNMP category from the tree Select the SNMP Traps tab The SNMP Traps Settings window opens Click SNMP Trap Tasks and select Edit SNMP Traps The Edit SNMP Traps window opens Select one of the Trap Recipients and click Remove The host is removed from the list Click OK on the SNMP Traps tab when you are finished removing trap recipients Disabling SNMP traps To disable SNMP traps you must delete all of the settings in the SNMP Traps window 1 2 3 134 Remove the Trap Recipient hosts Delete the Trap Community String Click OK Using SNMP 8 Reporting Reporting capabilities of the HP LeftHand Storage Solution are divided into two categories e Active Monitoring Use the Alerts category to configure how you receive alerts for selected vari ables The Alerts category is where you set up email alerting and where you can review the log of alerts generated automatically by the operating system including any generated while the CMC was closed See Active monitoring overview on page 135 Hardware Reporting Use the hardware category to select monitoring perform hardware dia g
159. apacity normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDE SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDJ SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDJ SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDU SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDX SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDU SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHD SATA 3 0GB 465 25 GB normal Yes HDS725050KL KRVN67ZBHDX SATA 3 0GB 465 25 GB Disk Setup Tasks Figure 54 Viewing the Disk Setup tab in a NSM 4150 78 Storage Figure 56 Drive bays with bezel off in an NSM 4150 Viewing disk status for the HP LeftHand P4500 The disks are labeled 1 through 12 in the Disk Setup window Figure 57 and correspond with the disk drives from left to right 1 4 7 10 on the top row and 2 5 8 11 on the second row and so on as shown in Figure 58 when you are looking at the front of the HP LeftHand P4500 For the HP LeftHand P4500 the columns
160. appears multiple times Note that it is exactly the same in each spot Figure 125 Clone point appears under each SmartClone volume EY NOTE Remember A clone point only takes up space on the SAN once Shared snapshot Shared snapshots occur when a clone point is created from a newer snapshot that has older snapshots below it in the tree They are designated in the navigation window with the icon shown above 274 SmartClone volumes EHP LeftHand Networks Centralized Management Console Loe mE Getting Started g Gettin Detais iSCSI Sessions Remote Snapshots Assigned Servers Snapshots Schedules Map view E Configuration Summary EH ss Available Nodes 1 Volume EY FailoverManager lime Bo BS TrainingOS BHE Servers 1 Description Sa Administration pee i f L Stes juster rogramming FZ Programming Status Normal Performance Monitor f Storage Nodes 3 Type Primary Created by Manual Hae omi papshots 3 Size 56B Created 08 05 2008 12 17 56 PM MDT ELS c _scen Replication Level Replication Priority Availabilty es C _snap2 Provisioned Space Provisioning Thin I C _snapt G c ciass_1 3 Utilization ES c _scsnap Target Information iSCSI Name ign 2003 10 com Jefthandnetworks trainingos 12 c GD c class_5 3 aS SysAdm Volume Tasks 57 Alerts Remaining Date Time Hostname IP Address Alert Message 57 08 05 20 Gol
161. aracteristics and click OK when you are finished EY NOTE The system automatically factors replication levels into the settings For example if you create a 500 GB volume and the replication level is 2 the system automatically allocates 1 000 GB for the volume Editing a volume When editing a primary volume you can change the description size and advanced characteristics such as the cluster replication level replication priority type and provisioning EY NOTE Moving the volume to a different cluster requires restriping the data in both clusters Restriping can take hours or even days Table 53 Requirements for changing volume characteristics Item Requirements for Changing Description Must be from 0 to 127 characters 240 Using volumes ltem Requirements for Changing Server Server must have already been created in the management group The target cluster must e Reside in the same management group e Have sufficient storage nodes and unallocated space for the size and replication level of the volume being moved Cluster e Use a Virtual IP if the originating cluster has a Virtual IP The volume resides on both clusters until all of the data is moved to the new cluster This causes a restripe of the data on both clusters For example you restructure your storage and create an additional cluster You want to migrate an existing volume to the new cluster as part of the restructuring The clust
162. are performed while the volumes are on line and available to users Increasing the volume size in other environments Some environments use alternative tools such as Dell Array Manager and VERITAS Volume Manager Both of these disk management tools use a utility called Extpart exe instead of Diskpart exe Extpart exe commands are similar to those of Diskpart exe The only major difference is that instead of selecting the volume number as in Diskpart exe you select the drive letter instead Extpart exe and corresponding documentation can be downloaded from www dell com Changing configuration characteristics to manage space Options for managing space on the cluster include Changing snapshot retention retaining fewer snapshots requires less space Changing schedules to snapshot a volume taking snapshots less frequently requires less space Deleting volumes or moving them to a different cluster e Deleting snapshot temporary space EY NOTE Deleting files on a file system does not free up space on the SAN volume For more information see Block systems and file systems on page 233 For file level capacity management use application or file system level tools Snapshot temporary space When you mount a snapshot additional space can be created in the cluster for use by applications and operating systems that need to write to the snapshot when they access it This additional space is called the temporary space For example
163. arted Launch Pad on page 37 e By right clicking an available storage node in the navigation window From the menu bar with Tasks gt Management Group gt New Management Group Creating a new management group 1 Select Getting Started in the Navigation window to access the Getting Started Launch Pad 2 Select the Management Groups Clusters and Volumes wizard 3 Optional Click the link to review the information you will need to have ready to create the management group and cluster 178 Working with management groups 4 Click Next to start creating the management group Create management group and add storage nodes 1 Type a name for the new management group This name cannot be changed later without destroying the management group Select the storage node s to add to the management group Use Ctrl Click to select more than one Add administrative user 1 2 3 Click Next to add an administrative user Enter the administrative user s name a description and a password The first administrator is always at full administrator level Click Next to set the time for the management group Set management group time 1 2 Select the method by which to set the management group time Recommended To use an NTP server know the URL of the server or its IP address before you begin Note if using a URL DNS must be configured on the storage nodes in the group To set the time manually selec
164. ased on the sample interval duration and selected statistics When the export information is set the way you want it click OK to start the export The export progress displays in the Performance Monitor window based on the duration and elapsed time To pause the export click uu _ then click P to resume the export To stop the export click El Data already exported is saved in the CSV file Saving the graph to an image file You can save the graph and the currently visible portion of the statistics table to an image file This may be helpful if you are working with technical support or internal personnel to troubleshoot an issue 1 316 From the Performance Monitor window make sure the graph and table display the data you want Right click anywhere in the Performance Monitor window and select Save Image The Save window opens Navigate to where you want to save the file Change the file name if needed The file name defaults to include the name of the object being monitored and the date and time Change the file type if needed You can save as a png or jpg Click Save Monitoring performance 19 Registering advanced features Advanced features expand the capabilities of the SAN iQ software All advanced features are available when you begin using the SAN iQ software If you begin using a feature without first registering a 30 day evaluation period begins Throughout the evaluation period you receive reminde
165. at displays more items Double clicking Double click an item in the navigation window to open the hierarchy under that item Double click again to close it Right clicking Right click an item in the navigation window to view a menu of commands for that item Getting started launch pad The first item in the navigation window is always the Getting Started Launch Pad Select the Launch Pad to access any of the three wizards to begin your work Available nodes The second item in the navigation window is Available Nodes Available Nodes includes the storage nodes and Failover Managers that are not in management groups These storage nodes are available to be added to management groups Other information in the navigation window depicts the storage architecture you create on your system An example setup is shown in Viewing the three parts of the CMC on page 30 CMC storage hierarchy The items in the navigation window follow a specific hierarchy 32 Management Groups Management groups are collections of storage nodes within which one or more storage nodes are designated as managers Management groups are logical containers for the clustered storage nodes volumes and snapshots Servers Servers are application servers that you set up in a management group and assign to a volume to provide access to that volume Sites Sites are used to designate different geographical or logical sites in your environment Sites are us
166. ata The steps below provide actions you can take to try to ensure your data is not lost First if the volume is in a data process wait for the process to finish Then click the Power Off Disk button below Second if there is enough space on the cluster change the replication level to 2 way by right clicking on the volumes above selecting Edit in the menu and selecting 2 way replication Wait for the restripe to finish Then click the Power Off Disk button below Third if you don t have enough space on the cluster but do have space on another cluster move the volume to the other cluster by right clicking on the volume above selecting Edit in the menu and select the other cluster from the cluster list Vait for the restripe to finish Then click the Power Off Disk button below Finally if none of the above actions are possible back up the volumes and snapshots and delete them by right clicking on each one and selecting Delete from the menu Then click the Power Off Disk button below Do you want to power off disk 57 Cancel Figure 166 Warning if volumes are not replicated e Right click on the storage node in the navigation window and select Repair Storage Node A ghost image takes the place of the storage node in the cluster with the IP address serving as a place holder The storage node itself moves from the management group to the Available Nodes pool EY NOTE If the storage node does not appear in the Avail
167. ata Transfer Preferred YesStatus Passive Preferred NoStatus ActiveData Trans Fails Over to Eth Failed Data Transfer No fer Yes EthO Restored Preferred YesStatus ActiveData Preferred NoStatus Passive ae Transfer Yes Ready Data Transfer No Example network configurations with active passive Two simple network configurations using Active Passive in high availability environments are illustrated Active Path NSM 260 NSM 260 NSM 260 Storage Cluster A Figure 65 Active passive in a two switch topology with server failover The two switch scenario in Figure 65 is a basic yet effective method for ensuring high availability If either switch failed or a cable or NIC on one of the storage nodes failed the Active Passive bond would cause the secondary connection to become active and take over Figure 66 Active passive failover in a four switch topology Figure 66 illustrates the Active Passive configuration in a four switch topology 96 Managing the network How link aggregation dynamic mode works Link Aggregation Dynamic Mode allows the storage node to use both interfaces simultaneously for data transfer Both interfaces have an active status If the interface link to one NIC goes offline the other interface continues operating Using both NICs also increases network bandwidth Requirements for link aggregation dynamic mode To configure Link Aggregation Dynamic Mode Both NICs should be enabl
…ated when the first management group is configured.

Configuration guidance
As the Configuration Summary reports the numbers of the storage items, it provides warnings about the safe limits for each category, based on performance and scalability. These warnings first alert you that a category is nearing its limits by turning the category orange. When an individual category turns orange, the Configuration Summary category in the navigation window turns orange as well. When an individual category reaches the maximum recommended configuration, it turns red. When the number in that category is reduced, the color changes immediately to reflect the new state.

For example, if you have a large number of volumes with numerous schedules that are creating and deleting snapshots, the snapshots may increase to a number that changes the summary bar from green to orange. As soon as enough snapshots from the schedules are deleted, reducing the overall total, the summary bar returns to green.

Best practices
The optimal and recommended number of storage items in a management group depends largely on the network environment, the configuration of the SAN, the applications accessing the volumes, and what you are using snapshots for. However, we can provide some broad guidelines that help you manage your SAN to obtain the best and safest performance and scalability for your circumstances. These guidelines are in line with our…
…ating application-managed, 248; creating application-managed for volume sets, 249; defined, 32; deleting, 261; deleting schedules, 256; editing, 250; editing schedules, 255; managing capacity and scheduled snapshots, 254; and thresholds, 226; mounting, 250; overview, 226, 245; pausing or resuming, 256; planning, 226, 246; planning size, 246; point-in-time consistent, 245; prerequisites for, 245; read/write, deleting temporary space, 253; requirements for editing, 255; rolling back a volume from, 257; schedule requirements, 254; scheduling, 254; shared, 274; size, 227; temporary space for read/write snapshots, 235, 253; using, 245; versus backups, 245
SNMP: access control, 130; agents, community strings, 130; agents, disabling, 132; clients, adding, 130; MIB, 325; overview, 129; removing trap recipient, 133; traps, enabling, 133; traps, disabling, 134; traps, editing recipient, 133; traps, using, 133; using MIB, 132
software: upgrading storage nodes, 49
space: allocation, 222
space requirements: planning for SmartClone volumes, 265
speed/duplex: configuring, 108; VSA, 89
spoofing, 337
starting: management group, 85; managers on storage nodes, 181; virtual manager to recover quorum, 205
starting and stopping: dedicated boot devices, 52
startup and shutdown: troubleshooting, 194
static IP addresses: and DNS, 112
statistics: adding, 310; exporting performance to a CSV file, 315; in the Performance Monitor, defined, 306; removing, 312; viewing details…
[Figure residue: Volume Use tab listing volumes and snapshots with their size, replication level, provisioning type, and space saved or reclaimable; temp space can be deleted or converted to a volume.]

Figure 109: Provisioned space shows temp space used

Node use summary
The Node Use window presents a representation of the space provisioned on the storage nodes in the cluster.

[Figure residue: Node Use tab showing, for each storage node in the cluster, the raw space, RAID configuration, usable space, provisioned space, and used space.]
…ation is completed.

Registering advanced features for a storage node
Using the Feature Registration tab, register individual storage nodes for advanced features. For more information about registering advanced features, see Chapter 19 on page 317.

Determining volume and snapshot availability
The Availability tab displays which volumes' and snapshots' availability depends on this storage node staying online. Details include the replication level and the factors that contribute to the availability status, such as the status of storage nodes participating in any replication, or a RAID restripe in progress.

[Figure residue: Availability tab stating that the listed volumes and snapshots will become unavailable if storage node Golden-1 goes offline; the table is updated every 5 seconds and shows name, status, replication level, contributing factors, and gateway connection for volA_ss_2, volA_ss1, and volA (none replicated).]

Figure 9: Availability tab

Checking status of dedicated boot devices
Some storage nodes contain either one or two dedicated boot devices. Dedicated boot devices may be compact flash cards or hard drives. If a storage node has dedicated boot devices, the Boot Devices tab appears in the Storage configuration category. Platforms that do not hav…
… .... 325

21. Replacing disks appendix .... 327
Replacing disks and rebuilding data .... 327
Before you begin .... 327
Prerequisites .... 327
Replacing disks .... 328
Verify storage node not running a manager .... 328
Stopping a manager .... 328
Repair the storage node .... 328
Prerequisite .... 328
Replace the disk .... 329
In the NSM 160 or the NSM 260 .... 329
In the DL380 or the IBM x3650 .... 330
In the DL320s (NSM 2120), Dell 2950 (NSM 2060), NSM 4150, HP LeftHand P4500, HP LeftHand P4300 .... 330
Rebuilding data .... 330
Recreate the RAID array .... 330
Checking progress for RAID array to rebuild .... 331
Return storage node to cluster .... …
…ausing, 313; planning for SAN improvements, 300; prerequisites, 297; restarting, 313; statistics defined, 306; understanding and using, 297; workload characterization example, 298
Performance Monitor graph: changing, 313; changing line color, 314; changing line style, 314; changing the scaling factor for, 315; displaying a line, 314; hiding, 314; hiding a line, 314; showing, 314
Performance Monitor window: accessing, 303; graph, 305; parts defined, 303; saving to an image file, 316; table, 306; toolbar, 304
permanent variables, 138
permissions: effect of levels, 293; administrative group, 127; full, 126; read/modify, 126; read-only, 126
planning: data replication, 223; RAID configuration, 67; SmartClone volumes, 265; snapshots, 226, 246; volumes, 222, 237; volumes, size, 221
planning capacity: full provisioning method, 222; thin provisioning method, 222
point-in-time consistent snapshots: defined, 245
pool of storage, 171
powering off: disk, using CMC, 83, 84; NSM 4150 system controller and disk enclosure, correct order, 47; storage nodes, 48
powering on: disk, using CMC, 83, 85; NSM 4150 system controller and disk enclosure, correct order, 47
preferred interface: in active-passive bond, 95; in adaptive load balancing, 99; in link aggregation dynamic mode bond, 97
preferred NTP server, 120
prerequisites, 263
prerequisites for: assigning servers to volumes, 293; servers, 290; clusters, 209; Performance Monitor, 297; removing…
…have been rebooted and is not yet online.
• Power could have failed to a network switch that the storage node is connected to.
• The CMC might be running on a system that is on a different physical network than the storage node. Poor network routing performance at the site may severely affect performance of the CMC.

Changing which storage nodes appear in the navigation window
1. Click the Find menu.
2. Select Clear All Found Items to remove all storage nodes from the navigation window.
3. Perform a Find using either method, By Subnet and Mask or By Node IP or Host Name, to find the desired set of storage nodes.

NOTE: Control which storage nodes appear in the navigation window by entering only specific IPs or host names in the IP and Host Name List window. Then, when you open the CMC, only those IPs or host names will appear in the navigation window. Use this method to control which management groups appear.

Configuring multiple storage nodes
After you have configured one storage node with settings for alerts, SNMP, monitoring, and remote log files, you can copy those settings between storage nodes. For information about configuring these settings, see the following sections:
• Enabling SNMP agents on page 129
• Using alerts for active monitoring on page 135
• Using remote log files on page 168
• Setting email notifications of alerts on page 141

CAUTION: Copying the configuration between differ…
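As a quick illustration of the By Subnet and Mask search above (the addresses here are examples only): entering

    Subnet: 10.0.60.0
    Mask:   255.255.255.0

would discover every storage node with an address from 10.0.60.1 through 10.0.60.254, while a narrower mask such as 255.255.255.252 would limit the search to a four-address block. Choose the mask that matches how your storage network is actually subnetted.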
[Figure residue: storage node Details tab showing Raw Space 4.08 TB, Usable Space 3.21 TB, RAID 50 Normal, Storage Server Normal, Manager Normal, Virtual Manager Normal, management group Exchange, and an Alerts list reporting "storage down", "manager down", and "storage not re…" events for storage node Denver-3. Callouts: 1. Unavailable manager; 2. Virtual manager started.]

Figure 98: Starting a virtual manager when the storage node running a manager becomes unavailable

NOTE: If you attempt to start a virtual manager on a storage node that appears to be normal in the CMC, and you receive a message that the storage node is unavailable, start the virtual manager on a different storage node. This situation can occur when quorum is lost, because the CMC may still display the storage node…
[Figure residue: storage node Details tab for Denver-2 in management group Exchange; model NSM 4150, software version 8.0.00.1659.0, hostname Denver-2, IP address 10.0.60.32, raw space 4.08 TB, usable space 3.21 TB, RAID 50 Normal, Status: Storage Server Overloaded, Manager Normal. Callout: 1. Status line.]

Figure 99: Checking the storage node status on the Details tab

If the status is Storage Server Overloaded, wait up to 10 minutes and check the status again. The status may return to Normal, and the storage node will be resyncing.

If the status is Storage Server Inoperable, reboot the storage node and see if it returns to Normal when it comes back up.

If these statuses recur, this may be an indication that the underlying hardware problem still exists.

Repairing a storage node
Repairing a storage node allows you to replace a failed disk in a storage node that contains volumes configured for 2-way or 3-way replication, and trigger only one resync of the data, rather than a complete restriping. Resyncing the data is a…
…ble and selected for display. The graph and table data repopulate with the latest values after the next sample interval elapses.
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Right-click anywhere in the Performance Monitor window and select Clear Samples.

Clearing the display
You can clear the display, which removes all lines from the graph and deselects the Display option for each statistic in the table. This leaves all of the statistics in the table, along with their data, which continue to update.
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Right-click anywhere in the Performance Monitor window and select Clear Display.

Resetting defaults
You can reset the statistics to the defaults, which removes all lines from the graph and sets the three default statistics (cluster total IOPS, cluster total throughput, and cluster total queue depth) to zero in the table. The default statistics are set to display, and their data update when the next sample interval elapses.
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Right-click anywhere in the Performan…
…bled at installation.

[Table fragment: default names for storage elements]
SmartClone Volumes, Snapshots, Remote Snapshots, Schedules to Snapshot a Volume, Schedules to Remote Snapshot a Volume
Default name prefixes: MG_, VOL_, _Sch_SS_, _Sch_RS_
Example: MG_LogsBackup; CL_OffSiteBkUp; VOL_DailyBkUp; VOL_VOL_DailyBkUp_SS_3_1; VOL_DailyBkUp_SS_1; VOL_DailyBkUp_RS_1; VOL_DailyBkUp_Sch_SS_1.1; VOL_DailyBkUp_Sch_RS_1_Pri.1; VOL_DailyBkUp_Sch_1_RS_Rmt.1

This example is illustrated in Figure 4.

[Figure residue: CMC navigation tree for two management groups showing the default-named clusters, volumes, schedules, and snapshots listed in the example above.]

Figure 4: Using the default naming for all the elements

If you do not use any of the default names, then the only automatically generated naming elements are those that incrementally number a series of snapshots or SmartClone volu…
…can remove a faulty or failed disk and replace it with a new one. RAID will rebuild, and the drive will return to Normal status.

CAUTION: Before replacing a drive in a hot-swap platform, always check the Safe to Remove status to verify that the drive can be removed without causing RAID to go Off.

When RAID is Normal in RAID1/10, RAID5, RAID50, or RAID6, all drives indicate they are safe to remove. However, you should only hot-swap one drive at a time. If it is necessary to replace more than one drive, always check the Safe to Remove status again. You must wait up to two minutes for the status to fully update before checking it again. If the status indicates the second drive is safe to remove, then you may replace it.

For example, if an array is rebuilding, no other drives in the array, except for hot-spare drives, will be safe to remove. However, if the configuration includes two or more arrays and those arrays are Normal, the Safe to Remove status will indicate that drives in those other arrays may be replaced.

Note that the Safe to Remove status will always be No when in a RAID0 configuration, until the drive is powered off. Also note that Hot Spare, Inactive, and Uninitialized drives are always safe to remove.

Replacing disks in non-hot-swap platforms (IBM x3650)
In non-hot-swap platforms running RAID1/10 or 5, you must power off the faulty or failed disk in the CMC before you physically replace the disk in the chassis. After physically re…
… .... 344
Setting the TCP speed, duplex, and frame size .... 344
Removing a storage node from a management group .... 345
Resetting the storage node to factory defaults .... 345
… .... 347

Figures
1. Viewing the three parts of the CMC .... 30
2. Viewing the menu bar in the navigation window .... 31
3. Default naming conventions for snapshots and SmartClone volumes .... 34
4. Using the default naming for all the elements .... 36
5. The SAN iQ software storage hierarchy .... 38
6. Storage node configuration categories .... 43
7. Disk enclosure not found, as shown in Details tab .... 48
8. Confirming storage node power off .... 49
9. Availability tab .... …
(entries for figures 10 through 32 continue)
…capacity.

When you have schedules to snapshot a volume, the recurrence (or frequency) and the retention policy for the schedules affect the amount of space used in the cluster. For example, it is possible for a new snapshot and one snapshot scheduled for deletion to coexist in the cluster for some period of time. If there is not sufficient room in the cluster for both snapshots, the scheduled snapshot will not be created, and the schedule will not continue until an existing snapshot is deleted. Therefore, if you want to retain n snapshots, the cluster should have space for n+1.

Deleting snapshots
Another factor to note in planning capacity is that when a snapshot is deleted, that snapshot's data is added to the snapshot or volume directly above it (the next newer snapshot). The amount of space allocated for the volume or snapshot directly above the deleted snapshot increases. The effect of this data migration can be seen in the Max Provisioned Space and Used Space columns on the Volume Usage tab of the cluster.

See Ongoing capacity management on page 227 for detailed information about reviewing capacity.

Ongoing capacity management
One of the critical functions of managing a SAN is monitoring usage and capacity. The CMC provides detailed information about overall cluster capacity and usage, as well as detail about provisioning and storage node capacity.

Number of volumes and snapshots
For information about the recommended ma…
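As a simple illustration of the n+1 rule above (the numbers are hypothetical): a schedule that snapshots a volume nightly and retains 7 snapshots must briefly hold 8, because the new nightly snapshot is created before the oldest one is deleted. If each snapshot holds roughly 10 GB of changed data, plan for about 8 x 10 GB = 80 GB of snapshot space in the cluster (before replication), not 70 GB, or the schedule will stall until space is freed.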
[Figure residue: clone point Details tab showing Replication Level None, Replication Priority Availability, Provisioned Space 2.8 GB, Provisioning Thin, and the iSCSI target name for the clone point. Callout: 1. Utilization graph for clone point.]

Figure 136: Clone point Details tab showing utilization graph

Figure 137 shows the utilization of the SmartClone volume created from the clone point.

[Figure residue: SmartClone volume Details tab for volume C_class_1 in cluster Programming; Status Normal, Type Primary, Created by Manual, Size 5 GB, Replication Level None, Provisioned Space 512 MB, Provisioning Thin, utilization at 0%. Callout: 1. Utilization graph at 0.]

Figure 137: SmartClone volume Details tab showing utilization graph

Editing SmartClone volumes
Use the Edit Volume window to c…
…ce Monitor window and select Reset to Defaults.

Pausing and restarting monitoring
If you are currently monitoring one or more statistics, you can pause the monitoring and restart it. For example, you may want to pause monitoring during planned maintenance windows or production downtime.
• From the Performance Monitor window, click the pause button to pause the monitoring. All data remain as they were when you paused.
• To restart the monitoring, click the play button. Data update when the next sample interval elapses. The graph will have a gap in the time.

Changing the graph
You can change the graph and its lines in the following ways:
• Hiding and showing the graph on page 314
• Displaying or hiding a line on page 314
• Changing the color or style of a line on page 314
• Highlighting a line on page 315
• Changing the scaling factor on page 315

Hiding and showing the graph
By default, the performance monitor graph displays in the Performance Monitor window. If you want more space to display the performance monitor table, you can hide the graph.
• From the Performance Monitor window, click the hide button to hide the graph.
• To redisplay the graph, click the show button.

Displaying or hiding a line
When you add statistics to monitor, by default they are set to display in the graph. You can control which statistics display in the graph, as needed.
1. From the Performance Monitor window, deselect the Display check box for the…
…space remaining in the cluster that has not been allocated for storage. This value decreases as volumes and snapshots are created, or as thinly provisioned volumes grow.

Provisioned space
• Volumes: Amount of space allocated for volumes. For fully provisioned volumes, this is the volume size x the replication level. For thinly provisioned volumes, the amount of space allocated is determined by the system.
• Snapshots: Amount of space allocated for snapshots, and temporary space if required. This value is zero until at least one snapshot has been created. If all snapshots are deleted, this value returns to zero.
• Total: Sum of the space allocated for volumes, snapshots, and temporary space.

Used space
• Volumes: Actual amount of space used by volumes.
• Snapshots: Actual amount of space used by snapshots, including temporary space.
• Total: Total of space used by volumes and snapshots. For more information, see Measuring disk capacity and volume size on page 233.

Saved space
• Thin provisioning: The space saved by thin provisioning volumes. This space is calculated by the system.
• SmartClone feature: Space saved by using SmartClone volumes, calculated using the amount of data in the clone point. Only as data is added to an individual SmartClone volume does it consume space on the SAN.
• Total: Approximate total amount of space saved by using thin provisioning and SmartClone volumes.

Graph information
Pr…
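A quick worked example of the provisioned-space calculation above (sizes are hypothetical): a fully provisioned 500 GB volume with 2-way replication consumes 500 GB x 2 = 1,000 GB (about 1 TB) of provisioned space in the cluster; at 3-way replication, the same volume would consume 1.5 TB. A thinly provisioned volume of the same size might initially show only a few GB of provisioned space, growing as data is written.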
… .... 263
What are SmartClone volumes .... 263
Prerequisites .... 263
… .... 264
Example scenarios for using SmartClone volumes .... 264
Deploy multiple virtual or boot-from-SAN servers .... 264
Safely use production data for test, development, and data mining .... 265
Clone a volume .... 265
Planning SmartClone volumes .... 265
Space requirements .... 265
Naming convention for SmartClone volumes .... 266
Naming and multiple identical disks in a server .... 266
Server access .... 267
Defining SmartClone volume characteristics .... 267
Naming SmartClone volumes .... 267
Shared versus individual characteristics…
… .... 294
Completing the iSCSI Initiator and disk setup .... 294
Persistent targets or favorite targets .... 295
HP LeftHand DSM for MPIO settings .... 295
Disk management .... 295

18. Monitoring performance .... 297
Prerequisites .... 297
Introduction to using performance information .... 297
What can I learn about my SAN .... 298
Current SAN activities example .... 298
Workload characterization example .... 298
Fault isolation example .... 299
What can I learn about my volumes .... 299
Most active volumes examples .... 299
Activity generated by a specific server example…
…ch that file previously occupied are now freed. Subsequently, when you query the file system about how much free space is available, the space occupied by the deleted files appears as part of the free space, since the file system knows it can overwrite that space.

However, the file system does not inform the block device underneath (the SAN iQ volume) that there is freed-up space. In fact, no mechanism exists to transmit that information. There is no SCSI command which says "Block 198646 can be safely forgotten." At the block device level, there are only reads and writes. So, to ensure that our iSCSI block devices work correctly with file systems, any time a block is written to, that block is forever marked as allocated. The file system reviews its available-blocks list and reuses blocks that have been freed. Consequently, the file system view (such as Windows Disk Management) may show you have X amount of free space, while the CMC view may show the Used Space as 100% used.

CAUTION: Some file systems support defragmenting, which essentially re-orders the data on the block device. This can result in the SAN allocating new storage to the volume unnecessarily. Therefore, do not defragment a file system on the SAN unless the file system requires it.

Changing the volume size on the server

CAUTION: Decreasing the volume size is not recommended. If you shrink the volume in the CMC before shrinking it from the server file sy…
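A hypothetical sequence illustrates why the file system view and the CMC view diverge, as described above:

1. A server writes a 40 GB file to a 100 GB thinly provisioned volume: both Windows Disk Management and the CMC report roughly 40 GB used.
2. The server deletes the file: Disk Management now reports about 100 GB free, but the CMC still reports roughly 40 GB of Used Space, because the written blocks remain allocated on the SAN.
3. The server writes a new 40 GB file: if the file system reuses the freed blocks, CMC Used Space stays near 40 GB; if it writes to previously untouched blocks, Used Space grows toward 80 GB.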
…click Install on the opening window. The install software window opens. Click Failover Manager. Continue through the installation wizard, following the instructions on each window. After the installation wizard finishes, the default choice is to launch the Failover Manager.
4. Click Finish to exit the wizard and start the Failover Manager.

Using the HP LeftHand Networks web site download
1. Click Download on the web site, and the installation wizard opens.
2. Continue through the installation wizard, following the instructions on each window. After the installation wizard finishes, the default choice is to launch the Failover Manager.
3. Click Finish to exit the wizard and start the Failover Manager.

To configure the Failover Manager
The system pauses while the Failover Manager is installed and registered, and then the VMware Console opens, as shown in Figure 89.

[Figure residue: VMware Server Console showing the FailoverMgr virtual machine, powered off; Guest OS: Other Linux 2.6.x kernel; configuration file C:\Program Files\LeftHand Networks\SANiQ Failover Manager\FailoverMgr.vmx; 256 MB memory, three SCSI hard disks, two bridged Ethernet adapters, USB controller present.]
…click Install on the opening window. The install software window opens. Click Failover Manager for ESX.
3. Continue through the installation wizard, following the instructions on each window.

Using the HP LeftHand Networks web site download
1. Click Download on the web site, and the installation wizard opens.
2. Continue through the installation wizard, following the instructions on each window. After the installation wizard finishes, the default choice is to launch the Failover Manager. Click Finish.

Installing the Failover Manager files on the ESX Server
Use one of the following methods, depending on your software.

For ESX 3.5 or ESXi:
1. Connect to the ESXi host via VC or VI Client.
2. Click on the ESXi host and go to the Configuration tab.
3. Select Storage.
4. Find the local VMFS datastore where the Failover Manager will be hosted.
5. Right-click and select Browse Datastore.
6. Create a new directory and click on the upload files icon.
7. Upload the unzipped folder for the Failover Manager.

For ESX Server 3.0 to 3.0.2:
1. Make a directory for the Failover Manager at /vmfs/volumes/<your_datastore_name>.
2. Copy the Failover Manager files into the directory you just created on the ESX Server, using scp (Linux) or pscp (Windows), as shown in the example below:

    scp * <user>@<IP address of the ESX Server>:/vmfs/volumes/<datastore>

3. Open execute permissions on the .vmx file, using the command:

    chmod 755 FOM.vmx
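On ESX Server 3.x, a virtual machine copied into a datastore also has to be registered with the host before it can be powered on. A minimal sketch follows, assuming the Failover Manager files were copied into a directory named FOM; the exact invocation should be verified against the documentation for your ESX version.

    # Register the Failover Manager VM from the ESX 3.x service console
    # (<datastore> is a placeholder for your datastore name)
    vmware-cmd -s register /vmfs/volumes/<datastore>/FOM/FOM.vmx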
…clone point may appear many times, it only exists as a single snapshot in the SAN. Therefore, it only uses the space of that single snapshot. The display in the navigation window depicts this by the multiple highlights of the clone point underneath each SmartClone volume that was created from it.

[Figure residue: navigation tree for cluster Programming in management group TrainingOS, showing the clone point C_SCsnap highlighted under each of the SmartClone volumes C_class_1 through C_class_7, with the clone point's Details tab: Status Normal, Type Primary/Clone Point, Created by Manual, Size 5 GB, Created 08/05/2008, Replication Level None, Provisioned Space 896 MB, Provisioning Thin. Callout: 1. Clone point.]
… .... 112
DNS and DHCP .... 112
DNS and static IP addresses .... 112
Getting here .... 112
Adding the DNS domain name .... 112
Adding the DNS server .... 112
Adding domain names to the DNS suffixes .... 113
Editing a DNS server .... 113
Editing a domain name in the DNS suffixes list .... 113
Removing a DNS server .... 114
Removing a domain suffix from the DNS suffixes list .... 114
Setting up routing .... 114
Adding routing information .... 114
Editing routing information .... 115
Deleting routing information .... 115
Configuring storage node communication…
…ctive monitoring, 135
Active-Passive bond, 94: active interface, 95; during failover, 95; example configurations, 96; requirements, 95
Adaptive Load Balancing bond: active interface, 99; during failover, 99; example configurations, 100; preferred interface, 99; requirements, 98
add-on applications: evaluating, 317; overview, 317; registering for, 320
adding: servers to management groups, 290; statistics, 310; a remote log, 168; administrative groups, 125; administrative users, 123; clusters, 209; storage to clusters, 212; DNS domain name, 112; DNS servers, 112; domain names to DNS suffixes, 113; iSNS server, 210; management groups, 77; requirements for management groups, 78; managers to management group, 181; monitored variables, 136; routes, 114; snapshot schedules, 255; snapshots, 248; requirements for snapshots, 246; SNMP clients, 130; storage for the first time, 37; storage nodes to existing cluster, 211; storage nodes to management group, 180; users to a group, 127; virtual manager, 204; volumes, 239; requirements for volumes, 238
administrative groups, 125: adding, 125; adding users, 127; changing, 126; deleting, 127; permission levels, 126; permissions descriptions, 126, 127; removing users, 127
administrative security, 171
administrative users, 123: adding, 123; deleting, 124
agents, disabling SNMP, 132
alert notification: via email, 141; via SNMP, 141; via the CMC, 141
alerts: active monitoring, 135; editing variables in alerts, 137; selecting alerts to monitor, 136; tab…
[Warning dialog:] The Enable Load Balancing setting has been changed. For the change to take effect, you must first disconnect the volumes from the application servers. You may need to first stop the applications and then disconnect them. Then reconnect the servers, restart the applications, and log back on to the volumes.

Figure 141: Warning after changing load balancing check box

Deleting server connections
Deleting a server connection stops access to volumes by servers using that server connection. Access to the same volume by other servers continues.
1. In the navigation window, select the server connection you want to delete.
2. Click the Details tab.
3. Click Server Tasks and select Delete Server.
4. Click OK to delete the server.

Assigning server connections access to volumes
After you add a server connection to your management group, you can assign one or more volumes or snapshots to the server connection, giving the server access to those volumes or snapshots.

CAUTION: Without the use of shared storage access (host clustering or clustered file system technology), allowing more than one iSCSI application server to connect to a volume at the same time, without cluster-aware applications and/or file systems in read/write mode, could result in data corruption.

You can make the assignments two ways:
• Assigning server connections from a volume on pa…
[Figure residue: alert log entries for management group TrainingOS reporting "Restripe Complete" for snapshot C_SCsnap and volume C_class_1. Callouts: 1. Original volume; 2. Clone point; 3. Shared snapshots.]

Figure 126: Navigation window with shared snapshots

In Figure 126, the original volume is C. Three snapshots were created from C:
• C_snap1
• C_snap2
• C_SCsnap

Then a SmartClone volume was created from the latest snapshot, C_SCsnap. That volume has a base name of C_class. The older two snapshots, C_snap1 and C_snap2, become shared snapshots, because the SmartClone volume depends on the shared data in both those snapshots.

Table 62: How shared snapshots work
First: a volume, C.
Next: 3 snapshots: C_snap1, C_snap2, C_SCsnap.
Finally: SmartClone volumes created from the latest snapshot: C_class_x.
• The latest snapshot becomes the clone point.
• The older two snapshots become shared between the clone point and the SmartClone volume.

The shared snapshots also display under all the volumes which share them. In Figure 126 on page 275, they are displayed under the original volume from which they were created, and under the single…
… .... 206
99. Checking the storage node status on the Details tab .... 215
100. Exchanging ghost storage node .... 218
101. Replacing the repaired storage node .... 218
102. Repaired storage node returns to proper place in cluster .... 219
103. Write patterns in 2-way replication .... 224
104. Write patterns in 3-way replication .... 224
105. Write patterns in 4-way replication .... 224
106. Cluster tab view .... 228
107. Reviewing the Use Summary tab .... 228
108. Viewing the space saved or reclaimable in the Volume Use tab .... 231
109. Provisioned space shows temp space used .... 232
110. Viewing the Node Use tab .... 232
111. Stranded storage in the cluster .... 233
112. Viewing multiple volumes and snapshots .... 244
113. Deleting multiple volumes in…
…d0. The bonded interface displays only if the storage node is configured for bonding.
• IBM x3650, NSM 160, Dell 2950 (NSM 2060), NSM 4150, HP LeftHand P4500, HP LeftHand P4300: Motherboard: Port1, Motherboard: Port2
• DL380, DL320s (NSM 2120): G4 Motherboard: Port1, G4 Motherboard: Port2
• VSA: Eth0
• NSM 260: Motherboard: Port0, Motherboard: Port1

Column: Description
Description: describes each interface listed. For example, bond0 is the Logical Failover Device.
Speed/Method: lists the actual operating speed reported by the device.
Duplex/Method: lists duplex as reported by the device.
Status: describes the state of the interface. See Table 16 for a detailed description of individual NIC status.
Frame Size: lists the frame size setting for the device.
Preferred: for Active-Passive bonds, indicates whether the device is set as preferred. The preferred interface is the interface within an Active-Passive bond that is used for data transfer during normal operation.

Managing settings on network interfaces
Configure or change the settings of the network interfaces in the storage nodes. See Network best practices on page 89 for more information.

Requirements
These settings must be configured before creating NIC bonds.

Changing speed and duplex settings
The settings for the storage node and the switch must be the same. Available settings are listed in Table 24.

Table 24: Setting st…
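As an illustration of the matching requirement above (the settings shown are examples only): if the connected switch port is hard-set to 1000 Mbps full duplex, set the storage node interface to 1000/Full as well, rather than leaving it on auto-negotiation. Leaving one side fixed while the other auto-negotiates commonly produces a duplex mismatch, which shows up as heavy retransmissions and severely degraded throughput.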
…dancy, 71; status and data transfer, 71; VSA levels and default configuration for VSA, 55; virtual devices, 60
RAID levels: defined, 327
RAID status, 71
RAID0: capacity, 57; definition, 57; devices, 60; single disk replacement, 83
RAID1/10: single disk replacement, 83
RAID10: capacity, 57; defined, 57
RAID5: capacity, 58; configurations, 58; defined, 58; disk arrays, 63; hot spares, 58
RAID5/50: single disk replacement, 83
RAID50: capacity, 59; defined, 59
RAID6: single disk replacement; capacity, 59; definition, 59; devices, 66; hot-swapping, 59
raw storage, 233
re-create the RAID array, 330
read-only permissions, 126
read-only volumes, 250
read/modify permissions, 126
rebooting storage nodes, 48
rebuild data when not running manager, 328
rebuild volume data, 333
rebuilding: RAID, 86; rate for RAID, 69
reconfiguring RAID, 70
recurring snapshots, 254
redundancy: data, 223
redundant array of independent disks, see RAID
registering add-on applications, 320
registering features, 80: Feature Registration tab, 320, 321; for a management group, 180; for a storage node, 50; registration information, 324
Remote Copy: backing out of evaluation, 318; evaluating, 318; registering, 320
remote copies: defined, 32
remote log files, 168: adding, 168; changing remote log file target computer, 169; configuring target computer, 169; removing old logs, 169
remote volumes, 238; see Remo…
…dancy modes, 225; setting volume size, 222; speed and duplex settings, 109; using snapshots as source volumes for data mining, 247; for data preservation, 247; protection against data deletion, 247
block device, iSCSI as, 233
block storage system, 233
boot devices: activating dedicated, 54; checking status of dedicated, 52; configuring dedicated, 53; deactivating dedicated, 53; dedicated, 51; removing dedicated, 53; replacing dedicated, 53; starting and stopping dedicated, 52; status of dedicated, 52
Boot Devices tab, 52
BOOTP, 91
capacity: RAID50, 59; clusters, 209; disk capacity and volume size, 233; of the SAN, 221; planning volume size, 221; planning, full provisioning, 222; planning, thin provisioning, 222; RAID0, 57; RAID10, 57; RAID5, 58; RAID6, 59; storage nodes, 209
capacity management: and scheduled snapshots, 254; snapshot thresholds, 226
Centralized Management Console: overview; alerts window, 34; features of Getting Started Launch Pad, 32; icons used in menu bar, 30; navigation window, 31; tab window, 33
Challenge Handshake Authentication Protocol, see CHAP
changing: administrative group description, 126; administrative group permissions, 126; RAID (erases data), 70; cluster configuration, 211; clusters for volumes, 242; data availability, 242; data redundancy, 242; host names, 44; IP address of storage node, 91; local bandwidth, 183; maintenance mode to normal, 186; management groups, 82; order of NTP server acce…
…data transfer resumes on the preferred NIC.

[Adaptive Load Balancing] A type of network bonding in which the logical interface performs load balancing of data transmission.

[Add-on application] An additional feature, purchased separately from the SAN iQ software.

[Application-managed snapshot] Snapshot of a volume that is taken while the application that is serving that volume is quiesced. Because the application is quiesced, the data in the snapshot is consistent with the application's view of the data; that is, no data was in flight or cached waiting to be written.

[Authentication group] For release 7.0 and earlier, identifies the client or entity accessing the volume. Not used in release 8.0 and later.

[Auto Discover] A feature in the CMC that automatically searches for storage nodes on the subnet the CMC is connected to. Any storage nodes it discovers appear in the navigation window on the left side of the CMC.

[Bond] Interface created for network interface failover; it only appears after configuring for failover.

[Bonding] Combining physical network interfaces into a single logical interface.

[Boot devices] Compact flash cards from which the storage node boots up. Also known as disk-on-modules, or DOMs.

[Centralized Management Console] Management interface for the SAN iQ software.

[CHAP] Challenge Handshake Authentication Protocol (CHAP) is a standard authentication protocol.

[Clone point] The snapshot that has two or more volumes associated with it. A clone point is created when a SmartClone volume is created from a snapshot or from snapshot temporary space.

Term: CLI, Cluster, CMC, Communication Mode…
…dden shadowcopy
17. Exit diskpart by typing exit.
18. Reboot the server.
19. Verify that the disk is available by launching Windows Logical Disk Manager. You may need to assign a drive letter, but the disk should be online and available for use.

If the server is running Windows 2008 or later, and you promoted a remote application-managed snapshot to a primary volume, start the HP LeftHand Storage Solution CLI and clear the VSS volume flag by typing:

    clearvssvolumeflags volumename=<drive_letter>

where <drive_letter> is the corresponding drive letter, such as G:. Then reboot the server.

Managing snapshot temporary space
You can either delete the temporary space to free up space on the cluster, or, if you need data that may have been written to the temporary space, convert that temporary space to a SmartClone volume.

Convert the temporary space
Convert the snapshot temporary space if you have written data to a mounted snapshot and you need to permanently save or access that data. Converting the temporary space creates a SmartClone volume that contains the original snapshot data, plus any additional data written after the snapshot was mounted.

Prerequisites
Stop any applications that are accessing the snapshot, and log off all related iSCSI sessions.
1. Right-click the snapshot for which you want to save the additional data.
2. Select Convert Temporary Space from the menu.
3. Type a name for the volume a…
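For reference, a complete diskpart sequence for clearing the volume attributes looks like the following sketch; the volume number 3 is a placeholder, since the number assigned to the mounted snapshot on your server will differ.

    diskpart
    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> attributes volume clear readonly hidden shadowcopy
    DISKPART> exit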
…de them simultaneously.

NOTE: During the upgrade procedure, you may receive a warning that the CPU Utilization value exceeds 90, for example, "CPU Utilization 97.8843. Value exceeds 90." This is an expected occurrence during an upgrade. No action is needed.

1. Log in to the first storage node you want to upgrade.
2. Click Storage Node Tasks on the Details tab and select Install Software.
3. From the list, select the storage node that you want to upgrade. Use the CTRL key to select multiple storage nodes to upgrade from the list.
4. Select this radio button: Install file on selected storage nodes one at a time (Recommended).
5. Click Browse to navigate to the folder where you copied the upgrade or patch file.
6. Select the file and click Open Install File. Focus returns to the Install Software window. When the file name is present, the Install button becomes enabled.
7. Review the version and description to be sure that you are using the correct upgrade file.
8. Click Install. Optionally, select the check box to have the install messages automatically scroll; these messages can be saved to a file. After the installation completes, click Save To File and choose a name and location for the file.
After the installation completes, the system reboots. After the system comes back online, it conducts a post-install qualification. After the system passes the post-install qualification, the upgrade process is complete.
9. Click Close when the install…
…dently. On the appropriate site, depending upon your configuration, select one of the storage nodes and start the virtual manager on it. That site then recovers quorum and operates as the primary site. When communication between the sites is restored, the managers in both sites reestablish communication and ensure that the data in both sites is resynchronized. When the data is resynchronized, stop the virtual manager to return to the disaster-tolerant configuration.

Starting a virtual manager
A virtual manager must be started on a storage node, ideally one that isn't already running a manager. However, if necessary, you can start a virtual manager on a storage node that is already running a manager.
1. Select the storage node on which you want to start the virtual manager.
2. Click Storage Node Tasks on the Details tab and select Start Virtual Manager.

[Figure residue: CMC navigation window for management group Exchange and the Details tab for storage node Denver-2 (model NSM 4150, software version 8.0.00.1659.0, hostname Denver-2, IP address 10.0.60.32, raw space 4.08 TB)…]
[Figure residue: RAID Setup tab showing a RAID5 device (/dev/.../host1/bus1/target0/lu...) with status Normal/Rebuilding.]

Figure 63: RAID rebuilding on the RAID Setup tab

[Figure residue: Disk Setup tab listing drives with status Active or Rebuilding, health normal, Safe to Remove status, model, serial number, class (SATA 3.0 Gb), and capacity (231.90 to 232.74 GB).]

Figure 64: Disk rebuilding on the Disk Setup tab

4. Managing the network
A physical storage node has two TCP/IP network interfaces (NICs). For each physical storage node, you can:
• Configure the individual TCP/IP interfaces.
• Set up and manage a DNS server.
• Manage the routing table.
• View and configure the TCP interface speed and duplex, frame size, and NIC flow control.
• Update the list of managers running in the management group to which a storage node belongs.
…dit DNS Suffixes.
3. Select the name again in the Edit DNS Suffixes window.
4. Click Remove.
5. Click OK to remove the DNS suffix from the list.

Setting up routing
The Routing tab displays the routing table. You can specify static routes and/or a default route.

NOTE: If you specify a default route here, it will not survive a reboot or shutdown of the storage node. To create a route that will survive a storage node reboot or shutdown, you must enter a default gateway on the TCP/IP tab. See Configuring the IP address manually on page 91.

Information for each route listed includes the device, the network, gateway, mask, and flags.

Adding routing information
1. In the navigation window, select a storage node and log in.
2. Open the tree and select the TCP/IP Network category.
3. Select the Routing tab.
4. Click Routing Tasks and select Edit Routing Information.
5. Click Add.
6. Select the port to use for routing in the Device list.
7. Type the IP address portion of the network address in the Net field.
8. Type the IP address of the router in the Gateway field.
9. Select the netmask.
10. Click OK.
11. Use the arrows on the routing table panel to order devices according to the configuration of your network. The storage node attempts to use the routes in the order in which they are listed.

Editing routing information
You can only edit optional routes you have added.
1. In the navigation…
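For example (all addresses here are hypothetical), to route traffic for a backup network 10.5.12.0/24 through a router at 10.0.60.1, you might enter the following values in the Add Routing Information fields described above:

    Device:  Motherboard:Port0
    Net:     10.5.12.0
    Gateway: 10.0.60.1
    Netmask: 255.255.255.0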
…dow, double-check that the order of the storage nodes in the cluster list matches the original order.

Rebuild volume data
After the storage node is successfully added back to the cluster, the adjacent storage nodes start rebuilding data on the repaired storage node.
1. Select the cluster and select the Disk Usage tab.
2. Verify that the disk usage on the repaired storage node starts increasing.
3. Verify that the status of the volumes and snapshots is Restriping.
Depending on the usage, it may take anywhere from a few hours to a day for the data to be rebuilt on the repaired storage node.

Controlling server access
Use the Local Bandwidth Priority setting to control server access to data during the rebuild process.
• When the data is being rebuilt, the servers that are accessing the data on the volumes might experience slowness. Reduce the Local Bandwidth Priority to half of its current value for immediate results.
• Alternatively, if server access performance is not a concern, raise the Local Bandwidth Priority to increase the data rebuild speed.

Change local bandwidth priority
1. Right-click the management group and select Edit Management Group. The current Bandwidth Priority value indicates that each manager in that management group will use that much bandwidth to transfer data to the repaired storage node. Make a note of the current value, so it can be restored after the data rebuild completes.
2. Change the bandwidth value as de…
…dow and select Delete Group. A confirmation window opens.
2. Click OK.
3. Click OK to finish.

7. Using SNMP
The storage node can be monitored using an SNMP agent. You can also enable SNMP traps. The Management Information Base (MIB) is read-only and supports SNMP versions 1 and 2c. See Installing the LeftHand Networks MIB on page 132 for a list of LeftHand Networks MIBs.

Using SNMP
SNMP and SNMP traps are disabled by default on all but the DL320s (NSM 2120) and the DL380. You must enable SNMP on the storage node and set a community string in order to access SNMP MIB objects. Once you configure SNMP on a single storage node, you can copy the settings to other storage nodes using Copy Configuration. For more information, see Configuring multiple storage nodes on page 40.

SNMP on the DL380 and DL320s (NSM 2120)
SNMP is required for the HP server management system, Insight Manager. Insight Manager allows an administrator to manage and monitor a large number of servers from a single location. Because SNMP is required for Insight Manager, it is permanently enabled on the DL380 and DL320s (NSM 2120).

NOTE: When you use the Insight Manager application with a DL320s (NSM 2120), misleading attributes will display. Open Insight Manager and select File System Space Used. The display shows just one file system, which is not representative of the true space used.

Getting there…
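Once SNMP is enabled and a community string is set as described above, any standard SNMP tool can read the MIB. For example, with the Net-SNMP command-line tools (the address and community string below are placeholders for your own values):

    # Walk the standard system subtree of a storage node at 10.0.60.32
    snmpwalk -v 2c -c <community_string> 10.0.60.32 system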
…ds spent on user tasks, kernel tasks, and in the idle state. Machine up time is the total time the storage node has been running from initial boot-up.
Motherboard information: serial number of the chassis.
Drive info: for each drive, reports the model, serial number, capacity, and firmware version.
Drive status: for each drive, reports the status and health.
RAID: information about RAID.
RAID Rebuild Rate: rebuild rate is a priority, measured against other OS tasks.

The following terms also apply to the hardware report for the Dell 2950 (NSM 2060) and NSM 4150:
Unused devices: any device which is not participating in RAID. This includes drives that are missing, unconfigured drives, drives that were powered down, failed drives (rejected by the array with I/O errors), drives that are rebuilding, and hot-spare drives.
Statistics: information about the RAID for the storage node.
Unit number: identifies devices that make up the RAID configuration, including type of storage (BOOT, LOG, SANIQ, DATA), RAID level (0, 1, 5), status (Normal, Rebuilding, Degraded, Off), capacity, and rebuild statistics (% complete, time remaining).
RAID O/S partitions: information about O/S RAID.
Minimum rebuild speed: minimum speed at which the system can rebuild data.
Maximum rebuild speed: maximum…
…dule a snapshot of a volume every 30 minutes or more, and retain up to 50 snapshots. If you need to, you can pause and resume any schedule to snapshot a volume. Currently, you cannot create scheduled application-managed snapshots from SAN iQ. This function will be available in a future release.

NOTE: Scripting snapshots can also take place on the server side. Scripted snapshots offer greater flexibility for quiescing hosts while taking snapshots, and for automating tasks associated with volumes and their snapshots.

Best practices for scheduling snapshots of volumes
• Schedules to snapshot a volume require particular attention to capacity management, as described in Understanding how the capacity of the SAN is used on page 221.
• If you do not have an NTP server configured, before you create the schedule you should refresh the time setting of the management group, to ensure that the storage nodes are all set to the correct time.
• Configure schedules to snapshot a volume during off-peak hours. If setting schedules for multiple volumes, stagger the schedules with at least an hour between start times for best results.

Table 56: Requirements for scheduling snapshots
Requirement: Plan for capacity management. What it means: Scheduling snapshots should be planned with careful consideration for capacity management, as described in Managing capacity using volume size and snapshots on page 226. Pay attention to how you wan…
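As an illustration of the server-side scripting mentioned in the note above, a scheduled task could call the SAN iQ command-line interface to take a snapshot. The sketch below is hypothetical: the volume and snapshot names are examples, and the createSnapshot parameter names should be verified against the CLiQ User Manual for your SAN iQ release.

    rem Nightly snapshot of the Exchange logs volume (Windows batch, run from Task Scheduler)
    rem Quiesce the application first if required; snapshot names must be unique per run
    cliq createSnapshot volumeName=ExchLogs snapshotName=ExchLogs_Nightly1 login=10.0.60.32 userName=admin passWord=secret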
…dundancy.

Within each storage node, RAID1 or RAID10 can ensure that two copies of all data exist. If one of the disks in a RAID pair goes down, data reads and writes can continue on the other disk. Similarly, RAID5, RAID50, or RAID6 provides redundancy by spreading parity evenly across the disks in the set. If one disk in a RAID5 or RAID6 set goes down, data reads and writes continue on the remaining disks in the set. In RAID50, up to one disk in each RAID5 set can go down, and data reads and writes continue on the remaining disks.

RAID protects against failure of disks within a storage node, but not against failure of an entire storage node. For example, if network connectivity to the storage node is lost, then data reads and writes to the storage node cannot continue.

NOTE: If you plan to create all data volumes on a single storage node, use RAID1/10, RAID5, or RAID6 to ensure data redundancy within that storage node.

Using volume replication in a cluster
A cluster is a group of storage nodes across which data can be replicated. Volume replication across a cluster of storage nodes protects against disk failures within a storage node, failure of an entire storage node, or external failures like networking or power. For example, if a single disk or an entire storage node in a cluster goes offline, data reads and writes can continue, because an identical copy of the volume exists on other storage nodes in the cluster.

Using RA…
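A brief hypothetical example of how RAID and replication combine: in a three-node cluster where each node runs RAID5 internally, a volume configured for 2-way replication keeps two complete copies of its data on different nodes. A single disk failure is absorbed by RAID5 inside one node, with no data movement across the network; a whole-node failure (power, NIC, or switch port) is absorbed by the second replica on another node, and reads and writes continue.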
…the Hardware Information tab in the CMC for additional temperature information.

The monitored-variable table continues (variable name - units; permanent variable; specify frequency; default frequency; default action or threshold):
• Voltage Status (NSM 160, NSM 260) - Status; Yes; Yes; 1 minute; CMC alert if not Normal.
• Volume Restripe Complete - True/False; No; Yes; 1 minute; CMC alert when True.
• Volume Status - Status; No; Yes; 1 minute; CMC alert if the volume status changes.
• Volume Threshold Change - Status; Yes; Yes; 1 minute; CMC alert when True.
• Volume Thresholds - Status; No; Yes; 15 minutes; CMC alert if the threshold is exceeded for any volume or snapshot in the management group.

The BBU capacity test runs monthly to monitor the battery life charge. If the battery life remaining is less than 72 hours, the cache is shut off to protect data. When the cache is shut off, there will be a decrease in performance.

Setting CMC notification of alerts
By default, all alerts display on the console in the Alerts window at the bottom. This method of notification may be turned off and on:
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Alerts.
3. Select the Alert Setup tab.
4. Select any alert in the list.
5. Click Alert Setup Tasks and select Set Threshold Actions.
6. Click the checkbox for HP LeftHand Centralized Management Console alert.
7. Click OK.

Setting SNMP notification of alerts
Configure…
…, 186
  overview, 7
  reading configuration summary, 75
  removing storage nodes, 87
    prerequisites, 87
  restoring, 84
  setting local bandwidth, 183
  shut down procedure, 185
  shutting down, 184
  starting managers, 181
  starting up, 185
  using virtual manager, 200
    configuration for disaster recovery, 200
Management Information Base, See MIB
managers
  configuring Failover Manager, 190
  Failover, 173
  functions of, 172
  implications of stopping, 182
  overview, 7
  quorum and fault tolerance, 172
  starting, 181
  stopping, 181
  virtual, 200
managing disks, 72
Map View, 279
  changing the view, 280
  for SmartClone volumes, 280
  toolbar, 280
menu bar, 30
MIB
  exceptions, 325
  for SNMP, 132
  installing, 132
  locating, 132
  SNMP, 325
  supported MIBs, 325
  versions, 132
migrating RAID, 69
mixed RAID, 69
monitored variables
  adding, 136
  editing, 137
  list of, 135
  removing, 138
  viewing summary of, 141
monitoring performance, 297
monitoring interval, in the Performance Monitor, 309
monitoring RAID status, 7
Motherboard:Port1 and Motherboard:Port2, 90
mounting snapshots, 250
Multi-Site SAN
  and Failover Manager, 173
  using Failover Manager in, 189
Multi-Site SAN sites, 32

N
naming SmartClone volumes, 266, 267
navigation window, 30
  clearing items in, 40
network
  finding storage nodes on, 29, 39
  managing settings, 89
  overview, 89
  settings for Failover Manager, troubleshooting, 195
network interface bonds
  deleting, …
…this cannot be changed in the future without destroying the management group.
• A storage node that you identified with the Find wizard and then configured
• A name for the cluster
• A name for the volume
• The size of the volume

Enabling server access to volumes
Use the Assign Volume and Snapshot wizard to prepare the volume for server access. You set up application servers in the management group, then assign volumes to the servers. See Chapter 17 on page 289 for a complete discussion of these functions. To work through the Assign Volume and Snapshot wizard, you must first have created a management group, cluster, and at least one volume. You should also plan the following:
• The application servers that need access to volumes
• The iSCSI initiator you plan to use. You need the server's initiator node name, and CHAP information if you plan to use CHAP.

Continuing with the SAN/iQ software
This section describes techniques for working with the CMC on an ongoing basis. It also describes how to copy the configuration of one storage node to other storage nodes.

Finding storage nodes on an ongoing basis
The Find settings from your first search are saved in the CMC. Every time you open the CMC, the same search automatically takes place, and the navigation window is populated with all the storage nodes that are found.

Turn off Auto Discover for storage nodes
If you do not want the CMC to automatically discover…
(Figure: Differentiating types of CHAP)

CHAP is optional. However, if you configure 1-way or 2-way CHAP, you must remember to configure both the server and the iSCSI initiator with the appropriate characteristics. Table 74 lists the requirements for configuring CHAP.

Table 74 Configuring iSCSI CHAP

CHAP not required:
• Server in the SAN/iQ software - Initiator node name only.
• iSCSI initiator - No configuration requirements.

1-way CHAP:
• Server in the SAN/iQ software - CHAP name; target secret (12-character minimum).
• iSCSI initiator - Enter the target secret (12-character minimum) when logging on to an available target.

2-way CHAP:
• Server in the SAN/iQ software - CHAP name; target secret (12-character minimum); initiator secret (12-character minimum).
• iSCSI initiator - Enter the initiator secret (12-character minimum) and the target secret (12-character minimum).

If using CHAP with a single node only, use the initiator node name as the CHAP name.

iSCSI and CHAP terminology
The iSCSI and CHAP terms used vary based on the operating system and iSCSI initiator you are using. The table below lists the terms for two common iSCSI initiators. For Linux, refer to the documentation for the initiator you are using; Linux iSCSI initiators may use a command line interface or a configuration file (see the example below).

Table 75 iSCSI terminology

SAN/iQ CMC | Microsoft | VMware
Initiator Node Name | Initiator Node Name | iSCSI Name
CHAP Name | Not used | CHAP Name
Target Secret | Target Secret | CHAP…
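To illustrate the Linux configuration-file approach mentioned above, the snippet below shows how these terms typically map onto open-iscsi's /etc/iscsi/iscsid.conf. This assumes the open-iscsi initiator (an assumption, since Linux initiators vary); the names and secrets are placeholders and must match the server entry you configured in the SAN/iQ software.

    # 1-way CHAP: the initiator authenticates to the target.
    # The CHAP name and target secret must match the CMC server entry.
    node.session.auth.authmethod = CHAP
    node.session.auth.username = iqn.1994-05.com.redhat:appserver1
    node.session.auth.password = target-secret-12chars

    # 2-way CHAP adds target-to-initiator authentication (the initiator secret).
    node.session.auth.username_in = iqn.1994-05.com.redhat:appserver1
    node.session.auth.password_in = init-secret-12chars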
… ..... 315
Changing the scaling factor ..... 315
Exporting data ..... 315
Exporting statistics to a CSV file ..... 315
Saving the graph to an image file ..... 316
19 Registering advanced features ..... 317
Evaluating advanced features ..... 317
30-day evaluation period ..... 317
Tracking the time remaining in the evaluation period ..... 318
Viewing licensing icons ..... 318
Starting the evaluation period ..... 318
Backing out of Remote Copy evaluation ..... 318
Scripting evaluation ..... 319
Turn on scripting evaluation ..... 319
Turn off scripting evaluation ..... 319
Registering advanced features ..... …
Figure 132 Viewing the organic layout of SmartClone volumes and associated snapshots in the Map View

Manipulating the Map View
The Map View window contains display tools to control and manipulate the view of SmartClone volumes, using either the Tree or the Organic view. The display tools are available from the Map View Tasks menu, or from the toolbar across the top of the window. The tools function the same from either the toolbar or the Map View Tasks menu.

Figure 133 Toolbar with display tools in the Map View window

Using the display tools
Use the tools described in Table 63 to select specific areas of the map to view, zoom in on, rotate, and move around the window. If you have a complex configuration of SmartClone volumes, use the Map View tools to easily view and monitor the configuration. Figure 134 on page 281 shows an example of the Magnify tool.

Table 63 Map View display tools
• Zoom In - incrementally magnifies the Map View window.
• Zoom Out - incrementally reduces the Map View window.
…Task button back into view.
• Sortable columns - Click on a column head to sort the list in that column.
• Sizable columns - Drag a column boundary to the right or left to widen the column for better reading.

Using the alert window
Alert messages appear in the alert window as they are generated, and are removed when the alert situation resolves in one of three ways:
• On its own
• When you remove them with the Alert Tasks commands
• When you close the CMC
To see old alerts, view those alerts from the storage node configuration categories in the Alerts category. Select the Alert Log File tab and click the link to refresh the log file report. The log of alerts displays in the window.

Setting naming conventions
Use the Preferences window, opened from the Help menu, to set naming conventions for elements you create when building the HP LeftHand Storage Solution. Default values are provided, or you can create your own set of customized values. When you install the CMC for the first time or upgrade from release 7.0.x, default names are enabled for snapshots, including schedules to snapshot a volume, and for SmartClone volumes. Default names are disabled for management groups, clusters, and volumes.

Preferences
To use a default name when creating one of the elements below, check the Use Name checkbox. If a custom name is desired, check the Use Name checkbox and enter the custom name in the Use Custom field. Click…
…a copy of a volume for use with backup and other applications. You create snapshots from a volume on the cluster. Snapshots are always thin provisioned. Thin provisioning snapshots saves actual space in the SAN, while letting you have more snapshots without the concern of running out of cluster space.

Snapshots can be used for multiple purposes, including:
• Source volumes for data mining and other data use
• Source volumes for creating backups
• Data or file system preservation before upgrading software
• Protection against data deletion
• File-level restore without tape or backup software

Snapshots versus backups
Backups are typically stored on different physical devices, such as tapes. Snapshots are stored in the same cluster as the volume. Therefore, snapshots protect against data deletion, but not against device or storage media failure. Use snapshots along with backups to improve your overall data backup strategy.

At any time you can roll back to a specific snapshot. When you do roll back, you must delete all the snapshots created after that snapshot. Also, using an iSCSI initiator, you can mount a snapshot to a different server and recover data from the snapshot to that server (see the example below).

The effect of snapshots on cluster space
Snapshots take up space on the cluster. Because snapshots are thin provisioned, they use less space than full provisioned snapshots. Prior to this release, snapshots were full provisioned. Plan how you intend…
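As an example of the recovery path described above, the sketch below mounts a snapshot from a Linux server using the open-iscsi iscsiadm utility. The virtual IP address, target IQN, and device name are hypothetical placeholders; the actual IQN appears in the CMC once the snapshot is assigned to the server.

    # Discover iSCSI targets through the cluster virtual IP (placeholder address)
    iscsiadm -m discovery -t sendtargets -p 10.0.60.100:3260

    # Log in to the snapshot's target (placeholder IQN)
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt:71:dbvol-snap1 -l

    # Mount the new device read-only and copy off the files to recover
    mount -o ro /dev/sdc1 /mnt/snap-recovery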
…will be active and Eth1 will be Passive (Ready). Then if Eth0 fails, Eth1 changes from Passive (Ready) to Active, and Eth0 changes to Passive (Failed). Once the link is fixed and Eth0 is operational, there is a 30-second delay, and then Eth0 becomes the active interface; Eth1 returns to the Passive (Ready) state.

NOTE: When the active interface comes back up, there is a 30-second delay before it becomes active.

Table 17 Example active-passive failover scenario and corresponding NIC status

1. Active-Passive bond0 is created; the active and preferred interface is Eth0.
   • Bond0 is the master logical interface.
   • Eth0 is Active.
   • Eth1 is connected and is Passive (Ready).
2. The active interface fails; bond0 detects the failure and Eth1 takes over.
   • Eth0 status becomes Passive (Failed).
   • Eth1 status changes to Active.
3. The Eth0 link is restored.
   • Eth0 status changes to Active after a 30-second delay.
   • Eth1 status changes to Passive (Ready).

Summary of NIC status during failover
Table 18 shows the states of Eth0 and Eth1 when configured for Active-Passive failover.

Table 18 NIC status during failover with Active-Passive

Failover status: Normal operation
• Eth0 - Preferred: Yes; Status: Active; Data transfer: Yes
• Eth1 - Preferred: No; Status: Passive (Ready); Data transfer: No

Failover status: Eth0 fails…
…dedicated boot devices will not display the Boot Devices tab.

Table 4 Dedicated boot devices by storage node
Platform - Number and type of boot devices
• NSM 160 - 1 compact flash card
• NSM 260 - 1 compact flash card
• NSM 4150 - 2 hard drives

In storage nodes with two dedicated boot devices, both devices are active by default. If necessary, compact flash cards can be deactivated or activated using the buttons on this tab. However, you should only take action on these cards if instructed by HP LeftHand Networks Technical Support.

The following storage nodes do not have dedicated boot devices:
• DL380
• DL320s (NSM 2120)
• IBM x3650
• VSA
• Dell 2950
• NSM 2060
• HP LeftHand P4500

Checking boot device status
View dedicated boot device status on the Boot Devices tab in the Storage category in the storage node tree. The platforms that have dedicated boot devices are listed in Table 4 on page 51.

Getting there
1. Select a storage node in the navigation window and log in if necessary.
2. Open the tree below the storage node and select Storage.
3. Select the Boot Devices tab.

The status of each dedicated boot device on the storage node is listed in the Status column. Table 5 on page 52 describes the possible statuses for boot devices.

NOTE: Some statuses only occur in a storage node with two boot devices.

Table 5 Boot device status
Boot device status - Description
• Active - The device is synchronized…
…for a disk replacement differs according to the RAID level of the storage node and whether it is a hot-swap platform. You should carefully plan any disk replacement to ensure data safety, regardless of whether the platform is hot-swap. The following checklists outline steps to help ensure your data remains safe while you replace a disk.

Identify the physical location of the storage node and disk
Before you begin the disk replacement process, identify the physical location of both the storage node in the rack and the disk in the storage node.
• Know the name and physical location of the storage node that needs the disk replacement.
• Know the physical position of the disk in the storage node. See "Verifying disk status" on page 74 for diagrams of disk layout in the various platforms.
• Have the replacement disk ready, and confirm that it is of the right size and has the right carrier.

Best practice checklist for single disk replacement in RAID0

CAUTION: Do not use hot-swap procedures on any storage node running in RAID0.

In RAID0, always power off the drive in the CMC before removing it. RAID0 provides no fault tolerance by itself, so when you do power off the drive, you lose the data on the storage node. Therefore, if you need to replace a disk in a RAID0 configuration, HP recommends the following:
• All volumes and snapshots have a minimum of 2-way replication. If volumes or snapshots are not replicated, change…
…the logs list.
4. Click Log File Tasks and select Edit Remote Log Destination. The Edit Remote Log window opens.
5. Change the log type or destination and click OK. Be sure that the remote computer has the proper syslog configuration.

Deleting remote logs
Delete a remote log when it is no longer used.
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Hardware.
3. Select the Log Files tab.
4. Click Log File Tasks and select Delete Remote Log Destination. A confirmation message opens.
5. Click OK.

NOTE: After deleting a remote log file from the storage node, remove references to this log file from the syslog configuration on the target computer.

9 Working with management groups
A management group is a collection of one or more storage nodes. It is the container within which you cluster storage nodes and create volumes for storage. Creating a management group is the first step in creating an IP SAN with the SAN/iQ software.

Functions of management groups
Management groups serve several purposes:
• Management groups are the highest administrative domain for the SAN. Typically, storage administrators will configure at least one management group within their data center.
• Organize storage nodes into different groups for categories of applications and data. For example, you might create a management group for Oracle applications and a separate…
…may not be consistent with the application's view of the data.

Preferred Interface - An interface within an active backup bond that is used for data transfer during normal operation.

Quorum - A majority of managers required to be running and communicating with each other in order for the SAN/iQ software to function.

RAID - Originally "redundant array of inexpensive disks," now "redundant array of independent disks"; refers to a data storage scheme using multiple hard drives to share or replicate data among the drives.

RAID Level - Type of RAID configuration:
• RAID0 - data striped across the disk set
• RAID1 - data mirrored from one disk onto a second disk
• RAID10 - mirrored sets of RAID1 disks
• RAID5 - data blocks are distributed across all disks in a RAID set; redundant information is stored as parity distributed across the disks
• RAID50 - mirrored sets of RAID5 disks

RAID Quorum - Number of intact disks required to maintain data integrity in a RAID set.

RAID Rebuild Rate - The rate at which the RAID configuration rebuilds if a disk is replaced.

RAID Status - Condition of RAID on the storage node:
• Normal - RAID is synchronized and running. No action is required.
• Rebuild - A new disk has been inserted in a drive bay and…

Other terms defined in this part of the glossary include: Register, Repair NSM, Replication Level, Replication Priority, Restripe, Resync, Rolling Back, SAN/iQ Interface, Server, Shared Snapshot, SmartClone Volume, and Snapshot.
… nodes ..... 58
Information in the RAID setup report ..... 60
Data availability and safety in RAID configurations ..... 68
Disk management tasks for storage nodes ..... 73
Description of items on the disk report ..... 73
Identifying the network interfaces on the storage node ..... 90
Comparison of active-passive, link aggregation dynamic mode, and adaptive load balancing bonding ..... 93
Bonded network interfaces ..... 94
NIC status in active-passive configuration ..... 94
Example active-passive failover scenario and corresponding NIC status ..... 95
NIC status during failover with Active-Passive ..... 95
Link aggregation dynamic mode failover scenario and corresponding NIC status ..... 97
NIC status during failover with link aggregation dynamic mode ..... 98
Example adaptive load balancing failover scenario and corresponding NIC status ..... 99
NIC status du…
…a minute or two, and choose Try Again on the Network Search Failed message.
12. Verify the new bond interface.

Figure 70 Viewing a new active-passive bond
1. Bonded logical network interface
2. Physical interfaces shown as slaves

The bond interface shows as bond0 and has a static IP address. The two physical NICs now show as slaves in the Mode column.

13. (Optional, for Active-Passive bonds only.) To change which interface is the preferred interface in an Active-Passive bond, on the TCP Status tab, select one of the NICs in the bond and click Set Preferred.

Verify communication setting for new bond
1. Select a storage node and open the tree below it.
2. Select the TCP/IP Network category and click Communication…
Figure 88 Manually setting management group to normal mode

3. Click Set To Normal. The management group is reset to normal mode.

Removing a storage node from a management group

Prerequisites
• Stop the manager on the storage node, if it is running a manager. You may want to start a manager or Virtual Manager on a different storage node to maintain quorum and the best fault tolerance. See "Stopping managers" on page 181.
• (Optional) If the resultant number of storage nodes in the cluster is fewer than the volume replication level, you may have to reduce the replication level of the volume(s) on the cluster before removing the storage node from the cluster.
• Remove the storage node from the cluster. See "Removing a storage node from a cluster" on page 212.
• Let any restripe operations finish completely.

Remove the storage node
1. Log in to the management group from which you want to remove a storage node.
2. In the navigation window, select the storage node to remove.
3. Click Storage Node Tasks on the Details tab and select Remove from Management Group.
4. Click OK on the confirmation message. In the navigation window, the storage node is removed from the management group and moved to the Available Nodes pool.

Deleting a management group
Delete a management group when you are completely reconfiguring your SAN and you intend…
…These sections cover just a few examples of common questions and issues, but they are not an exhaustive discussion of the possibilities the Performance Monitor offers. For general concepts related to performance monitoring and analysis, see "Performance monitoring and analysis concepts" on page 309.

What can I learn about my SAN?
If you have questions such as these about your SAN, the Performance Monitor can help:
• What kind of load is the SAN under right now?
• How much more load can be added to an existing cluster?
• What is the impact of my nightly backups on the SAN?
• I think the SAN is idle, but the drive lights are blinking like crazy. What's happening?

Generally, the Performance Monitor can help you determine:
• Current SAN activities
• Workload characterization
• Fault isolation

Current SAN activities example
This example shows that the Denver cluster is handling an average of more than 747 IOPS, with an average throughput of more than 6 million bytes per second and an average queue depth of 31.76.

(Figure: Performance Monitor window for the Denver cluster, graphing Throughput Total (B/s) and IOPS Total statistics)
…storage node from a management group
• Reset the storage node configuration to factory defaults

Connecting to the Configuration Interface
Accessing the Configuration Interface is accomplished by:
• Attaching a keyboard and monitor (KVM) to the storage node (preferred), or
• Attaching a PC or a laptop to the storage node serial port using a null modem cable, and connecting to the Configuration Interface with a terminal emulation program.

Establishing a terminal emulation session on a Windows system
On the PC or laptop attached directly to the storage node with a null modem cable, open a session with a terminal emulation program such as HyperTerminal or ProComm Plus. Use the following settings: 19200, 8-N-1. When the session is established, the Configuration Interface window opens.

Establishing a terminal emulation session on a Linux/Unix system
If using Linux, create the following configuration file. You must create the file as root, or root must change permissions for /dev/cua0, in order to create the config file in /etc.

1. Create /etc/minirc.NSM with the following parameters:

    # Begin HP LeftHand Networks NSM configuration
    # Machine-generated file - use "minicom -s" to change parameters
    pr port             /dev/cua0
    pu baudrate         19200
    pu bits             8
    pu parity           N
    pu stopbits         1
    pu mautobaud        Yes
    pu backspace        DEL
    pu hasdcd           No
    pu rtscts           No
    pu xonxoff          Yes
    pu askdndir         Yes
    # End HP LeftHand Networks NSM configuration
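With the configuration file in place, the session would typically be started by invoking minicom with the profile name, which makes it read /etc/minirc.NSM. A minimal sketch, assuming minicom is installed and the null modem cable is attached:

    # Start the terminal emulation session using the NSM profile created above
    minicom NSM

When the session is established, the Configuration Interface should open, just as with the Windows terminal emulators described earlier.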
…Configure the following items to have alerts delivered via SNMP:
• Enable the SNMP system. See "Enabling SNMP agents" on page 129.
• Select the check box for SNMP Trap. See either "Setting notification for one variable" on page 142 or "Setting notification for several variables" on page 143.

Setting email notifications of alerts
If you request email notification of alerts, you must configure these settings:
• SMTP settings. See "Setting SMTP settings" on page 142.
• Application of SMTP settings to an entire management group. See "Applying SMTP settings to a management group" on page 142.
• Email notification preferences. See either "Setting notification for one variable" on page 142 or "Setting notification for several variables" on page 143.

Setting SMTP settings
Use the Email Server Setup tab to configure the SMTP settings for email communication. For more information on configuring active monitoring, see "Using alerts for active monitoring" on page 135.
1. In the Alerts category, select the Email Server Setup tab. The Email Server Setup window opens.
2. Click Email Server Setup Tasks and select Edit SMTP Settings.
3. Enter the IP address or host name of the email server.
4. Enter the email port. The standard port is 25.
5. (Optional) If your email server is selective about valid sender addresses on incoming emails, enter a sender address (for example, username@company.com). If you do not enter a sender address, t…
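Before saving the SMTP settings above, it can be worth confirming that the email server accepts connections from the storage network at all. One quick, illustrative check from any host on that network (the server address below is a placeholder):

    # Verify the SMTP server answers on the standard port; a banner such as
    # "220 mail.company.com ESMTP" indicates it is reachable
    telnet 10.0.1.25 25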
Network - …the time and time zone for the management group, identify the Domain Name Server, and use SNMP.
Storage Node Administration, Boot, Upgrade - User can add administrators and upgrade the SAN/iQ software.
System and Disk Report - User can view reports about the status of the storage node.

What the permission levels mean
• Read Only - User can only view the information about these functions.
• Read-Modify - User can view and modify existing settings for these functions.
• Full - User can perform all actions (view, modify, add new, delete) in all functions.

1. Add a user to the group:
   • Click Add in the Users section.
   • Select one or more users to add to the group.
   • Click Add.
2. Click OK to finish creating a new group.

Editing administrative groups
Each management group has an administration node in the tree below it. You can add, edit, and remove administrative groups here. Editing an administrative group includes changing the description, permissions, and users for the group.

Change the description of a group
1. Log in to the management group and select the Administration node.
2. Click Administration Tasks in the tab window and select Edit Group.
3. Change the Description as necessary.
4. Click OK to finish.

Changing administrative group permissions
Change the management capabilities available to members of a group.
1. Log in to the management group and select the Administration node.
…
…to the cluster in the spot held by the ghost IP address.
In the Edit Cluster window, first note the order of the storage nodes in the list. Next, remove the ghost storage node from the cluster. Return the repaired storage node to the cluster in the position of the ghost storage node. Use the arrows to return the storage nodes in the list to their original order.

Example: If the list of the storage nodes in the cluster had storage node A, <IP address>, storage node C, and storage node B is the repaired storage node, then after rearranging, the list of the storage nodes in the cluster should be storage node A, storage node B, storage node C; the storage node <IP address> will be in the management group.

Table 73 Replacing the ghost storage node with the repaired storage node
Storage nodes in cluster:
• Before rearranging - storage node A, <IP address>, storage node C
• After rearranging - storage node A, storage node B, storage node C

NOTE: If you do not arrange the storage nodes to match their original order, the data in the cluster is rebuilt across all the storage nodes instead of just the repaired storage node. This total data rebuild takes longer to complete and increases the chance of a second failure during this period. To ensure that only the repaired storage node goes through the rebuild, before you click the OK button in the Edit Cluster win…
…the volume below the size needed for data currently stored on the volume.

CAUTION: Decreasing the volume size is not recommended. If you shrink the volume in the CMC before shrinking it from the server file system, your data will be corrupted or lost.

To edit a volume
1. In the navigation window, select the volume you want to edit.
2. Click Volume Tasks and select Edit Volume. The Edit Volume window opens.

Changing the volume description
1. In the Description field, edit the description.
2. Click OK when you are finished.

Changing the cluster
Requirement: Either before or after changing the cluster, you must stop any applications that are accessing the volume and log off all associated iSCSI sessions. Even if using the HP LeftHand DSM for MPIO, log off the volumes from the server, add the VIP or the individual IP addresses of the storage nodes in the other cluster, then discover and mount the volumes. (An example log-off and re-discovery sequence appears at the end of this section.)
1. On the Edit Volume window, select the Advanced tab.
2. In the Cluster drop-down list, select a different cluster.
3. Click OK.

Changing the replication level
1. In the Replication Level drop-down list, select the level of replication you want.
2. Click OK when you are finished.

Changing the replication priority
1. Select the replication priority you want.
2. Click OK when you are finished.

Changing the size
1. In the Size field, change the number, and change the units if necessary.
2. Click OK when you are finished.
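For the "Changing the cluster" requirement above, the log-off and re-discovery sequence might look like the following on a Linux server using the open-iscsi iscsiadm utility. The IQN and the new cluster's virtual IP are hypothetical placeholders; use the values shown in the CMC for your volume and cluster.

    # Log off the volume's existing iSCSI session (placeholder IQN)
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt:45:dbvol -u

    # Discover targets through the new cluster's virtual IP, then log back in
    iscsiadm -m discovery -t sendtargets -p 10.0.61.100:3260
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt:45:dbvol -l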
…reaching safe limits ..... 177
85 Error when some item in the management group has reached its limit ..... 177
86 Manager quorum ..... 182
87 Notification of taking volumes offline ..... 185
88 Manually setting management group to normal mode ..... 186
89 VMware Console opens with Failover Manager installed and registered ..... 192
90 Failover Manager boots up ..... 192
91 Setting the host name and IP address ..... 193
92 Confirming the new IP address ..... 193
93 Logging in to the SAN/iQ Configuration Interface ..... 197
94 Two-site failure scenarios that are correctly using a virtual manager ..... 202
95 Adding storage nodes to cluster in alternating site order ..... 204
96 2-way replicated volume on 2-site cluster ..... 204
97 Management group with virtual manager added ..... 205
98 Starting a virtual manager when a storage node running a manager becomes unavailable ..... …
… ..... 209
Prerequisites ..... 209
Number of storage nodes in clusters ..... 209
To create additional clusters ..... 209
Configure virtual IP and iSNS for iSCSI ..... 210
New for release 8.0 ..... 210
Adding an iSNS server ..... 210
Tracking cluster usage ..... 210
Editing a cluster ..... 211
Prerequisite ..... 211
Getting there ..... 211
Adding a new storage node to an existing cluster ..... 211
Prerequisite ..... 211
Adding storage to a cluster ..... 212
Removing a storage node from a cluster ..... 212
Changing or removing the virtual IP ..... …
… ..... 320
Using license keys ..... 320
Registering available storage nodes for license keys ..... 320
Submitting storage node feature keys ..... 321
Entering license keys to storage nodes ..... 321
Registering storage nodes in a management group ..... 321
Submitting storage node feature keys ..... 322
Entering license keys ..... 323
Saving license key information ..... 324
Saving and editing your customer information ..... 324
Editing your customer information file ..... 324
Saving your customer information ..... 324
20 SNMP MIB information ..... 325
The supported MIBs ..... …
CAUTION: Decreasing the volume size is not recommended. If you shrink the volume in the CMC before shrinking it from the server file system, your data will be corrupted or lost.

Making an unavailable redundancy volume available
If a storage node becomes unavailable and needs to be repaired or replaced, and a replicated volume that is configured for redundancy becomes unavailable to servers, the following procedure allows you to return the volume to fully operational status:
1. Stop any applications that are accessing the volume and log off all associated iSCSI sessions.
2. Select the volume in the navigation window.
3. Right-click and select Edit Volume.
4. On the Advanced tab, change the Replication Priority from Redundancy to Availability. The Replication Level must be 2 or greater.
…:
• NICs must be configured to the same subnet.
• NICs must be connected to a single switch that is LACP-capable and supports 802.3ad link aggregation (an illustrative switch configuration follows Table 19 below). If the storage node is directly connected to a server, then the server must support 802.3ad link aggregation.

Which physical interface is preferred?
Because the logical interface uses both NICs simultaneously for data transfer, neither of the NICs in an aggregation bond is designated as preferred.

Which physical interface is active?
When the Link Aggregation Dynamic Mode bond is created, if both NICs are plugged in, both interfaces are active. If one interface fails, the other interface continues operating. For example, suppose Eth0 and Eth1 are bonded in a Link Aggregation Dynamic Mode bond. If Eth0 fails, then Eth1 remains active. Once the link is fixed and Eth0 is operational, it becomes active again; Eth1 remains active.

Table 19 Link aggregation dynamic mode failover scenario and corresponding NIC status

1. Link Aggregation Dynamic Mode bond0 is created.
   • Bond0 is the master logical interface.
   • Eth0 is Active.
   • Eth1 is Active.
2. The Eth0 interface fails. Because Link Aggregation Dynamic Mode is configured, Eth1 continues operating.
   • Eth0 status becomes Passive (Failed).
   • Eth1 status remains Active.
3. The Eth0 link failure is repaired.
   • Eth0 resumes Active status.
   • Eth1 remains Active.
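For reference, the switch-side half of the LACP requirement in the list above often looks like the following Cisco IOS-style sketch. This is an assumption about the switch environment rather than SAN/iQ configuration; the port numbers and channel group are placeholders, and the exact commands differ by vendor.

    ! Hypothetical example: put the two ports cabled to the storage node
    ! into a single 802.3ad (LACP) channel group
    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active
     channel-protocol lacp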
… ..... 63
RAID5 in the NSM 260 ..... 65
RAID6 in the DL320s (NSM 2120), HP LeftHand P4500 ..... 66
RAID5 in the HP LeftHand P4300 ..... 66
Planning the RAID configuration ..... 67
Data replication ..... 67
Using RAID for data redundancy ..... 67
Using volume replication in a cluster ..... 68
Using RAID with replication in a cluster ..... 68
Mixing RAID configurations ..... 69
Setting RAID rebuild rate ..... 69
Set RAID rebuild rate ..... 70
Reconfiguring RAID ..... 70
Requirements for reconfiguring RAID ..... 70
Changing preconfigured RAID on a new storage node ..... 70
Changing RAID on storage nodes in management groups ..... 70
To reconfigure RAID ..... …
…disk set. Data is stored across all disks in the array, which increases performance. However, RAID0 does not provide fault tolerance: if one disk in the RAID set fails, all data on the set is lost. Storage node capacity in RAID0 is equal to the sum total capacity of all disks in the storage node.

RAID1/10
RAID1/10 combines mirroring data within pairs of disks and striping data across the disk pairs. RAID1/10 combines data redundancy, or disk mirroring (RAID1), with the performance boost of striping (RAID0).

Storage capacity in RAID1/10
Storage capacity in RAID1/10 is half the total capacity of RAID0 in the storage node. The capacity of a single disk pair is equal to the capacity of one of the disks, thus yielding half the total capacity. Or, to put it another way:

    RAID10 capacity = (single disk capacity) x (total number of disks) / 2

Figure 11 Example of the capacity of disk pairs in RAID10
(Four 250 GB disks form two mirrored pairs; each pair's capacity is 250 GB, so the storage node capacity is 500 GB.)

RAID5, RAID5 + spare, or RAID50
RAID5 provides data redundancy by distributing data blocks across all disks in a RAID set. Redundant information is stored as parity distributed across the disks. The figure shows an example of the distribution of parity across four disks in a RAID5 set.

(Diagram: data blocks and parity distributed across Disk 1 through Disk 4.)

Figure 1…
…detailed list of monitored variables, see "List of monitored variables" on page 138.

NOTE: Critical variables, such as the Temperature Status and the CPU and motherboard temperatures, have thresholds that trigger a shutdown of the storage node.

Getting there
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Alerts.

As you can see, some alerts are delivered to the console only, some include email delivery, and some are routed through the SNMP system as a trap.

Selecting alerts to monitor
When your software was first installed, all variables were selected to be reported on. You can change actively monitored variables as needed:
• Adding variables to monitor. See "Adding variables to monitor" on page 136.
• Removing variables from monitoring. See "Removing a variable from active monitoring" on page 138.
• Changing the way variables are monitored. See "Editing a monitored variable" on page 137.
The section "List of monitored variables" on page 138 provides a list of all variables available for active monitoring.

Adding variables to monitor
The variables that the storage node is currently monitoring are listed in the box. All variables in the list are configured and set for CMC alerts.
1. Click the Alert Setup tab to bring it to the front.
2. Click Alert Setup Tasks and select Add Monitored Variables.
3. Select the variable that you want to begin…
…with a Multi-Site SAN and require a feature key. For more information about Multi-Site SANs, see the HP LeftHand P4000 Multi-Site HA/DR Solution Pack User Manual, installed in the Documentation subdirectory with the CMC program files.

Clusters
Clusters are groupings of storage nodes within a management group. Clusters contain the data volumes and snapshots.

Volumes
Volumes store data and are presented to application servers as disks.

Snapshots
Snapshots are copies of volumes. Snapshots can be created manually, as necessary, or scheduled to occur regularly. Snapshots of a volume can be stored on the volume itself or on a different, remote volume.

Remote Copies
Remote copies are specialized snapshots that have been copied to a remote volume, usually at a different geographic location, using the SAN/iQ software feature Remote Copy.

Icons
Each item in the navigation window has an icon depicting what type of item it is. A faded-looking icon indicates a remote item, rather than one that is local or primary. A description is available of all the icons used in the CMC:
1. Click Help on the menu bar.
2. Select Graphical Legend from the menu.
3. View the Items tab and the Hardware tab.
The Items tab displays the icons that represent items, activities, and status in the navigation window. The Hardware tab displays the icons that represent the different models of physical storage nodes that display in the navigation window.
… ..... 141
Setting SNMP notification of alerts ..... 141
Setting email notifications of alerts ..... 141
Setting SMTP settings ..... 142
Applying SMTP settings to a management group ..... 142
Setting notification for one variable ..... 142
Setting notification for several variables ..... 143
Viewing and saving alerts ..... 143
Saving the alert log of all variables ..... 143
Saving the alert history of a specific variable ..... 144
Using hardware information reports ..... 144
Running diagnostic reports ..... 144
… ..... 144
Viewing the diagnostic report ..... 145
List of diagnostic tests ..... 145
Using the hardware information report ..... …
… ..... 51
Checking status of dedicated boot devices ..... 51
Checking boot device status ..... 52
Getting there ..... 52
Starting or stopping a dedicated boot device (NSM 160, NSM 260) ..... 52
Powering on or rebooting storage nodes with two dedicated boot devices (NSM 160, NSM 260) ..... 53
Replacing a dedicated boot device ..... 53
NSM 160 ..... 53
NSM 4150 ..... 53
Replacing and activating a new boot flash card (NSM 160, NSM 260) ..... 54
3 Storage ..... 55
Configuring RAID and managing disks ..... 55
RAID as a storage requirement ..... 55
Getting there ..... 55
… ..... 56
Configuring and managing RAID ..... 56
Benefits of RAID ..... 57
RAID configurations defined ..... …
… ..... 45
Backing up storage node does not save all settings ..... 45
Backing up the storage node configuration file ..... 46
Restoring the storage node configuration from a file ..... 46
Powering off or rebooting the storage node ..... 47
Powering on or off, or rebooting, NSM 4150 ..... 47
Rebooting the storage node ..... 48
Powering off the storage node ..... 48
Upgrading the SAN/iQ software on the storage node ..... 49
Prerequisites ..... 49
Copying the upgrade files from web site ..... 49
Upgrading the storage node ..... 50
Registering advanced features for a storage node ..... 50
Determining volume and snapshot availability ..... …
… ..... 240
Editing a volume ..... 240
To edit a volume ..... 241
Changing the volume description ..... 241
Changing the cluster ..... 242
Changing the replication level ..... 242
Changing the replication priority ..... 242
Changing the size ..... 242
Making an unavailable redundancy volume available ..... 242
Deleting a volume ..... 243
Prerequisites ..... 243
New in release 8.0 ..... 243
To delete the volume ..... 243
… ..... 245
Using snapshots overview ..... 245
Snapshots versus backups ..... 245
Prerequisites ..... 245
… ..... 245
Single snapshots versus scheduled snapshots ..... …
…management group configuration ..... 183
Backing up a management group with remote copy relationships ..... 184
Backup a management group configuration ..... 184
Restoring a management group ..... 184
Safely shutting down a management group ..... 184
Prerequisites ..... 185
Shut down the management group ..... 185
If volumes are still connected to servers or hosts ..... 185
Start the management group back up ..... 185
Restarted management group in maintenance mode ..... 186
Manually change management group to normal mode ..... 186
Removing a storage node from a management group ..... 187
Prerequisites ..... 187
Remove the storage node ..... …
… ..... 27
HP websites ..... 27
Documentation feedback ..... 28
1 Getting started ..... 29
Using the CMC ..... 29
Auto discover ..... 29
… ..... 29
Performing tasks in the CMC using the menu bar ..... 30
Using the navigation window ..... 31
Logging in ..... 31
Traversing the navigation window ..... 31
Single-clicking ..... 31
Double-clicking ..... 32
Right-clicking ..... 32
Available nodes ..... 32
CMC storage hierarchy ..... 32
Icons ..... 33
Using the tab window ..... 33
Tab window conventions ..... 33
Using the alert window ..... …
(Feature Registration window showing, for each storage node, its Feature Key and its License Key status, for example "License Key: none set")

Figure 163 Selecting the feature key

1. For each storage node listed in the window, select the Feature Key.
2. Right-click and press Ctrl+C to copy the Feature Key.
3. Use Ctrl+V to paste the feature key into a text editing program, such as Notepad.
4. Go to https://webware.hp.com to register and generate the license key.

NOTE: Record the host name or IP address of the storage node along with the feature key. This record will make it easier to add the license key to the correct storage node when you receive it.

Entering license keys
When you receive the license keys, add them to the storage nodes in the Feature Registration window.
1. In the navigation window, select the management group.
2. Select the Registration tab.
3. Click Registration Tasks and select Feature Registration from the menu.
4. Select a storage node and click Edit License Key.

Figure 164 Entering the license key
…ensures that data is written to each site as part of the 2-way replication you configure when you create the volume. See "Creating additional clusters" on page 209.

Cluster Name: CreditData
Add storage nodes to the cluster in the following order:
• 1st storage node: Boulder-1
• 2nd storage node: Golden-1
• 3rd storage node: Boulder-2
• 4th storage node: Golden-2

CAUTION: If storage nodes are added to the cluster in any order other than alternating order by site, you will not have a complete copy of data on each site.

Figure 95 Adding storage nodes to cluster in alternating site order
(The New Cluster window lists the storage nodes to add; Multi-Site clusters require the same number of storage nodes in each site within the cluster.)

4. Create the volume with 2-way replication. Two-way replication causes two copies of the data to be written to the volume. Because you added the storage nodes to the cluster in alternating order, a complete copy of the data exists on each site. See "Planning data replication" on page 223.
…management group. See "Backing up a management group configuration" on page 183.

Adding a storage node to an existing management group
Storage nodes can be added to management groups at any time. Add a storage node to a management group in preparation for adding it to a cluster.
1. In the navigation window, select an available storage node that you want to add to a management group.
2. Click Storage Node Tasks on the Details tab and select Add to Existing Management Group.
3. Select the desired management group from the drop-down list of existing management groups.
4. Click Add.
5. (Optional) If you want the storage node to run a manager, select the storage node in the management group, right-click, and select Start Manager.
6. Repeat Step 1 through Step 4 to add additional storage nodes.
7. Save the configuration data of the changed management group. See "Backing up a management group configuration" on page 183.

Logging in to a management group
You must log in to a management group to administer the functions of that group.
1. In the navigation window, select a management group.
2. Log in by any of the following methods:
   • Double-click the management group.
   • Open the Management Group Tasks menu and select Log in to Management Group. You can also open this menu from a right-click on the management group.
   • Click any of the "Log in to view" links on the Details tab.
3. Enter the user name and password and click Log In.
…management group with four storage nodes in one cluster. The cluster spans two geographic sites, with two storage nodes at each site. The cluster contains a single volume with 2-way replication that spans both sites.

Configuration steps
The following configuration steps ensure that all the data is replicated at each site and the managers are configured correctly to handle disaster recovery.

1. Name storage nodes with site-identifying host names. To ensure that you can easily identify which storage nodes reside at each site, use host names that identify the storage node location. See "Changing the storage node hostname" on page 44.
   • Management Group Name: TransactionData
   • Storage Node Names: Boulder-1, Golden-1, Boulder-2, Golden-2

2. Create the management group; plan the managers and virtual manager. When you create the management group in the 2-site scenario, plan to start two managers per site and add a virtual manager to the management group. You now have five managers for fault tolerance. See "Managers overview" on page 171.

3. Add storage nodes to the cluster in alternating order. Create the cluster, then add the storage nodes to the cluster in alternating order of site, as shown in the bulleted list. The order in which the storage nodes are added to the cluster determines the order in which copies of data are written to the volume. Alternating the addition of storage nodes by site location…
ent models may result in a monitored variable configuration with unsupported variables, incorrect thresholds, or removed variables. Be sure to verify that the configuration is correct on the storage nodes you copy to.
1. In the navigation window, select the storage node that has the configuration that you want to copy to other storage nodes.
2. Click Storage Node Tasks on the Details tab and select Copy Configuration.
3. In the Configuration Settings section, select which configurations you want to copy.
4. In the Copy Configurations to nodes section, select the storage nodes to which you want to copy the configurations.
5. Click Copy. The configuration settings are copied to the selected storage nodes.
6. Click OK to confirm the operation and close the window.

2 Working with storage nodes
Storage nodes displayed in the navigation window have a tree structure of configuration categories under them. The storage node configuration categories include:
• Alerts
• Hardware
• SNMP
• Storage
• TCP/IP Network

Storage node configuration categories
Storage node configuration categories allow access to all the configuration tasks for individual storage nodes. You must log in to each storage node individually to configure, modify, or monitor the functions of that storage node.
eplication level and priority setting.

Volume availability by replication level and priority setting:
• Availability priority — the volume is available to a server when:
  • None: all storage nodes are up.
  • 2-Way: 1 of every 2 adjacent storage nodes is up.
  • 3-Way: 1 of every 3 adjacent storage nodes is up.
  • 4-Way: 1 of every 4 adjacent storage nodes is up.
• Redundancy priority — the volume is available to a server when:
  • None: N/A.
  • 2-Way: all storage nodes are up.
  • 3-Way: 2 of every 3 adjacent storage nodes are up.
  • 4-Way: 2 of every 4 adjacent storage nodes are up.
Adjacent storage nodes are those that are adjacent in the cluster.

CAUTION: A management group with 2 storage nodes and a Failover Manager is the minimum configuration for automated fault-tolerant operation. Although the SAN iQ software allows you to configure 2-way replication on 2 storage nodes, this does not guarantee data availability in the event that one storage node becomes unavailable, due to the communication requirements between managers. See Managers overview on page 171.

Best practice for setting replication levels and redundancy modes
For mission-critical data, using a 3-node cluster or larger, choose 3-Way or 4-Way replication and a redundancy priority. This configuration sustains the first fault and ensures that the volume remains redundant and available. If your volumes contain critical data, configure them for 2-Way replication and a priority of redundancy.

Provisioning snapshots
Snapshots provid
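The rules in the availability list above can be expressed as a small function. This Python sketch is illustrative only, not SAN iQ code; the modular "window" over adjacent nodes is an assumption based on the table's wording.

```python
# Illustrative sketch of the availability rules above. "Adjacent" means
# adjacent in the cluster's logical order; the volume stays online only if
# every window of <replication level> adjacent nodes has enough members up.

def volume_available(nodes_up, replication, priority):
    """nodes_up: booleans in cluster order; replication: 1 (None), 2, 3, or 4."""
    n = len(nodes_up)
    if replication == 1:            # replication "None": no copy to fall back on
        return all(nodes_up)        # (redundancy priority is N/A in this case)
    # Availability needs 1 live copy per window; redundancy needs 2.
    required = 1 if priority == "availability" else 2
    for i in range(n):
        window = [nodes_up[(i + j) % n] for j in range(replication)]
        if sum(window) < required:
            return False
    return True

# 3 nodes, 2-way replication, one node down:
print(volume_available([True, False, True], 2, "availability"))  # True
print(volume_available([True, False, True], 2, "redundancy"))    # False
```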
eport on the Disk Setup tab.

The Disk Setup tab provides a status report of the individual disks in a storage node. Figure 39 shows the Disk Setup tab, and Table 12 describes the corresponding disk report.

Figure 39 Example of columns in the Disk Setup tab

Table 12 Description of items on the disk report

This item — Describes this:
• Disk — Corresponds to the physical slot in the storage node.
• Status — Whether the disk is:
  • Active — on and participating in RAID
  • Uninitialized — is not part of an ar
Table 78 Glossary (continued)

Term — Definition:
• snapshot set — Application-managed snapshots created for a volume set.
• SNMP trap — Use traps to have an SNMP tool send alerts when a monitoring threshold is reached.
• storage server — Storage server software maintains the customer's data. It reads to and writes from disks in response to customer reads and writes of SAN iQ volumes.
• striping — Striped data is stored across all disks in the array, which increases performance but does not provide fault tolerance.
• target secret — Target secret is used in both 1-way and 2-way CHAP, when the target (volume) challenges the iSCSI initiator.
• thin provisioning — Thin provisioning reserves less space on the SAN than is presented to application servers.
• temporary space — Temporary space is created when a snapshot is mounted for use by applications and operating systems that need to write to the snapshot when they access it. Temporary space can be converted to a volume using the SmartClone process.
• time zone — The time zone for the physical location of the storage node.
• trap community string — The trap community string is used for client-side authentication when using SNMP.
• unicast — Communication between a single sender and a single receiver over a network.
• virtual IP (VIP) — A highly available address that ensures that if a storage node in a cluster becomes unavailable, servers can still access the volume through the other storage nodes in the cluster.
• VSA — A virtual storage appliance that provides one or more simultaneous storage environments in which SAN iQ may execute.
er all the storage nodes on the network when it opens, turn off Auto Discover.
1. From the menu bar, select Find > By Subnet and Mask.
2. Clear the Auto Discover check box.
The next time you open the CMC, it will not search the network for all storage nodes. However, if you have a subnet and mask listed, it will continue to search that subnet for storage nodes.

Troubleshooting — Storage nodes not found
If the network has a lot of traffic, or if a storage node is busy reading or writing data, it may not be found when a search is performed. Try the following steps to find the storage node:
1. If the storage node you are looking for does not appear in the navigation window, search again using the Find menu.
2. If you have searched by Subnet and Mask, try using the Find by IP or Host Name search, or vice versa.
3. If searching again does not work, try the following:
• Check the physical connection of the storage node.
• Wait a few minutes and try the search again. If activity to the storage node was frequent, the storage node might not have responded to the search.

Possible reasons for not finding storage nodes
Other problems can occur that prevent the CMC from finding a storage node:
• Extremely high network traffic to and from the storage node.
• The IP address could have changed if the storage node is configured to use DHCP (not recommended).
• The name could have been changed by an administrator.
• The storage node may h
er must have sufficient storage nodes and unallocated space to support the new replication level. For example, you just added more storage to a cluster and have more capacity. You decide to change the replication level for a volume from None to 2-Way to ensure you have redundancy for your data. (Replication Level)

Replication Priority — To change the replication priority, the replication level must support the change. You can always go from Redundancy to Availability. However, you cannot go from Availability to Redundancy unless a sufficient number of storage nodes in the cluster are available. For example, if you have 2-way replication with 3 storage nodes in the cluster, you can change from Availability to Redundancy if all the storage nodes in the cluster are available. You can use Redundancy to ensure data integrity if you know you are going to take a cluster offline. Redundancy ensures that when any one storage node goes offline, the volume becomes unavailable in order to protect the data.

Size — To increase the size of the volume:
• If you have enough free space in the cluster, simply enter the new size.
• If you do not have enough free space in the cluster, delete volumes and/or snapshots, or add a storage node to the cluster.
To decrease the size of the volume:
• If the volume has been, or is, mounted by any operating system, you must shrink the file system on the volume before shrinking the volume in the CMC.
• You also should not decrease the size of the
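As a quick sanity check before increasing a volume's size, the extra cluster space a fully provisioned volume consumes is roughly the size increase multiplied by the replication level. A minimal illustrative sketch (not part of the SAN iQ software):

```python
# Illustrative sketch: whether a cluster has room to grow a fully provisioned
# volume. Each replica is a complete copy, so growth costs
# (new size - old size) x replication level.

def growth_cost_gb(old_size_gb, new_size_gb, replication_level):
    return (new_size_gb - old_size_gb) * replication_level

cluster_free_gb = 150
print(growth_cost_gb(100, 150, 2))                      # 100
print(growth_cost_gb(100, 150, 2) <= cluster_free_gb)   # True: safe to grow
```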
er you want. The Performance Monitor window opens. By default, it displays the cluster total IOPS, cluster total throughput, and cluster total queue depth.

Figure 151 Performance Monitor window and its parts
1. Toolbar
2. Graph
3. Default statistics
4. Statistics table

You can set up the Performance Monitor with the statistics you need. The system continues to monitor those statistics until you pause monitoring or change the statistics. The system maintains any changes you make to the statistics, graph, or table only for your current CMC session. It reverts to the defaults the next time you log in to the CMC.
For more information about the Performance Monitor window, see the following:
• Performance Monitor toolbar on page 304
• Performance monitoring graph on page 305
• Performance monitoring table on page 306

Performance Monitor toolbar
The toolbar lets
erated by a specific server.

Planning for SAN improvements
If you have questions such as these about planning for SAN improvements, the Performance Monitor can help:
• Would enabling NIC bonding on the storage nodes improve performance?
• Is the load between two clusters balanced? If not, what should I do?
• I have budget to purchase two new storage nodes. Which volumes should I move to them to improve performance? Which cluster should I add them to?
The Performance Monitor can let you see the following:
• Network utilization, to determine if NIC bonding on the storage nodes could improve performance
• Load comparison of two clusters
• Load comparison of two volumes

Network utilization to determine if NIC bonding could improve performance example
This example shows the network utilization of three storage nodes. You can see that Denver-1 averages more than 79% utilization. You can increase the networking capacity available to that storage node by enabling NIC bonding on the storage node. You can also spread the load out across the storage nodes using iSCSI load balancing.

[Performance Monitor graph showing network utilization for the three storage nodes in the Denver cluster]
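For reference, a network utilization percentage like the one in this example can be approximated from byte counts. The sketch below is illustrative only (not part of the SAN iQ software) and assumes a 1 Gb/s NIC and the "percent of bi-directional network capacity" definition used for the Network Utilization statistic.

```python
# Illustrative sketch: network utilization as a percent of bi-directional
# capacity, assuming a 1 Gb/s link in each direction.

LINK_BPS = 1_000_000_000            # 1 Gb/s per direction (assumption)

def network_utilization(read_bytes_per_s, write_bytes_per_s):
    used_bits = (read_bytes_per_s + write_bytes_per_s) * 8
    return 100.0 * used_bits / (2 * LINK_BPS)   # both directions combined

# A node moving ~100 MB/s each way is near 80% utilization -- a candidate
# for NIC bonding, as in the Denver-1 example above.
print(round(network_utilization(100e6, 100e6), 1))  # 80.0
```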
Figure 119 Example of using a base name with 10 SmartClone volumes

After you designate a base name for the SmartClone volumes while you are creating them, you can then edit individual names of SmartClone volumes in the table list before you finish creating them.

NOTE: Rename the SmartClone volume at the bottom of the list. Then the numbering sequence won't be disrupted.

[New SmartClone Volumes dialog showing the original volume and snapshot, the base name, and the per-volume provisioning, server, and permission settings for 10 SmartClone volumes]
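The naming pattern in Figure 119 is simply the base name plus an underscore and an index. A hypothetical helper that reproduces it (illustrative only, not part of the CMC; the C#class base name is taken from the figure):

```python
# Illustrative sketch: the CMC builds SmartClone volume names from a base
# name plus a numeric suffix, as shown in Figure 119.

def smartclone_names(base_name, quantity):
    return [f"{base_name}_{i}" for i in range(1, quantity + 1)]

print(smartclone_names("C#class", 3))
# ['C#class_1', 'C#class_2', 'C#class_3']
# Renaming only the last entry keeps the numbering of the rest intact,
# which is why the note above suggests editing the bottom of the list.
```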
ers a warning by turning that line red. As soon as the total number of storage nodes reduces below the boundary, the summary bar returns to the previous indicator, either orange or green.

Storage nodes in the cluster
The optimum number of storage nodes in a cluster ranges up to 10. If the cluster contains 11 to 16 storage nodes, the Configuration Summary displays orange for that line of the management group. Over 16 storage nodes in a cluster triggers a warning by turning that line red. As soon as the total number of storage nodes reduces below the boundary, the summary bar returns to the previous indicator, either orange or green.

Reading the configuration summary
Each management group in the SAN is listed on the Configuration Summary. Underneath each management group is a list of the storage items tracked, such as storage nodes, volumes, or iSCSI sessions. As items are added to the management group, the Summary graph fills in and the count is displayed in the graph. The Summary graph fills in proportionally to the optimum number for that item in a management group, as described in the Best practices on page 175.

Optimal configurations
Optimal configurations are indicated in green. For example, in Figure 83 there are 15 storage nodes in the management group CJS1. Those 15 storage nodes are divided among the clusters c1, c2, and c3. The length of the graph is relative to the recommended maximums in each catego
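The boundaries described above translate directly into the three indicator colors. An illustrative sketch (not part of the CMC):

```python
# Illustrative sketch: the color coding of the storage-nodes-per-cluster line
# in the Configuration Summary (optimal up to 10, orange 11-16, red over 16).

def cluster_line_color(node_count):
    if node_count <= 10:
        return "green"      # optimal configuration
    if node_count <= 16:
        return "orange"     # approaching the recommended maximum
    return "red"            # warning: over the recommended maximum

for count in (8, 12, 17):
    print(count, cluster_line_color(count))
# 8 green / 12 orange / 17 red
```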
erver access for volumes.

Requirements
• Cluster configured with a virtual IP address. See Virtual IP addresses on page 335.
• A compliant iSCSI initiator.

Compliant iSCSI initiators
A compliant initiator supports iSCSI Login-Redirect AND has passed HP LeftHand Networks test criteria for iSCSI failover in a load-balanced configuration. Find information about which iSCSI initiators are compliant by clicking the link in the New or Edit Server window, Figure 168. Enabling load balancing on non-compliant initiators can compromise volume availability. To function correctly, load balancing requires that the cluster virtual IP be configured.

Figure 168 Finding compliant initiator information

The link opens the iSCSI initiator information window, Figure 169. Scroll down for a list of compliant initiators. If your initiator is not on the list, do not enable load balancing.

Figure 169 Viewing compliant iSCSI initiators
[The window notes: to properly configure iSCSI load balancing, you must use a Virtual IP address (VIP) on the cluster containing the volumes that are accessed using the iSCSI initiator. Verify that all appropriate clusters have VIPs configured. If an authentication group is modified to enable iSCSI Load Balancing, you must log out and log back in to those volumes from the host to complete configuration.]

Authentication (CHAP)
Server access with iSCSI can use the following authentication methods:
• Initiator node name (single host)
• CHAP (Challenge Handshake Authentication Protocol), which can support single or multiple hosts

NOTE: The iSCSI terminology in this discussion is based on the Microsoft iSCSI Initiator terminology. For the terms used in other common operating systems, see iSCSI and CHAP terminology on page 338.

CHAP is a standard authentication protocol. The SAN iQ software supports the following configurations:
• No CHAP — authorized initiators can log in to the volume without proving their identity. The target does not challenge the server.
• 1-way CHAP — initiators must log in with a target secret to access the volume. This secret proves the identity of the initiator to the target.
• 2-way CHAP — initiators must log in with a target secret to access the volume, as in 1-way CHAP. In addition, the target must prove its identity to the initiator using the initiator secret. This second step prevents target spoofing.

[Diagram: in 1-way CHAP, the target volume challenges the initiator for the target secret; in 2-way CHAP, the target additionally answers a challenge for the initiator secret]
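For background, the challenge/response exchange that CHAP performs is defined in RFC 1994 and reused by iSCSI: the responder returns an MD5 digest over the exchange identifier, the secret, and the challenge, so the secret itself never crosses the wire. This generic sketch is protocol background only, not SAN iQ code:

```python
# Illustrative sketch of CHAP's challenge/response math (RFC 1994):
# response = MD5(identifier + secret + challenge).
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# 1-way CHAP: the target challenges the initiator for the target secret,
# which was configured on both ends in advance.
target_secret = b"targetsecret12"
challenge = os.urandom(16)
answer = chap_response(1, target_secret, challenge)
# The target recomputes the digest to verify the initiator's identity:
print(answer == chap_response(1, target_secret, challenge))  # True
# 2-way CHAP adds the mirror exchange with the initiator secret, so the
# target must also prove its identity to the initiator.
```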
es to Resume Export Log when export is paused.
10. Pause Export Log — Temporarily stops the exporting of data.
11. Stop Export Log — Stops the exporting of data.
12. Export Log Progress — Shows the progress of the current data export, based on the selected duration and elapsed time.

Figure 152 Performance Monitor toolbar

WARNING: The Centralized Management Console is having trouble monitoring the performance of the following nodes: ENSM1 (10.0.12.31), ENSM2 (10.0.12.35). Performance monitoring statistics marked as N/A are not available. The issue may be caused by monitoring too many statistics at one time, or by a momentary communication interruption with the network or software component. Reduce the number of performance statistics being monitored, or wait several seconds and try again.

Figure 153 Example of a warning message

Performance monitoring graph
The performance monitor graph shows a color-coded line for each displayed statistic.

Figure 154 Performance Monitor graph

The graph shows the last 100 data samples and updates the samples based on the sample interval setting. The vertical axis uses a scale of 0 to 100. Graph data is automatically adjusted
es per second for the sample interval — cluster, volume/snapshot, storage node.

Statistic — Definition — Available for:
• Throughput Total — Average read and write bytes per second for the sample interval — cluster, volume/snapshot, storage node
• Average Read Size — Average read transfer size for the sample interval — cluster, volume/snapshot, storage node
• Average Write Size — Average write transfer size for the sample interval — cluster, volume/snapshot, storage node
• Average I/O Size — Average read and write transfer size for the sample interval — cluster, volume/snapshot, storage node
• Queue Depth Reads — Number of outstanding read requests — volume/snapshot, storage node
• Queue Depth Writes — Number of outstanding write requests — volume/snapshot, storage node
• Queue Depth Total — Number of outstanding read and write requests — cluster, volume/snapshot, storage node
• IO Latency Reads — Average time, in milliseconds, to service read requests — cluster, volume/snapshot, storage node
• IO Latency Writes — Average time, in milliseconds, to service write requests — cluster, volume/snapshot, storage node
• IO Latency Total — Average time, in milliseconds, to service read and write requests — cluster, volume/snapshot, storage node
• Cache Hits Reads — Percent of reads served from cache for the sample interval — volume/snapshot, storage node
• CPU Utilization — Percent of processor used on this storage node for the sample interval — storage node
• Memory Utilization — Percent of total memory used on this storage node for the sample interval — storage node
• Network Utilization — Percent of bi-directional network capacity used on this network interface on this storage node for the sample interval — storage node
• Network Bytes Read — Bytes read from the network for the sample interval — storage node
• Network Bytes Write — Bytes written to the
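These statistics are related to one another; for example, average I/O size is throughput divided by IOPS, which is useful for sanity-checking what the Performance Monitor reports. An illustrative sketch with made-up numbers (not SAN iQ code):

```python
# Illustrative sketch: cross-checking related Performance Monitor statistics.

def average_io_size(throughput_bytes_per_s, iops):
    return throughput_bytes_per_s / iops if iops else 0.0

throughput = 4_000_000.0     # bytes/s (hypothetical reading)
iops = 500.0                 # I/Os per second (hypothetical reading)
print(average_io_size(throughput, iops))  # 8000.0 bytes per I/O
```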
escribed in the following steps.
1. Log in to the management group in which you want to create a volume.
2. In the navigation window, select the cluster in which you want to create a volume.
3. Click Cluster Tasks and select New Volume.

Creating a basic volume
You can create a basic volume simply by entering a name and a size for the volume.
1. Enter a name for the volume.
2. (Optional) Enter a description of the volume.
3. Designate a size for the volume.
4. (Optional) Assign a server to the volume.
5. Click OK.
The SAN iQ software creates the volume. The volume is selected in the navigation window, and the Volume tab view displays the Details tab. To set advanced characteristics for a volume, continue on the Advanced tab of the New Volume window.

Configuring advanced volume settings (optional)
Set additional characteristics for a volume in the Advanced tab in the New Volume window. Advanced settings include the following:
• Cluster — changing the cluster is typically used to migrate volumes to a different cluster at some later time
• Replication level
• Replication priority
• Volume type
• Provisioning
Descriptions of these characteristics are found in Table 52 on page 238.

Configuring advanced volume settings
Configure the Advanced settings when you create the new volume if you do not want to use the default settings.
1. Click the Advanced tab on the New Volume window.
2. Change the desired ch
.......... 234
Increasing the volume size in Microsoft Windows .......... 234
Increasing the volume size in other environments .......... 235
Changing configuration characteristics to manage space .......... 235
Snapshot temporary space .......... 235
Managing snapshot temporary space .......... 236
Volumes .......... 237
Volumes and server access .......... 237
Prerequisites .......... 237
Planning volumes .......... 237
Planning how many volumes .......... 237
Planning volume types .......... 238
Guide for volumes .......... 238
Creating a volume .......... 239
Creating a basic volume .......... 239
Configuring advanced volume settings (optional) .......... 240
Configuring advanced volume settings ..........
266. ete 3 Click Hardware Information Tasks and select Refresh to monitor the ongoing progress Diagnostics Hardware Information Log Files Last Refreshed 06 21 2007 11 57 32 AM MDT ttem Value Drive Status Status Health Temperature a Drive 1 Active normal NIA nm Drive 2 Active normal NIA Drive 3 Active normal N A Drive 4 Active normal N A Drive 5 Active normal N A Drive 6 Active normal NA Drive 7 Active normal N A Drive 8 Rebuilding normal NA Drive 9 Active normal NA Drive 10 Active normal NIA Drive 11 Active normal N A Drive 12 Active normal NIA RAID Rebuilding Rebuild Rate Medium Unused Devices Disk 8 Rebuilding Statistics 2 Units Unit 1 ideviccissicOd1 disc DATA Partition Raid 5 1117 59 GB Normal Unit 2 idevicciss cOd2 disc DATA Partition Raid 5 1117 59 GB Rebuilding 7 complete RAID O S Partitions Normal Minimum Rebuild Speed 10 MB sec Maximum Rebuild Speed 100 MB sec Statistics 0 Units Controller Cache Items Card 4 I Hardware Information Tasks Y Figure 167 Checking RAID rebuild status For Dell 2950 NSM 2060 and NSM 4150 only Use the RAID Setup tab to check the status of the RAID rebuild 1 Select the Storage category and then select the RAID Setup tab 2 Click the link on the tab to see the rebuild status You can view the RAID rebuild rate and percent complete 3 Click the link whenever you want to update the progress Return storage node to cluster R
267. eteeeesteeensas 150 Generating a hardware information report cc ccccesseeceeseceeseeceeceseeeeeeeeeensaeeenseeeeseaaes 150 Saving a hardware information report c ccccccccceesceceesececeseeeeeeeeeseneeeneeeeesseeeeeneaeees 151 Hardware information report details cccccccceescceeeseeseenseeeeeneeeeseeeeeseseeeeseeesenseeeees 152 Using hardware information log files cic scxacuressssassnacnisatavenavesseservaroenguntienrsaeesdentiaisaduaansserindednaties 167 Saving log filesinin a a EE A a eae eee 167 Using remote log files cs scenesarvceshanvanensvasecivactc shai sdeeod venga AEEA EEEN uy tots NTEN 168 Addinga remote log sersiiurcaeii nin E E A E ae aan wees 168 Configuring the remote log target computer ccecceeeseceeesteeceetseeceeeceeseeeeeeseesenaeeeeneeeens 169 Editing remote log targets 3152 caspyoeuins rast aces wena eases oye nian se eae nda 169 DAC Tema ele CE 169 9 Working with management groups sce scscsecnscontecnteceveenissessteaessassecenecet 171 Functions of management groups cicnscrrncndunsedsaaeddonanceenatenderandaaaeiuderarniedsatniadoastodscatdgtadseddvanc dnote 171 Requirements for creating a management group csccceeesecesseeeeeeeseeeeceeeeceeeeeeeaseeeeteeeeeeeeeess 171 Managers OVETVIOW sesonon E E AER A R A EERS 171 F nctions of mandagens coc eats Vinheta E a E ERT E EEE AE E MR a estat 172 Managers and qUOrUM s ruariireadieeaareiosedenceeamarieneaatmainienrcaieatorsins maanaancante
268. ether or not you can replace it without losing data agement Console RAID Setup Disk Setup Health Safe to Remo Model Serial Number Class Capacity normal normal normal normal normal normal normal normal Yes Yes Yes Yes Yes Yes Yes Yes HP DFO300B HP DFO3OOB HP DF3004B HP DFO300B HP DFO300B HP DFO3OOB HP DF0300B HP DFO300B JLY920KC JLY4SJJC SLM3NF4Y0 JLY9BUHC JLY9879C JLVETESC JLYI3LIC JLYI2KKC SAS 3 0GB SAS 3 0GB SAS 3 0GB SAS 3 0GB SAS 3 0GB SAS 3 0GB SAS 3 0GB SAS 3 0GB 279 49 GB 279 49 GB 279 49 GB 279 49 GB 279 49 GB 279 49 GB 279 49 GB 279 49 GB Figure 60 Diagram of the drive bays in a HP LeftHand P4300 Replacing a disk The procedures for replacing a disk are different for various platforms e RAIDO in all platforms e Platforms that support hot swapping drives which include the NSM 160 NSM 260 DL380 and DL320s NSM 2120 Dell 2950 NSM 2060 NSM 4150 HP LeftHand P4500 HP LeftHand P4300 Non hotswap platforms including the IBM x3650 In the VSA If you are replacing a disk on a server that is hosting the VSA refer to the manufacturer s instructions If you want to change the disk size on a VSA you must recreate the hard disk in the VI Client See the VSA User Manual for detailed instructions Using Repair Storage Node In certain circumstances you may have to use the Repair Stor
269. eturn the repaired storage node to the cluster 1 In the navigation window right click the storage node and select Add to Existing Management Group 331 2 Select from the list the Group Name that the storage node used to belong to and click Add The storage node appears in the management group and the icon in the navigation window flashes for a few minutes as it initializes Restarting a manager Before proceeding make sure that the storage node is finished initializing and is completely added to the management group If necessary ensure that after the repair you have the appropriate configuration of managers If there was a manager running on the storage node before you began the repair process you may start a manager on the repaired storage node as necessary to finish with the correct number of managers in the management group If you added a virtual manager to the management group you must first delete the virtual manager before you can start a regular manager First right click on the virtual manager and select Stop Virtual Manager Next right click on the virtual manager and select Delete Virtual Manager Finally right click on the storage node and select Start Manager Add repaired node to cluster 1 After the initialization completes right click on the cluster and select Edit Cluster The list of the storage nodes in the cluster should include the ghost IP address You now need to add the repaired storage nod
270. evices on your network see Changing NIC frame size on page 109 1 On the Configuration Interface main menu tab to Network TCP Status and press Enter 2 Tab to select the network interface for which you want to set the TCP speed and duplex and press Enter 3 To change the speed and duplex of an interface tab to a setting in the Speed Duplex list To change the frame size select Set To in the Frame Size list Then tab to the field to the right of Set To and type a frame size The frame size value must be between 1500 bytes and 9000 bytes On the Network TCP Status window tab to OK and press Enter On the Available Network Devices window tab to Back and press Enter Removing a storage node from a management group Removing a storage node from a management group deletes all data from the storage node clears all information about the management group and reboots the storage node A CAUTION Removing a storage node from a management group deletes all data on the storage node 1 On the Configuration Interface main menu tab to Config Management and press Enter 2 Tab to Remove from management group and press Enter A window opens warning you that removing the storage node from the management group will delete all data on the storage node and reboot it 3 Tab to Ok and press Enter On the Configuration Management window tab to Done and press Enter Resetting the storage node to factory defaults Resettin
271. ew f Available Nodes 1 Snapshot as W Senin Hame S cr _SCsnap Servers 1 Description y Administration Stes Cluster Programming Programming Status Normal Temp space None Performance Monitor BH Storage Nodes 3 Type Primary Clone Point Created by Manual gnt EPF 6 and Snapshots 3 Size 56B Created 08 05 2008 02 29 05 PM MDT EH c 3 EES C Scena Replication Level None Replication Priority Availabilty a fess S Provisioned Space 512MB Provisioning Thin class om ke 3 Utilization i 1 Be ced 3 EHA Ciclass_5 3 Target Information Z sysadm iSCSI Name ign 2003 10 com lefthandnetworks trainingos 107 c scsnap 57 Alerts Remaining Date Time Hostname IP Address Alert Message 57 08 05 20 Gold 10 0 14 90 Management Group TrainingOS Snapshot C _SCsnap Restripe Complete a 56 08 05 20 Deny 10 0 60 32 Management Group TrainingOS Snapshot C _SCsnap Restrine Complete 55 08 05 20 Deny 10 0 61 16 Management Group TrainingOS Snapshot C _SCsnap Restripe Complete 54 08 05 20 Deny 10 0 6117 Management Group TrainingOS Volume C class_10 Restripe Complete I K gt 2 t Lakma soi 1 Clone point 2 New SmartClone volumes Figure 130 New SmartClone volumes in Navigation window Viewing SmartClone volumes As you create multiple SmartClone volumes you can view them and their associated volumes and snapshots in both the navigation window and in t
Figure 6 Storage node configuration categories

Storage node configuration category definitions
The storage node configuration categories are described below:
• Alerts — Configure active monitoring settings of selected monitored variables, and notification methods for receiving alerts.
• Hardware — Use the hardware category to run hardware diagnostic tests, to view current hardware status and configuration information, and to save log files.
• SNMP — Monitor the storage node using an SNMP management station. You can also enable SNMP traps.
• Storage — Manage RAID and the individual disks in the storage node.
• TCP/IP Network — For each storage node, configure and manage the network settings, including network interface cards (NICs), DNS servers, the routing table, and which interface carries SAN iQ communication.

Storage Node Tasks
This section describes how to perform basic storage node tasks:
• Working with the storage node on page 44
• Logging out of a storage node on page 44
• Changing the storage node hostname on page 44
• Locating the storage node in a rack (NSM 260, DL 320s [NSM 2120], DL 380, and HP LeftHand P4500) on page 45
• Backing up and restoring the stor
273. for RAIDO c cccssceeeeeceeeeeeeeeeeneeeeeeseeeensseees 83 Physically replace the disk drive in the storage node eeeeeseeceeeeteneeeeeeeeseeeeeeeeeeneeeeeeeees 83 Manually powering on the disk in the CMC ccccceecccceceeeesteeceecenneeeeeeeeeeeeeeeeeeneeeeeenees 83 Vol me TESTA DINGY aean anha snegltvetn tenes tesa E enw Souk I RESETE REE 84 Replacing a disk in a non hot swap platform IBM x3650 cc ccceesscceeececeeeeeeeeetseeessneeeeeeees 84 Manually power off the disk in the CMC for RAID1 10 and RAIDS cecceeeceeeeseeeneeeeeees 84 Physically replace the disk drive in the storage node eeeeeeseeceeeeeeeeeeeeeeeneeeeeeeeeneeeeeenees 85 Manually powering on the disk in the CMC cccccsscccccceesseneeeeeeesneeeeeesenseeeeeeseeneeeeeenes 85 RAID rebuilding Soc catersartaanteitiaetae heaead n a a E E E E AE N aa E eaS 86 Replacing a disk in a hot swap platform NSM 160 NSM 260 DL380 DL320s NSM 2120 Dell 2950 NSM 2060 NSM 4150 HP LeftHand P4500 HP LeftHand P4300 eeereeeees 86 Repl cee tlie disk direicien nn aa e O EEE E A E E 86 Physically replace the disk drive in the storage node cescceeseeeseeeeeeeeeesetenseeenseeneeeeates 86 RAID rebuilding eens real te sesh ote sad ant a nbd Ea 86 A Peni TS TONG E EE E E E E 89 Network best pirates piiseassnnsdeinnaudnnss wentanindeissinsisatuiaaiiagne abc E E AAEE E AEE O N RETRE 89 Changing network configurations ccscensacda
274. g the TCP speed duplex and frame size on page 344 Changing NIC frame size Configure or change the settings of the network interfaces in the storage nodes See Network best practices on page 89 for more information Requirements If you plan to change the frame size that change must be configured before creating NIC bonds 109 Best practices Change the frame size while the storage node is in the Available Nodes pool and not in a management group The frame size specifies the size of data packets that are transferred over the network The default Ethernet standard frame size is 1500 bytes The maximum allowed frame size is 9000 bytes Increasing the frame size improves data transfer speed by allowing larger packets to be transferred over the network and by decreasing the CPU processing time required to transfer data However increasing the frame size requires that routers switches and other devices on your network support that frame size EY NOTE Increasing the frame size can cause decreased performance and other network problems if routers switches or other devices on your network do not support frame sizes greater than 1500 bytes If you are unsure about whether your routers and other devices support larger frame sizes keep the frame size at the default setting If you edit the frame size on a disabled or failed NIC the new setting will not be applied until the NIC is enabled or connectivity is restored
275. g a DNS server Change the IP address for a DNS Server in the list In the navigation window select a storage node and log in Open the tree and select the TCP IP Network category Select the DNS tab Select the server to edit Click DNS Tasks and select Edit DNS Servers Select the server again and click Edit Type the new IP address for the DNS server and click OK Ss OS SY gt Editing a domain name in the DNS suffixes list Change a domain name of a storage node In the navigation window select a storage node and log in Open the tree and select the TCP IP Network category Select the DNS tab Click DNS Tasks and then select Edit DNS Domain Name Enter the change to the domain name Click OK ot CO PS 113 Removing a DNS server Remove a DNS server from the list OS NAMA WN a In the navigation window select a storage node and log in Open the tree and select the TCP IP Network category Select the DNS tab Select the server you want to remove from the DNS Servers list Click DNS Tasks and then select Edit DNS Servers Select the name again in the Edit DNS Servers window Click Remove Click OK to remove the DNS server from the list Removing a domain suffix from the DNS suffixes list 1 Oo NAM PWN In the navigation window select a storage node and log in Open the tree and select the TCP IP Network category Select the DNS tab Select the suffix you want to remove Click DNS Tasks and then select E
276. g failover 98 example configurations 98 preferred interface 97 requirements 97 list of diagnostic tests 145 list of monitored variables 135 Load Balancing compliant iSCSI initiators 290 editing 29 iSCSI 290 360 load balancing compliant iSCSI initiators 336 gateway session when using 336 iSCSI 336 local bandwidth setting 183 locate NSM 260 in rack 45 locating a storage node in a rack 45 log files backing up management group configuration file 184 backing up storage node configuration file 46 downloading variable 143 144 hardware information 167 saving for technical support 167 log in to a storage node in a management group 44 to HP SIM 130 to management group 80 to storage nodes in Available Nodes pool 31 to System Management Homepage 30 log out of management group 181 Logging on to volumes in iSCSI 295 M maintenance mode changing to normal mode 186 management group in 186 management group registering 180 starting up 185 management group time refreshing 119 Management Groups adding servers to 290 management groups adding 177 requirements for 78 adding storage nodes 178 180 backing up configuration 183 best practice recommendations 175 configuration guidance 74 configuration summary roll up 174 creating 178 defined 32 deleting 187 editing 182 function 171 functions of managers 72 logging in 180 logging out 181 maintenance mode 186 normal mod
277. g the storage node to factory defaults deletes all data and erases the configuration of the storage node including administrative users and network settings A CAUTION Resetting the storage node to factory defaults deletes all data on the storage node 1 On the Configuration Interface main menu tab to Config Management and press Enter 345 2 Tab to Reset to factory defaults and press Enter A window opens warning you that resetting the storage node configuration will delete all data on the storage node and reboot the storage node 3 Tab to Ok and press Enter On the Configuration Management window tab to Done and press Enter 346 Using the Configuration Interface 24 Glossary Terms Used The following glossary provides definitions of terms used in the SAN iQ software and the HP LeftHand Storage Solution Table 78 Glossary Term Active Monitoring Active Passive Adaptive Load Balan cing Add on application application managed snapshot Authentication group Auto Discover BondO Bonding Boot Device HP LeftHand Centralized Management Console CHAP Clone point Definition Active monitoring tracks the health of the storage node using notifications such as emails alerts in the CMC and SNMP traps A type of network bonding which in the event of a NIC failure causes the logical interface to use another NIC in the bond until the preferred NIC resumes operation At that point
278. ge 293 e Assigning volumes from a server connection on page 293 Prerequisites e The server connections you want to assign must already exist in the management group See Adding server connections to management groups on page 290 e The volumes or snapshots you want assign must already exist in the management group See Creating a volume on page 239 When you assign the server connections and volumes or snapshots you set the permissions that each server connection will have for each volume or snapshot The available permissions are described in Table 67 Table 67 Server connection permission levels Type of Access Allows This No Access Prevents the server from accessing the volume or snapshot Restricts the server to read only access to the data on the volume or Read Access h snapshot Read Write Access Allows the server read and write permissions to the volume EY NOTE Microsoft Windows requires read write access to volumes Assigning server connections from a volume You can assign one or more server connections to any volume or snapshot For the prerequisites see Assigning server connections access to volumes on page 292 1 In the navigation window right click the volume you want to assign server connections to Select Assign and Unassign Servers Click the Assigned check box for each server connection you want to assign to the volume or snapshot 4 From the Permission drop
279. guration 55 RAID rebuild rate 69 RAIDO devices 60 RAID50 65 NSM 4150 powering off the system controller and disk enclosure correct order powering on the system controller and disk enclosure correct order disk arrangement in disk setup 78 disk status 78 RAID levels and default configuration 55 RAID10 initial setup 61 RAID50 capacity 59 RAID50 initial setup 63 NSM 4150 RAID rebuild rate 69 NTP selecting 120 server 20 server deleting 121 servers changing list order 121 O off RAID status 7 ordering NTP access list 121 overview add on applications 317 Centralized Management Console 29 clusters 209 disk replacement in special cases 327 Failover Manager 189 management groups 171 managers 171 network 89 provisioning storage 22 replacing a disk 81 reporting 144 setting date and time 119 SmartClone volumes 263 snapshots 226 245 SNMP 129 storage category 55 volumes 237 P parity in RAID5 58 Passwords changing in Configuration Interface 343 Pausing monitoring 313 pausing scheduled snapshots 256 performance See O performance performance and iSCSI 336 Performance Monitor current SAN activity example 298 exporting data from 315 fault isolation example 299 learning about applications on the SAN 299 learning about SAN performance 298 load comparison of two clusters example 301 load comparison of two volumes example 302 NIC bonding example 301 overview 297 p
280. guration Summary Hi Exchange Last Refreshed N A fis Servers 0 Ss Administration item sites Click to Refresh ES Logs E Performance Monitor Storage Nodes 3 Denver 1 L TCPAP Network Denver 2 Denver 3 Al Volumes 1 and Snapshots 0 L Least 0 Figure 78 Opening the hardware information window 1 Link to obtain hardware statistics On the Hardware table use the link Click to Refresh to obtain the latest hardware statistics vagement Console Jog Diagnostics I Hardware Information I Log Files Last Refreshed 02 20 2009 10 10 55 AM MST Hostname Golden 1 Storage Node Softw Version 8 1 00 0010 0 Software Patches IP Address 10 0 61 17 Support Key unknown call support HIC Data Card 1 Description Broadcom Corporation NetXtreme ll BCM5708 Gigabit Ethernet Mac Address 00 19 B9 D4 FA 25 Address 10 0 61 17 Mask 255 255 0 0 10 0 255 254 auto Firmware Version 291 Driver Name bnx2 Driver Version 1 5 10d Card 2 Hardware Information Tasks Figure 79 Viewing the hardware information for a storage node Saving a hardware information report 1 Click Hardware Information Tasks and select Save to File to download a text file of the reported statistics The Save dialog opens Choose the location and name for the report Click Save The report is saved with an html extension 151 Hardware information
281. h remote copy relationships If you back up a management group that is participating in Remote Copy it is important to back up the associated Remote Copy management groups at the same time If you back them up at different times and then try to restore one of the groups the back up files will not match This mismatch will cause problems with the restore Backup a management group configuration 1 In the navigation window select the management group and log in 2 Click Management Group Tasks on the Details tab and select View Management Group Configuration 3 Click Save A Save window opens so that you can select the location for the bin file or txt file 4 In the Save window accept the default name of the file or type a different name From the Files of Type drop down menu select the bin file type 6 Repeat this procedure and in Step 5 select the txt file type The txt file describes the configuration Restoring a management group Call Customer Support for help if you need to restore a management group using a bin file Safely shutting down a management group Safely shut down a management group to ensure the safety of your data Shutting down lets you e Perform maintenance on storage nodes in that group e Move storage nodes around in your data center e Perform maintenance on other equipment such as switches or UPS units e Prepare for a pending natural disaster Also use a script to configure a safe shut down in
282. h you want to create a cluster 2 Right click on the storage node and select Add to Existing or New Cluster 3 Select New Cluster and click Add 4 Enter a meaningful name for the cluster A cluster name is case sensitive and must be from 1 to 127 characters It cannot be changed after the cluster is created 209 Optional Enter a description of the cluster 6 Select one or more storage nodes from the list Use the up and down arrows on the left to promote and demote storage nodes in the list to set the logical order in which they appear For information about one specific disaster recovery configuration when the order matters see Configuring a cluster for disaster recovery on page 202 7 Click the iSCSI tab Configure virtual IP and iSNS for iSCSI VIPs are required for iSCSI load balancing and fault tolerance and for using HP LeftHand DSM for MPIO For more information see Chapter 22 on page 335 New for release 8 0 Virtual IP VIP addresses are required for all clusters in SAN iQ software versions 8 0 and higher 1 Click the iSCSI tab to bring it to the front Because a VIP is required in release 8 0 and greater the choice to use a virtual IP is disabled by default in the 8 0 CMC If you have management groups that are running 7 0 or earlier software the choice to use a VIP remains enabled 2 Add the IP address and subnet mask Adding an iSNS server Optional Add an iSNS server EY NOTE If you use an
283. hange the characteristics of a SmartClone volume Table 64 Requirements for changing SmartClone volume characteristics Shared or Indi ltem vidual Requirements for Changing Description Individual May be up to 127 characters Size Individual Available space on cluster Servers Individual Existing server defined 283 Shared or Indi ltem vidual Cluster Shared Replication Shared Level Requirements for Changing All associated volumes and snapshots will move automatically to the target cluster The target cluster must e Reside in the same management group e Have sufficient storage nodes and unallocated space for the size and replication level of the volume and all the other associated volumes and snapshots being moved When moving volumes to a different cluster those volumes temporarily exist on both clusters All associated volumes and snapshots must change to the same replication level The cluster must have sufficient storage nodes and unallocated space to support the new replication level for all related volumes Replication Pri Shared All associated volumes and snapshots must change to the same replication priority To change the replication priority the replication level must support the change You can always go from Redundancy to Availability However you cannot go from Availability to Redundancy unless a suffi cient number of storage nodes in the cluster are available For a detailed orit
284. he Configure Variable wizard opens to Step 1 EY NOTE For some variables only the notification method can be changed For example the frequency for the Storage Server Latency variable is set to 1 minute and cannot be changed 4 If allowed change the frequency for the variable and click Next The Configure Variable wizard opens to Step 2 Optional Change the alert notification method Click Finish EY NOTE If you are requesting email notification be sure to set up the SMTP settings on the Email Server Setup tab 137 Removing a variable from active monitoring Use Remove to remove variables to stop active monitoring that become pointless or impractical You can return a variable to active monitoring at any time Permanent variables such as Cache Status cannot be removed See List of monitored variables on page 138 1 Show the Alert Setup tab window 2 Select the variable you want to remove 3 Click Alert Setup Tasks and select Remove Monitored Variable 4 Click Remove A confirmation message opens 5 Click OK in the confirm window The variable is removed EY NOTE Variables are not deleted when they are removed from active monitoring You can add them back to active monitoring at any time List of monitored variables This section contains tables that show the variables that are monitored during active not passive monitoring For each variable the table lists the following information
285. he From field of email notifications will display root hostname where hostname is the name of the storage node 6 Select the box if you want to apply the settings to all the storage nodes in the management group If you DO apply the settings to other storage nodes in the management group and if you have a sender address entered all the other storage nodes will use that same sender address Optional Test the email connectivity now if you wish Click OK EY NOTE Notification of undeliverable email messages are sent to the sender address If you are requesting email notification be sure to set up the email notification in Alert Setup Applying SMTP settings to a management group Perform the steps in Setting SMTP settings on page 142 and check the check box to Apply these SMIP settings to all storage nodes in the management group Setting notification for one variable Specify email notification in two ways By editing an individual monitored variable See Editing a monitored variable on page 137 142 Reporting By setting the Set Threshold Actions window and affecting several monitored variables Setting notification for several variables Setting threshold actions determines the routing preferences for notification of alerts With this procedure you can set the same email address for several alerts 1 2 3 4 Select the Alert Setup tab Select the variables you want to change Click Alert Se
286. he Map View tab shown in Figure 131 on page 280 Because a SmartClone volume is the same as any other volume the icon is the standard volume icon However the clone point and the shared snapshot have unique icons as illustrated in Figure 124 on page 273 Map view The Map View tab is useful for viewing the relationships between clone point snapshots shared snapshots and their related volumes For example when you want to make changes such as moving a volume to a different cluster or deleting shared snapshots the Map View tab allows you to easily identify how many snapshots and volumes are affected by such changes 279 Getting Started Configuration Summary Available Nodes 1 E Trainingos EHE Servers 1 23 Administration H Sites FHS Programming 22 Performance Monitor Storage Nodes 3 Volumes 18 and Snap cx S C _SCsnap2 C SCsnap H C _online_4 3 S C _SCsnap S C _snap2 E C _snapt H C _online_2 3 C SCsnap H Ca _oniine_3 3 EES C _SCsnap H Ca _oniine_4 3 C _SCsnap BH C _oniine_5 3 EHA C _remote_1 4 C _SCsnap2 C SCsnap C _snap2 C _remote_2 4 C _remote_3 4 C _remote_4 4 C _trngrmt_4 2 amp HP LeftHand Networks Centralized Management Console Details iSCSI Sessions Remote Snapshots Assigned Servers Map View Layout Tree X laaa ik o Be C _snapt as C _snap2 G c tmgnmt_
287. he disk is nearly full Thin provisioning Thin provisioning reserves less space on the SAN than is presented to application servers Use thin provisioning when the application that is writing to the volume is effective at reusing disk space However the SAN iQ software warns you that the cluster is nearly full You always know that a thin volume may risk a write failure The SAN iQ software allocates space as needed However thin provisioning carries the risk that an application server will fail a write because the SAN has run out of disk space Best practice for setting volume size Create the volume with the size that you currently need Later if you need to make the volume bigger increase the volume size in the CMC and then expand the disk on the server In Microsoft Windows 222 Provisioning storage you expand a basic disk using Windows Logical Disk Manager and Diskpart For detailed instructions see Changing the volume size on the server on page 234 Planning data replication Data replication creates redundant copies of a volume on the SAN You can create up to four copies using 4 Way replication Because these copies reside on different storage nodes replication levels are tied to the number of available storage nodes in a cluster The SAN iQ software and the HP LeftHand Centralized Management Console provide flexibility through two features when you are planning data replication Replication levels allow you to ch
288. he remote volume e Make the remote volume into a primary volume Retaining the data on the remote target _ Disassociate the primary and remote management groups if the remote copy was between management groups Scripting evaluation Application based scripting is available for volume and snapshot features You can create scripts to e Create snapshots e Create remote volumes and snapshots Because using scripts with advanced features starts the 30 day evaluation period without requiring that you use the CMC you must first verify that you are aware of starting the 30 day evaluation clock when using scripting If you do not enable the scripting evaluation period any scripts you have running licensed or not will fail Turn on scripting evaluation To use scripting while evaluating advanced features enable the scripting evaluation period 1 In the navigation window select a management group 2 Select the Registration tab 3 Click Registration Tasks and select Feature Registration from the menu 4 Select the Scripting Evaluation tab 1 Read the text and check the box to enable the use of scripts during a license evaluation period 2 Click OK Turn off scripting evaluation Turn off the scripting evaluation period when you take either one of these actions e You purchase the feature you were evaluating e You complete the evaluation and decide not to purchase any advanced features To turn off the scripting evaluation 1
289. he target snapshot This volume gets a new name and the target snapshot becomes a clone point shared between the original volume and the new SmartClone volume For detailed information about SmartClone volumes see What are SmartClone volumes on page 263 Use Remote Copy to copy the newer snapshots that you want to keep before performing the rollback See the Remote Copy User Manual for more information about copying data New in release 8 0 When rolling back a volume to a snapshot the volume retains the original name Releases before 8 0 required a new name for the rolled back volume 257 Requirements for rolling back a volume Best Practices Stop any applications that are accessing the volume and log off all related iSCSI sessions e If a volume is part of a volume set typically you want to roll back each volume using its corres ponding snapshot A future release will identify snapshot sets in the CMC For more information see Creating snapshots for volume sets on page 249 Prerequisite e If you need to preserve the original volume or any snapshots that are newer than the one you will use for rolling back use Remote Copy to create a copy of the volume or snapshots before beginning the roll back operation A CAUTION When performing a roll back snapshots that are newer than the one you intend to roll back are deleted You will lose all data stored since the rolled back snapshot was created Consider creati
290. he time directly Select a time zone for the Time Zone drop down list EY NOTE If you use an NTP server you have the option of setting the time zone only 12 5 Click OK A warning message informs you that there may be a slight time lag for a reset to take effect 6 Click OK Editing the time zone only You initially set the time zone when you create the management group You can change the time zone later if necessary If you do not set the time zone for each management group the management group uses the GMT time zone whether or not you use NTP Files display the time stamp according to this local time zone 1 Click Time Tasks and then select Edit Time Zone 2 From the drop down list select the time zone in which this management group resides 3 Click OK Note the change in the Time column of the Time tab window 122 Setting the date and time 6 Administrative users and groups When you create a management group the SAN iQ software configures two default administrative groups and one default administrative user You can add edit and delete additional administrative users and groups All administrative users and groups are managed at the management group level Getting there In the navigation window log in to the management group and select the Administration node Managing administrative users When you create a management group one default administrative user is created Use the default
291. health and temperat ure RAID Information about RAID x lt Rebuild rate RAID Rebuild Rate is a prior ity measured against other OS tasks 159 This term Means this DL 380 DL 320s P4500 HP LeftHand HP LeftHand P4300 Unused devices Statistics Any device which is not participating in RAID This in cludes Drives that are missing e Uncon figured drives Drives that were powered down Failed drives re jected by array w IO errors Drives that are rebuild ing e Hotspare drives Information about the RAID for the storage node Unit number Identifies devices that make up the RAID configura tion including e Type of storage BOOT LOG SA NIQ DATA e RAID level 0 1 5 X i j Status Nor mal Re building Degraded Off e Capacity e Rebuild statistics complete time remain ing 160 Reporting This term Means this HP LeftHand P4500 HP LeftHand P4300 RAID O S parti tions Information about O S RAID Statistics Information about the O S RAID for the storage node Unit number Identifies devices that make up the O S RAID con figuration in cluding e Type of storage BOOT LOG SA NIQ DATA e RAID level 0 1 5 Status Nor mal Re building Degraded Off e Capacity e Rebuild statistics complete time remain ing Contro
292. hose two volumes make a volume set When you create an application managed snapshot of a volume in a volume set the CMC recognizes that the volume is part of a volume set SAN iQ then prompts you to create a snapshot for each volume in the volume set This creates a snapshot set that corresponds to the volume set A future release will identify snapshot sets in the CMC amp NOTE After you create snapshots for a volume set typically you do not want to delete individual snapshots from the snapshot set You want to keep or delete all snapshots for the volume set If you need to roll back to a snapshot typically you want to roll back each volume in the volume set to its corresponding snapshot The procedure below assumes that you select a volume that is part of a volume set for the snapshot 1 Log in to the management group that contains the volume for which you want to create a new snapshot Right click on the volume and select New Snapshot Select the Application Managed Snapshot option This option requires the use of the VSS Provider For more information see Requirements for application managed snapshots on page 248 This option quiesces VSS aware applications on the server before SAN iQ creates the snapshot The system fills in the Description and Servers fields automatically You cannot edit them Type a name for the snapshot 5 Click OK The New Snapshot Associated Volumes window opens with a list of all vo
those volumes from the host to complete configuration. (Last update: February 26, 2009.)

Figure 169 Viewing compliant iSCSI initiators

Authentication (CHAP)
Server access with iSCSI can use the following authentication methods:
• Initiator node name (single host)
• CHAP (Challenge-Handshake Authentication Protocol), which can support single or multiple hosts

NOTE: The iSCSI terminology in this discussion is based on the Microsoft iSCSI Initiator terminology. For the terms used in other common operating systems, see iSCSI and CHAP terminology on page 338.

CHAP is a standard authentication protocol. The SAN iQ software supports the following configurations:
• No CHAP: authorized initiators can log in to the volume without proving their identity. The target does not challenge the server.
• 1-way CHAP: initiators must log in with a target secret to access the volume. This secret proves the identity of the initiator to the target.
• 2-way CHAP: initiators must log in with a target secret to access the volume, as in 1-way CHAP. In addition, the target must prove its identity to the initiator using the initiator secret. This second step prevents target spoofing.

(Figure: diagram contrasting 1-way CHAP, in which the initiator is challenged for the target secret, with 2-way CHAP, in which the initiator is challenged for the target secret and the target is additionally challenged for the initiator secret.)
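As a hypothetical illustration of where each secret is entered (the initiator name and secrets below are placeholders, and the initiator-side field labels follow the Microsoft iSCSI Initiator):

For 1-way CHAP on a volume assigned to one server:
  In the CMC, on the server connection: CHAP Name = iqn.1991-05.com.microsoft:server1, Target Secret = TargSecret01
  In the Microsoft iSCSI Initiator, when logging on to the target: Advanced, then CHAP logon information, Target secret = TargSecret01

2-way CHAP adds one more secret, entered in both places:
  In the CMC: Initiator Secret = InitSecret01
  In the Microsoft iSCSI Initiator: the initiator-wide secret field on the General tab = InitSecret01, and select Perform mutual authentication when logging on.

Microsoft initiators typically require secrets of 12 to 16 characters; other initiators may differ, so check your initiator's documentation.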
show four categories:
• Device Name
• Device Type (or the RAID level)
• Device Status
• Subdevices

Status indicators
On the RAID Setup tab and the Disk Setup tab, the text or icon color indicates status. Table 7 lists the status and color indicators for three categories:
• RAID Device Status
• Disk Status
• Disk Health

Table 7 Status and color definitions
Status: Color
Normal: Green
Inactive: Yellow-orange
Uninitialized: Yellow
Rebuilding: Blue
Off or Removed: Red
Marginal: Yellow
Faulty: Red
Hot Spare: Green
Hot Spare Down: Yellow

Configuring and managing RAID
Managing the RAID settings of a storage node includes:
• Choosing the right RAID configuration for your storage needs
• Setting or changing the RAID configuration, if necessary
• Setting the rate for rebuilding RAID
• Monitoring the RAID status for the storage node
• Reconfiguring RAID when necessary

Benefits of RAID
RAID combines several physical disks into a larger logical disk. This larger logical disk can be configured to improve both read and write performance and data reliability for the storage node.

RAID configurations defined
The RAID configuration you choose depends upon how you plan to use the storage node. The storage node can be reconfigured with RAID0, RAID10, RAID5, RAID5 + hot spare, RAID50, or RAID6, depending on the model. See Table 6 on page 55 for a list of RAID levels by model.

RAID0
RAID0 creates a stripe across the disks in the storage node.
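Before choosing a level, it can help to work through the capacity arithmetic. The following figures are illustrative only, using a hypothetical set of four 250 GB disks (see Table 6 on page 55 for the levels your model actually supports):
• RAID0: 4 x 250 GB = 1,000 GB usable; no disk may fail.
• RAID10: (4 / 2) x 250 GB = 500 GB usable; one disk per mirrored pair may fail.
• RAID5: (4 - 1) x 250 GB = 750 GB usable; one disk in the set may fail.
Higher usable capacity therefore generally trades off against redundancy within the storage node.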
Yes, by copying from another storage node in the cluster.

Replicated volumes on clustered storage nodes, RAID5/50: Yes. 1 disk per RAID set can fail without copying from another storage node in the cluster.
Replicated volumes on clustered storage nodes, RAID6: Yes. 2 disks per RAID set can fail without copying from another storage node in the cluster.
Replicated volumes on clustered VSAs with virtual RAID: Depends on the underlying RAID configuration of the platform on which the VSA is installed. HP recommends configuring RAID5 or RAID6. Yes, if the underlying platform is configured for RAID other than RAID0.

Mixing RAID configurations
You may mix storage nodes with different configurations of RAID within a cluster. This allows you to add new storage nodes with different RAID levels. Be certain to calculate the capacity of additional storage nodes running the desired RAID level, because the cluster operates at the smallest usable per-storage-node capacity. For instance, suppose you have four 1 TB NSM 160s running RAID10, and you purchase two additional 1 TB NSM 160s which you want to run with RAID5. In your existing cluster, a single 1 TB NSM 160 in RAID10 provides 0.5 TB usable storage. A single NSM 160 in RAID5 provides 0.75 TB usable storage. However, due to the restrictions of how the cluster uses capacity, the NSM 160 in RAID5 will still be limited to 0.5 TB per storage node. Your best choice in this situation might be to configure the
if storage space equal to the volume size is available in the cluster. This volume size may exceed the true allocated disk space on the cluster for data (Primary), which facilitates adding more storage nodes to the cluster later for seamless storage growth. However, if the volume size does exceed true allocated disk space, the ability to make snapshots may be impacted. See Chapter 14 on page 245. Remote volumes contain no data, and so do not have a size.

Servers (optional): Configurable for: Both. Servers are set up in the management group to connect application hosts to volumes. Select the server that you want to have access to the volume you are creating.

Advanced Tab

Cluster: Configurable for: Both. If the management group contains more than one cluster, you must specify the cluster on which the volume resides.

Replication Level: Configurable for: Both. Default value: 2. The number of copies of the data to create on storage nodes in the cluster. The replication level is at most the number of storage nodes in the cluster or 4, whichever is smaller. If you select a replication level of None, the replication priority is not used. See Planning data replication on page 223.

Replication Priority: Configurable for: Both. Default value: Availability.
• Availability: These volumes remain available as long as at least one storage node out of every replicated set is available.
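As an illustration of how these settings play out (the node counts here are hypothetical): with a Replication Level of 2 and a Replication Priority of Availability in a four-node cluster, two copies of each block reside on two different storage nodes, so a volume generally remains online through the loss of any single storage node, provided the management group still has quorum. Losing both nodes that hold the same pair of copies makes the volume unavailable until one of them returns.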
Navigate to the directory where you want to save the license key and customer information file. In the File Name field, enter a name for the file, which defaults to a .txt file. Click Save. Verify the information by viewing the saved .txt file.

20 SNMP MIB information
The SNMP Agent resides in the storage node. The agent takes SNMP network requests for reading or writing configuration information and translates them into internal system requests. Management Information Base (MIB) files are provided, which can enable the system administrator to use their favorite SNMP tool to view or modify configuration information. The SNMP Agent supports versions 1 and 2c of the protocol.

NOTE: To ensure that all items display properly in your SNMP tool, use version 2c or later of the protocol.

The supported MIBs
The following are the supported MIBs, though not every function in each MIB is supported:
• DISMAN-EVENT-MIB
• HOST-RESOURCES-MIB
• IF-MIB
• IP-FORWARD-MIB
• IP-MIB
• NET-SNMP-AGENT-MIB
• NET-SNMP-EXTEND-MIB
• NOTIFICATION-LOG-MIB
• RFC1213-MIB
• SNMP-TARGET-MIB
• SNMP-VIEW-BASED-ACM-MIB
• SNMPv2-MIB
• SNMPv2-SMI
• UCD-DLMOD-MIB
• UCD-SNMP-MIB

21 Replacing disks appendix
This document describes the disk replacement procedures for cases in which you do not know which disk to replace, and/or you must rebuild RAID on the entire
file, exporting performance statistics to, 315
duplex, configuring, 108
customer information, 324
customer support, registering Remote Copy, 320

D
data
  and deleting volumes, clearing statistics sample, 312
  preserving with snapshots, 247
data availability
  changing, 242
  requirements for setting replication priority, 239
data mining, using SmartClone volumes, 265
data redundancy
  and RAID status, 71
  changing, 242
  requirements for setting replication priority, 239
data replication, 223
  levels allowed in clusters, 223
  planning, 223
  requirements for setting, 238
data transfer, and RAID status, 71
data transmission, 111
date
  setting with NTP, 120
  setting without NTP, 121
date and time, for scheduled snapshot, 255
decreasing volume size, 242
defaults, restoring for the Performance Monitor, 313
definition
  clusters, 32
  management groups, 32
  remote copies, 32
  servers, 32
  sites, 32
  SmartClone volumes, 263
  snapshots, 32
  volumes, 32
definition of RAID configurations, 57
degraded RAID, 71
Deleting servers, 292
deleting
  administrative groups, 27
  administrative users, 124
  an administrative group, 27
  clone point, 285
  clusters, 219
  DNS servers, 14
  management groups, 87
  multiple SmartClone volumes, 285
  network interface bonds, 104
  NIC bond in Configuration Interface, 344
  NTP server, 121
  routing information, 115
  SmartClone volumes, 284
  snapshot schedules, 256
  snapshots, 227, 261
  volumes, 243
    prerequisites for, 258
Failover the Primary Volume to the Selected Remote Volume Below option
• Edit Volume, changing a remote snapshot to a primary volume

Making an application-managed snapshot available on a stand-alone server
Use this procedure to make an application-managed snapshot available on a stand-alone server (not part of a Microsoft cluster).
1. Disconnect the iSCSI sessions.
2. Do one of the following, based on what you want to do with the application-managed snapshot:
   • Convert temporary space
   • Create a SmartClone
   • Promote a remote volume to a primary volume, using the Failover/Failback Volume Wizard and selecting the Failover the Primary Volume to the Selected Remote Volume Below option
   • Edit Volume, changing a remote snapshot to a primary volume
3. Connect the iSCSI sessions to the new target volume.
4. Launch Windows Logical Disk Manager.
5. Bring the disk online.
6. Open a Windows command line and run diskpart.exe.
7. List the disks that appear to this server by typing the command list disk.
8. Select the disk you are working with by typing select disk <number>, where <number> is the corresponding number of the disk in the list.
9. Display the options set at the disk level by typing detail disk.
10. If the disk is listed as read-only, change it by typing att disk clear readonly.
11. Select the volume you are working with by typing select volume <number>, where <number> is the corresponding number of the volume in the list.
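The diskpart steps above can also be run non-interactively from a script, which is convenient if you repeat this procedure for several snapshots. The following is a minimal sketch: the disk and volume numbers are placeholders for the numbers you identified with list disk and list volume, and the final volume-level command is included on the assumption that the remaining steps of the procedure clear the same attribute at the volume level. Save the commands as clear-ro.txt and run diskpart /s clear-ro.txt:

  rem clear-ro.txt: clear read-only attributes left on a promoted snapshot volume
  select disk 1
  detail disk
  attributes disk clear readonly
  select volume 1
  attributes volume clear readonly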
in a normal state, even though the storage node is unavailable.

Verifying virtual manager status
Verify whether a virtual manager has been started, and if so, which storage node it is started on. Select the virtual manager icon in the navigation window. The Details tab displays the location and status of the virtual manager.

Stopping a virtual manager
When the situation requiring the virtual manager is resolved, either because the unavailable site recovers or because the communication link is restored, you must stop the virtual manager. Stopping the virtual manager returns the management group to a fault-tolerant configuration.
1. Select the storage node with the virtual manager.
2. Click Storage Node Tasks on the Details tab and select Stop Virtual Manager. A confirmation message opens.
3. Click OK. The virtual manager is stopped. However, it remains part of the management group and part of the quorum.

Removing a virtual manager
You can remove the virtual manager from the management group altogether.
1. Select the management group from which you want to remove the virtual manager, and log in.
2. Click Management Group Tasks on the Details tab and select Delete Virtual Manager. A confirmation window opens.
3. Click OK to continue. The virtual manager is removed.

NOTE: The CMC will not allow you to delete a manager or virtual manager if that deletion causes a loss of quorum.
you are in console mode.

You do not have the cursor available, or you do not have the keyboard available: Press Ctrl+Alt to regain the cursor. If your keyboard is missing, move the mouse to the console window and click once.
You want to see your Failover Manager, but the window is black: Your console window has timed out. Click in the window with your mouse, then press any key.

Virtual manager overview
A virtual manager is a manager that is added to a management group but is not started on a storage node until it is needed to regain quorum. A virtual manager provides disaster recovery for one of two configurations:
• Configurations with only 2 storage nodes. A virtual manager is automatically added when creating a management group using 2 storage nodes.
• Configurations in which a management group spans 2 geographic sites.
See Managers and quorum on page 172 for detailed information about quorum, fault tolerance, and the number of managers. Because a virtual manager is available to maintain quorum in a management group when a storage node goes offline, it can also be used for maintaining quorum during maintenance procedures.

When to use a virtual manager
Use a virtual manager in the following configurations:
• A management group across two sites with shared data
• A management group in a single location with two storage nodes
Use a virtual manager for disaster recovery in a two-site configuration
Designing and Planning Remote Copy in the Remote Copy User Manual.

Provisioning storage
Provisioning storage with the SAN iQ software entails first deciding on the size of the volume presented to the operating system and to the applications. Next, decide on the configuration of snapshots, including schedules and retention policies.

Best practices
To take full advantage of the features of the HP LeftHand Storage Solution, use the appropriate combination of RAID, volume and snapshot replication, schedules to snapshot a volume, and related retention policies.

Table 44 Recommended SAN configurations for provisioning storage
RAID: Replication Level
RAID0: 2-Way or 3-Way
RAID10: 2-Way
RAID5: 2-Way
RAID6: 2-Way

Provisioning volumes
Configure volume size based on your data needs, how you plan to provision your volumes, and whether you plan to use snapshots. The SAN iQ software offers both full and thin provisioning for volumes.

Table 45 Volume provisioning methods
Method: Settings
Full provisioning: Volume size x replication level = amount of space allocated on the SAN
Thin provisioning: Volume size > amount of space allocated on the SAN

Full provisioning
Full provisioning reserves the same amount of space on the SAN as is presented to application servers. Full provisioning ensures that the application server will not fail a write. When a fully provisioned volume approaches capacity, you receive a warning.
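A worked example of the settings in Table 45 (the sizes here are hypothetical): a fully provisioned 500 GB volume with a replication level of 2 allocates 500 GB x 2 = 1,000 GB on the SAN immediately, whether or not any data has been written, which is what guarantees the application server can never fail a write for lack of space. The same volume thinly provisioned allocates space only as data arrives; for example, 50 GB of written data x 2 replicas = 100 GB allocated, with the allocation growing toward the full 1,000 GB as the application writes.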
Viewing the diagnostic report
The results of diagnostic tests are written to a report file. For each diagnostic test, the report lists whether the test was run and whether the test passed, failed, or issued a warning.

NOTE: If any of the diagnostics show a result of Failed, call Customer Support.

To view the report file:
1. After the diagnostic tests complete, save the report to a file.
2. Browse to the location where you saved the diagnostics report (.txt) file.
3. Open the report file.

List of diagnostic tests
This section shows the diagnostic tests that are available for the storage node. For each test, the table lists the following information:
• A description of the test
• Pass/fail criteria
See the specific table for your platform:
• For NSM 160 and NSM 260, see Table 29.
• For DL 380, DL 320s, NSM 2120, HP LeftHand P4500, and HP LeftHand P4300, see Table 30.
• For IBM x3650, see Table 31.
• For Dell 2950, NSM 2060, and NSM 4150, see Table 33.

Table 29 List of hardware diagnostic tests and pass/fail criteria for NSM 160 and NSM 260
Fan Test: Checks the status of all fans. Pass criteria: fan is normal. Fail criteria: fan is faulty or missing. (NSM 160, NSM 260)
Power Test: Checks the status of all power supplies. Pass criteria: supplies normal. Fail criteria: supply is faulty or missing. (NSM 160, NSM 260)
Temperature Test: Checks the status of all temperature sensors. Pass criteria: temperature is within normal range. Fail criteria: temperature is outside
Finishing up
1. Click OK when you are finished changing or removing an iSNS server.
2. Reconfigure the iSCSI initiator with the changes.
3. Reconnect to the volumes.
4. Restart the applications that use the volumes.

Troubleshooting a cluster
Auto Performance Protection monitors individual storage node health related to performance issues that affect the volumes in the cluster. Repairing a storage node provides a way to replace a failed disk in a storage node and minimize the time required to bring the storage node back to normal operation in the cluster, with fully resynchronized data.

Auto Performance Protection
If you notice performance issues in a cluster, a particular storage node may be experiencing slow I/O performance, overload, or latency issues. You can identify whether Auto Performance Protection is operating by checking the storage server status on the storage node Details tab. Auto Performance Protection is indicated by two unique statuses reported on the Details tab. You will also receive alert notifications on these statuses.

Storage Server Overloaded
The Overloaded status indicates that operations to the storage node are completing too slowly. During the overloaded state, volume availability is maintained while the storage node is quarantined in the cluster. While the storage node is quarantined, it does not participate in I/O, which should relieve the performance degradation. After the operations return to normal in 10 minutes,
Settings window to finish.
Select By Name and type a host name. That host name must exist in DNS, and the NSM must be configured with DNS, for the client to be recognized by the host name. Click OK. The host name appears in the Access Control list. Click OK in the Edit SNMP Settings window to finish.

Editing access control entries
1. Log in to the storage node and expand the tree.
2. Select the SNMP category from the tree.
3. On the SNMP General tab window, click SNMP General Tasks and select Edit SNMP Settings.
4. Select the Access Control entry from the list.
5. Click Edit.
6. Change the appropriate information.
7. Click OK.
8. Click OK on the Edit SNMP Settings window when you are finished.

Deleting access control entries
1. Log in to the storage node and expand the tree.
2. Select the SNMP category from the tree.
3. On the SNMP General tab window, click SNMP General Tasks and select Edit SNMP Settings. The Edit SNMP Settings window opens.
4. Select a client listed in the Access Control list and click Delete. A confirmation message opens.
5. Click OK.
6. Click OK on the Edit SNMP Settings window when you are finished.

Using the SNMP MIB
The LeftHand Networks MIB provides read-only access to the storage node. The SNMP implementation in the storage node supports MIB-II compliant objects. In addition, MIB files have been developed for storage node specific information. These files, when loaded in the SNMP management tool, allow you to see storage node specific information.
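For example, a standard net-snmp command-line tool can read the MIB-II system group from a storage node. The IP address and community string below are placeholders; use the community string configured on the SNMP General tab, and note that version 2c is specified per the recommendation above:

  snmpwalk -v 2c -c public 10.0.61.16 system

A successful walk returns objects such as sysDescr.0 and sysUpTime.0. A timeout usually means the SNMP agent is disabled or the querying host is not in the Access Control list.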
navigation window and the Volumes and Snapshots tab .......... 290
141 Warning after changing load balancing check box .......... 292
142 Example showing overview of cluster activity .......... 298
143 Example showing volume's type of workload .......... 298
144 Example showing fault isolation .......... 299
145 Example showing IOPS of two volumes .......... 299
146 Example showing throughput of two volumes .......... 300
147 Example showing activity generated by a specific server .......... 300
148 Example showing network utilization of three storage nodes .......... 301
149 Example comparing two clusters .......... 302
150 Example comparing two volumes .......... 302
151 Performance Monitor window and its parts .......... 303
152 Performance Monitor toolbar .......... 304
153 Example of a warning message ..........
additional settings. For more information about characteristics of SmartClone volumes, see Defining SmartClone volume characteristics on page 267.
Click OK when you have finished setting up the SmartClone volume and updated the table. The new volume appears in the navigation window, with the snapshot now a designated clone point for both volumes. Assign a server, and configure hosts to access the new volume, if desired.

(Screenshot: the navigation window showing the new volume. Callouts: 1. Original volume. 2. New SmartClone volume from snapshot. 3. Shared clone point.)
Figure 116 New volume with shared clone point

If you created the SmartClone from an application-managed snapshot, use diskpart.exe to change the resulting volume's attributes. For more information, see Making an application managed snapshot available on page 250.

Cancel the roll back operation
If you need to log off iSCSI sessions, stop application servers, or take other actions, cancel the operation, perform the necessary tasks, and then do the roll back.
1. Click Cancel.
2. Perform the necessary actions.
3. Start the roll back.
participating in RAID. This includes:
• Drives that are missing
• Unconfigured drives
• Drives that were powered down
• Failed drives rejected by the array with I/O errors
• Drives that are rebuilding
• Hot-spare drives

Statistics: Information about the RAID for the storage node.

Unit Number: Identifies devices that make up the RAID configuration, including:
• Type of storage (BOOT, LOG, SANIQ, DATA)
• RAID level (0, 1, 5, Virtual)
• Status (Normal, Rebuilding, Degraded, Off)
• Capacity
• Rebuild statistics (complete, time remaining)

RAID O/S partitions: Information about O/S RAID.

This term: Means this

Minimum Rebuild Speed: Minimum amount of data (in MB/second) that will be transferred during an O/S RAID rebuild. The higher this number, the less bandwidth available for users, because the system will not transfer at a rate lower than what is set.

Maximum Rebuild Speed: The maximum amount of data (in MB/second) that will be transferred during an O/S RAID rebuild.

Statistics: Information about the O/S RAID for the storage node.

Unit Number: Identifies devices that make up the O/S RAID configuration, including:
• Type of storage (BOOT, LOG, SANIQ, DATA)
• RAID level (0, 1, 5)
• Status (Normal, Rebuilding, Degraded, Off)
• Capacity
• Rebuild statistics (complete, time remaining)

Boot device statistics:
disk drives are present. Pass criteria: all drives are present. Fail criteria: drives are missing.

Disk SMART Health Test: S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) is implemented in all modern disks. A program inside the disk constantly tracks a range of the vital characteristics, including driver, disk heads, surface state, and electronics. This information may help predict hard drive failures. Pass criteria: all drives pass the health test. Fail criteria: warning, or failed if one or more drives fails the health test.

Generate SMART logs for analysis: Generates a drive health report. Pass criteria: the report was successfully generated. Fail criteria: the report was not generated; contact Customer Support.

Generate DSET Report and Perc Event Logs for analysis: Generates a drive health report. Pass criteria: the report was successfully generated. Fail criteria: the report was not generated; contact Customer Support.

Using the hardware information report
Hardware information reports display statistics about the performance of the storage node, its drives, and configuration. Statistics in the hardware reports are point-in-time data, gathered when you click the Refresh button on the Hardware Information tab.

Generating a hardware information report
To generate a Hardware Information report:
Select the Hardware Information tab.

(Screenshot: the Diagnostics, Hardware Information, and Log Files tabs in the CMC.)
disk into the storage node, the disk must be powered on from the Storage category Disk Setup tab. Until the disk is powered on, it is listed as Off or Missing in the Status column, and the other columns display dotted lines. Figure 62 shows a representation of a missing disk in a storage node.

(Screenshot: the Disk Setup tab, where the remaining disks are listed as Inactive, Health normal, Safe to Remove: Yes, COMPAQ models, SCSI 320MB class, 33.92 GB capacity.)
Figure 62 Viewing a powered-off or missing disk

Manually powering on the disk in the IBM x3650
1. In the navigation window, select the IBM x3650 in which you replaced the disk drive.
2. Select the Storage configuration category.

CAUTION: Wait until the RAID status on the RAID Setup tab displays Rebuilding.

3. Click the Disk Setup tab.
4. Select the disk in the list to power on.
5. Click Disk Setup Tasks and select Power On Disk.
6. Click OK on the confirmation message.

RAID rebuilding
After the disk is powered on, RAID starts rebuilding on the replaced disk. Note that there may be a delay of up to a couple of minutes before you can see that RAID is rebuilding.
disk number, flash status, capacity, driver version, media used for device, and model number.

Power supply: Information about the type of power supplies in the storage node. (NSM 160, NSM 260, VSA)
Power supplies: Status information about those power supplies.
Controller cache items: Information about RAM, including but not limited to the model, serial number, status, battery status, versioning, cache size, memory size, and voltage.
Sensor data: For the hardware listed, shows information about fan, voltage, and temperature sensors on the motherboard, including minimums and maximums.

Table 35 Selected details of the Hardware Report for DL 380, DL 320s, NSM 2120, HP LeftHand P4500, and HP LeftHand P4300
This term: Means this
Last refreshed: Date and time the report was created.
Hostname: Hostname of the storage node.
Storage node software: Full version number for storage node software. Also lists any patches that have been applied to the storage node.
IP address: IP address of the storage node.
Support key: Support Key is used by a Technical Support representative to log in to the storage node.
it is the right size and has the right carrier. For confirmation on which disks need to be replaced, contact customer support.

Prerequisites
• All replicated volumes and snapshots should show a status of Normal. Non-replicated volumes may be blinking.
• If volumes or snapshots are not replicated, change them to 2-way replication before replacing the disk. If the cluster does not have enough space for the replication, take a backup of the volumes or snapshots and then delete them from the cluster. After the disk replacement is complete, recreate the volumes and restore the data from the backup.
• Any volumes or snapshots that were being deleted should have finished deleting.
• Write down the order in which the storage nodes are listed in the Edit Cluster window. You must ensure that they are all returned to that order when the repair is completed.

Replacing disks
Use this procedure for any one of the listed cases:
• RAID0 goes off
• When multiple disks need to be replaced on a storage node with RAID5, RAID50, or RAID6
• When multiple disks on the same mirror set need to be replaced on a storage node with RAID10

Verify storage node not running a manager
Verify that the storage node that needs the disk replacement is not running a manager.
1. Log in to the management group.
2. Select the storage node in the navigation window and review the Details tab information. If the Storage Node Status shows Manager Normal, and the Management G
initialized and ready to be used.
• The device is ready to be removed from the storage node. It will not be used to boot the storage node.
Failed: The device encountered an I/O error and is not ready to be used.
Uninitialized: The device has not yet been used in a storage node. It is ready to be activated.
Not Recognized: The device is not recognized as a boot device.
Unsupported: The device cannot be used. For example, the compact flash card is the wrong size or type.

NOTE: When the status of a boot device changes, an alert is generated. See Using alerts for active monitoring on page 135.

Starting or stopping a dedicated boot device (NSM 160, NSM 260)
On storage nodes with dedicated boot devices, use this procedure to remove a boot device from service and later return it.
1. From the navigation window, select the storage node, and log in if necessary.
2. Open the tree below the storage node and select Storage.
3. Select the Boot Devices tab.
4. In the tab window, select the boot device you want to start or stop.
5. Click Boot Devices Tasks and select either:
   • Activate, to make a boot device available in the event of a failure, or
   • Deactivate, to remove a boot device from service.

Powering on or rebooting storage nodes with two dedicated boot devices (NSM 160, NSM 260)
When a storage node with two dedicated boot devices powers on or reboots, it references boot configuration information
Click OK when done.

(Screenshot: the naming conventions dialog, with Use Default and Use Custom options per element: Management Groups, SmartClone Volumes VOL_, Snapshots SS_, Remote Snapshots RS_, Schedules to Snapshot a Volume _Sch_SS_, Schedules to Remote Snapshot a Volume _Sch_RS_. For elements that are derived from volumes, the volume name will be prepended to the default or custom name. Buttons: Restore Default Settings, OK, Cancel.)
Figure 3 Default naming conventions for snapshots and SmartClone volumes

Changing naming conventions
Change the elements that use a default naming convention, or change the naming convention itself. Table 1 illustrates the default naming conventions built into the SAN iQ software.

Table 1 Default names provided
Element: Default name
Disabled by default:
  Management Groups: MG_
  Clusters: CL_
  Volumes: VOL_
Enabled by default:
  SmartClone Volumes: VOL_
  Snapshots: SS_
  Remote Snapshots: RS_
  Schedules to Snapshot a Volume: _Sch_SS_
  Schedules to Remote Snapshot a Volume: _Sch_RS_

If you were to use the given defaults for all the elements, the resulting names would look like those in the following example. Notice that the volume name carries into all the snapshot elements, including SmartClone volumes, which are created from a snapshot.

Table 2 Example of how default names work
Element
Disabled at installation:
  Management Groups
  Clusters
  Volumes
Enabled at installation:
disk in a hot-swap platform (NSM 160, NSM 260, DL380, DL320s, NSM 2120, Dell 2950, NSM 2060, NSM 4150, HP LeftHand P4500, HP LeftHand P4300) on page 86.

CAUTION: You must always use a new drive when replacing a disk in a Dell 2950, NSM 2060, or NSM 4150. Never reinsert the same drive or another drive from the same Dell 2950, NSM 2060, or NSM 4150.

Rebuilding data
The following steps take you through first rebuilding RAID on the storage node, and then rebuilding data on the storage node after it is added to the management group and cluster.

Re-create the RAID array
1. Select the Storage category, and then select the RAID Setup tab.
2. Click RAID Setup Tasks and select Reconfigure RAID. The RAID Status changes from Off to Normal.

NOTE: If RAID reconfigure reports an error, reboot the storage node and try reconfiguring the RAID again. If this second attempt is not successful, call customer support.

Checking progress for RAID array to rebuild
For NSM 160, NSM 260, DL380, DL320s, NSM 2120, IBM x3650, HP LeftHand P4500, and HP LeftHand P4300 only: use the Hardware Information report to check the status of the RAID rebuild.
1. Select the Hardware category, and then select the Hardware Information tab.
2. Click the link on the tab, Click to Refresh, and scroll down to the RAID section of the Hardware report, shown in Figure 167. You can view the RAID rebuild rate and percent complete.
(Screenshot: the Use Summary tab of the Details window, showing Storage Space (Total, Provisioned, Not Provisioned), Provisioned Space (Volumes, Snapshots, Total), Used Space (Volumes, Snapshots, Total), and Saved Space (Thin Provisioning, SmartClone Feature, Total), alongside a bar chart of Provisioned for Volumes, Provisioned for Snapshots, Not Provisioned, Total Space, and Provisionable Space Remaining, plus an Alerts pane listing Date/Time, Hostname, IP Address, and Alert Message.)
Figure 107 Reviewing the Use Summary tab

In the Use Summary window, the Storage Space section reflects the space available on the storage nodes in the cluster. Storage space is broken down as shown in Table 48.

Table 48 Information on the Use Summary tab
Category: Description
Storage space, Total: Combined space available in the cluster for storage, volumes, and snapshots.
Provisioned: Amount of space allocated for storage, including both volumes and snapshots. This value increases as snapshots are taken or as a thinly provisioned volume grows.
Not provisioned: Amount of space
1. Rename SmartClone volume in list
Figure 120 Rename SmartClone volume from base name

Shared versus individual characteristics
Characteristics for SmartClone volumes are the same as for regular volumes. However, certain characteristics are shared among all the SmartClone volumes and snapshots created from a common clone point. If you want to change one of these shared characteristics for one SmartClone volume, that change will apply to all related volumes and snapshots, including the original volume and snapshot from which you created the SmartClone volumes. Simply use Edit Volume on the selected volume and make the change to the volume. A message opens, stating that the change will apply to all of the associated volumes, which are noted in the message.

For example, in Figure 121, in the cluster Programming, there are 10 SmartClone volumes created from one source volume and its clone point. You want to move the first of the SmartClone volumes, C_class_1, to the cluster SysAdm.

(Screenshot: the navigation window showing the Programming cluster with its storage nodes, volumes, and snapshots. Callouts: 1. Source volume. 2. Clone point. 3. SmartClone volumes (10).)
Figure 121 Programming cluster
2. Double-click to open the tree underneath the storage node. The list of storage node configuration categories opens.
3. Select the Storage category. The Storage tab window opens.
4. Select the RAID Setup tab and verify the RAID settings.
5. In the list of configuration categories, select the TCP/IP Network category and configure your network settings.
6. In the list of configuration categories, select the SNMP and/or Alerts categories to configure the monitoring for your IP SAN.

Creating a volume using the wizard
Next, you create the storage hierarchy using the Management Groups, Clusters, and Volumes wizard, found on the Getting Started Launch Pad. Select Getting Started in the navigation window to access the Getting Started Launch Pad. On the Launch Pad, select the Management Groups, Clusters, and Volumes wizard. The first task in the wizard is to assign one or more storage nodes to a management group. The second task is to cluster the storage nodes. The third task is to create a storage volume. This storage hierarchy is depicted in Figure 5.

(Screenshot: the navigation window showing a management group containing servers, administration, sites, a virtual manager, a cluster named Denver with a Performance Monitor, three storage nodes, and two volumes with two snapshots.)
Figure 5 The SAN iQ software storage hierarchy

While working through the wizard, you need to have ready the following information: a name for your management group
319. l ler cache items Information about the RAID controller card and Battery Backup Unit BBU includ ing model number serial number cache status battery status hard ware version and firmware version Power supply Shows the type or number of power sup plies 161 HP LeftHand HP LeftHand This term Means this DL 380 DL 320s P4500 P4300 Status informa Power supplies tion about the X X X X power sup plies Shows for the hardware lis ted the status Sensors real measured X X X X value minim um and maxim um values Table 36 Selected details of the hardware report for IBM x3650 This term Means this IBM x3650 Last refreshed Date and time report created X Hostname Hostname of the storage node X IP number IP address of the storage node X Full version number for storage node software Also lists any Storage node software patches that have been applied to 5 the storage node Support Key is used by a Technical Support key Support representative to log into X the storage node Information about NICs in the stor age node including the card num ber manufacturer description MAC address mask gateway NIC data and mode Mode shows manv X al auto disabled Manual equals a static IP auto equals DHCP and disabled means the interface is disabled Information about DNS if a DNS DNS data server is being used providing the X IP address of the DNS servers Informa
Such information includes model number, serial number, hard disk capacity, network characteristics, RAID configuration, DNS server configuration details, and more.

Installing the LeftHand Networks MIB
The LeftHand Networks MIB files are installed when you install the HP LeftHand Centralized Management Console, if you selected a Complete installation. (A Complete installation applies when an SNMP management tool console and the HP LeftHand Centralized Management Console run on the same server.) If you selected a Typical installation, you must load the LeftHand Networks MIB in the Network Management System, as outlined below.
1. Load LEFTHAND-NETWORKS-GLOBAL-REG-MIB.
2. Load LEFTHAND-NETWORKS-NSM-MIB.
3. The following MIB files can be loaded in any sequence:
   • LEFTHAND-NETWORKS-NSM-CLUSTERING-MIB
   • LEFTHAND-NETWORKS-NSM-DNS-MIB
   • LEFTHAND-NETWORKS-NSM-INFO-MIB
   • LEFTHAND-NETWORKS-NSM-NETWORK-MIB
   • LEFTHAND-NETWORKS-NSM-NOTIFICATION-MIB
   • LEFTHAND-NETWORKS-NSM-NTP-MIB
   • LEFTHAND-NETWORKS-NSM-SECURITY-MIB
   • LEFTHAND-NETWORKS-NSM-STATUS-MIB
   • LEFTHAND-NETWORKS-NSM-STORAGE-MIB

NOTE: Any variable that is labeled Counter64 in the MIB requires version 2c or later of the protocol.

Other standard version 2c compliant MIB files are also provided on the CD. Load these MIB files in the Network Management System, if required.

Disabling the SNMP agent
Disable the SNMP Agent if you no longer want to use SNMP applications to monitor your storage nodes.
321. l disks are combined into mirrored sets of disks and then combined into one striped disk Examples of RAID10 devices are shown in Figure 19 through Figure 25 If RAID1 is configured for the NSM 260 the physical disks are combined into mirrored pairs of disks as shown in Figure 20 RAID1 uses only one pair of disks RAID10 uses up to eight pairs of disks depending on the platform NSM disks 1 striped RAID device Figure 19 RAID10 on an NSM 160 mirroring done at hardware level Storage Node Disks Mirrored RAID disk pairs Figure 20 RAID1 on an NSM 260 6l DL 380 Disks Mirrored disk pairs 1 striped RAID device Figure 21 RAID10 on a DL380 DL 320s NSM 2120 and HP LeftHand P4500 Disks Mirrored RAID disk pairs nle Figure 22 RAID10 on the DL320s NSM 2120 and the HP LeftHand P4500 2 striped RAID device ka a bi Mirroreddisk pairs in the HP LeftHand P4300 1 5 3 7 2 6 4 8 1 RAID device Figure 23 RAID1 0 in the HP LeftHand P4300 Disk Status Health Safe to Ri 0 Active normal Yes B 1 Active normal Yes B 2 Active normal Yes B a Active normal Yes E 4 Active normal Yes E 5 Active normal Yes 1 Mirrored disk pair 1 2 Mirrored disk pair 2 3 Mirrored disk pair 3 Figure 24 Initial RAID10 setup of the Dell 2950 and NSM 2060 62 Storage Disk Sta
322. l manager or failover manager recommended ates Special Manager None Quorum 2 Local Bandwidth 4 MB sec S ExchLogs EE Performance Monitor Nous suree Nodes 3 Name IP Address Model RAID Status RAID Configu Software Ve Manager Special Mana Denyar Denver 3 10 0 6117 DELL2950 Normal RAID 5 8 0 00 1643 0 Normal mA Q Denver 1 10 0 6116 DELL2950 Normal RAD5 8 0 00 1643 0 Normal Ee aes Denver 2 100 6032 NSM4150 Normal RAID 50 8 0 00 1659 0 Volumes 1 and Snapshot L P HaLogFiles 0 1 Management group displays icon 2 Status describes quorum risk Figure 86 Manager quorum at risk Deleting the management group is the only way to stop the last manager Implications of stopping managers Editing Quorum of the storage nodes may be decreased e Fewer copies of configuration data is maintained e Fault tolerance of the configuration data may be lost e Data integrity and availability may be compromised CAUTION Stopping a manager can result in the loss of fault tolerance 1 In the navigation window select a management group and log in Select the storage node for which you want to stop the manager Click Storage Node Tasks on the Details tab and select Stop Manager A confirmation message opens 4 Click OK to confirm stopping the manager a management group Editing management group tasks includes the following items e Changing local bandwidth priority Editing Remote Bandwidth
323. led O through 5 from left to right top to bottom when you are looking at the front of the IBM x3650 Figure 49 For the IBM x3650 the columns Health and Safe to Remove will respectively help you assess the health of a disk and whether or not you can replace it without losing data RAID Setup Disk Setup Status Health Safe to Remove Model SerialNumber Class Capacity Active normal No IBM ESXSGNA J20LC2RK SAS 306Gb 68 37 GB Active normal IBM ESXSGNA J20VWS4LK SAS 306Gb 68 37 GB Active normal IBM ESXSGNA J2Z0ARYNK SAS 306Gb 68 37 GB Active normal IBM ESXSGNA J2MAJVQK SAS 306Gb 68 37 GB Active normal IBM ESXSGNA JZOKYQFK SAS 3 0 Gb 68 37 GB Active normal IBM ESXSGNA J20VWOVW1K SAS 3 0 Gb 68 37 GB Figure 49 Arrangement of drives in the IBM x3650 Viewing disk status for the VSA For the VSA the Disk Setup window shows 1 virtual disk RAID Setup Disk Setup Disk Status Health Safe to Remove Model Serial Number Class Capacity 1 Active normal No VMware Virtual vMO000 Virtual 2047 00 GB Figure 50 Viewing the disk status of a VSA EY NOTE If you want to change the size of the data disk in a VSA see the HP LeftHand P4000 VSA User Manual for instructions about recreating the disk in the VI Client Viewing disk status for the Dell 2950 and NSM 2060 The disks are labeled O through 5 in the Disk Setup window and correspond with
324. leting management group causes data loss 187 DHCP static IP addresses 92 unicast communication 92 return repaired node to same place 217 stopping manager can cause data loss 82 websites HP HP Subscriber s Choice for Business 27 product manuals 27 windows Alert window 30 navigation window 30 Tabs window 30 wizards Getting Started Launch Pad 32 write failure warnings write failure 222 371 372
325. lient IO This is the iSCSI traffic and does not include other traffic such as replication remote snapshots and management functions For storage nodes and devices the statistics report the total traffic including iSCSI traffic along with replication remote snapshots and management functions 306 Maonitoring performance The difference between what the cluster volumes and snapshots are reporting and what the storage nodes and devices are reporting is the overhead replication remote snapshots and management functions Client I O NSM IOPs Physical Disks Physical Disks Physical Disks Figure 156 Performance statistics and where they are measured The following statistics are available Table 69 Performance Monitor statistics iSCSI statistics for clusters volumes and snapshots Cluster volume and snapshot reads and writes occur at this layer Statistics for storage nodes and devices Includes replication remote snapshot and management functions such as manager communications Volume Statistic Definition Cluster Q NSM Snap shot IOPS Reads Average read requests per second for the sample X X X interval IOPS Writes Average write requests per second for the sample X X X interval IOPS Total Average read write requests per second for the X X X sample interval Throughput Reeds Average read bytes per second for the sample in X X X terval Throughput Writes Average write byt
326. line help and other information about the CMC and the SAN iQ software File Find Tasks Help 1 ed Gett iii a Volume Normal Coordinating man becial Manager None Q Model IBMX3650 Ne 1 Menu bar Figure 2 Viewing the menu bar in the navigation window Using the navigation window The navigation window displays the components of your network architecture based the criteria you set in the Find item in the menu bar or by using the Find Storage Nodes wizard Choose to display a small group of storage nodes such as those in one management group or display all storage nodes at one time Logging in The CMC automatically logs in to storage nodes in the Available Nodes pool to access the configuration categories After you have created management groups and you then open the CMC you must log in to access the management group After you have logged in to one management group the CMC attempts to log you in automatically to other management groups using the first login A CAUTION Do not open the CMC on more than one machine at one time Opening more than one session of the CMC on your network is not supported Traversing the navigation window As you move through the items in the navigation window the tab window changes to display the information and tabs about that item Single clicking Click an item once in the navigation window to select it 31 Click the plus sign once to open up a tree th
327. ll the Microsoft MPIO DSM if you want to use the HP LeftHand DSM for MPIO Finding the storage nodes Open the CMC and using the Getting Started Launch Pad start the Find Nodes Wizard To use the wizard you need to know either the e The subnet and mask of your storage network or The IP addresses or host names of the storage nodes When you have found the storage nodes they appear in the Available Nodes pool in the navigation window Configuring storage nodes Configure the storage node next If you plan to use multiple storage nodes they must all be configured before you use them for clustered storage The most important categories to configure are e RAID The storage node is shipped with RAID already configured and operational Find instructions for ensuring that drives in the storage node are properly configured and operating in Chapter 3 on page 55 e TCP IP Network Bond the NIC interfaces and set the frame size NIC flow control and speed and duplex settings Read detailed network configuration instructions in Chapter 4 on page 89 37 Alerts Use the email alert function or SNMP to ensure you have immediate access to up to date alert and reporting information Detailed information for setting up SNMP and alerts can be found in Chapter 7 on page 129 and Using alerts for active monitoring on page 135 Configure storage node categories 1 From the navigation window select a storage node in the Available Nodes poo
328. ltan eously for data transfer Log files for the storage node are stored both locally on the storage node and are also written to a remote log server A collection of one or more storage nodes which serves as the container within which you cluster storage nodes and create volumes for storage Manager software runs on storage nodes within a management group You start managers on designated storage nodes to govern the activity of all of the storage nodes in the group The Management Information Base provides SNMP read only access to the storage node Variables that report health status of the storage node These variables can be monitored using alerts emails and SNMP traps Graphically depicts the status of each storage node Storage Nodes on the network are either available or part of a management group Network Time Protocol In RAIDS redundant information is stored as parity distributed across the disks Parity allows the storage node to use more disk capacity for data storage An overprovisioned cluster occurs when the total provisioned space of all volumes and snapshots is greater than the physical space available on the cluster This can occur when there are snapshot schedules and or thinly provisioned volumes asso ciated with the cluster Snapshots that are taken at a specific point in time but an application writing to that volume may not be quiesced Thus data may be in flight or cached and the actual data on the volum
329. ltaneously For Link Aggregation Dynamic Mode both NICs must be plugged into the same switch and that switch must be LACP capable and both support and be configured for 802 3ad aggregation e For Active Passive plug the two NICs on the storage node into separate switches While Link Aggregation Dynamic Mode will only survive a port failure Active Passive will survive a switch failure NIC bonding and speed duplex frame size and flow control settings These settings are controlled on the TCP Status tab of the TCP IP Network configuration category If you change these settings you must ensure that both sides of the NIC cable are configured in the same manner For example if the storage node is set for Auto Auto the switch must be set the same See The TCP status tab on page 107 for more information Table 14 Comparison of active passive link aggregation dynamic mode and adaptive load balancing bonding Feature Active passive ms ile ail dynam Adaptive load balancing Bandwidth oe mised Sg Simultaneous use of both Simultaneous use of both width NICs increases bandwidth NICs increases bandwidth Protection during port failure Yes Yes Yes Protection during switch failure Yes NICs can be plugged into different switches No Both NICs are plugged into the same switch Yes NICs can be plugged into different switches Requires support for 802 3ad link aggrega tion No Yes
330. lume is created Provisioning Server SmartClone volumes default to Thin Provisioning You can select Full Provisioning when you create them You can also edit the individual volumes after they are created and change the type of provisioning Server assigned to the volume While you can only assign one server when you create SmartClone volumes you can go back and add additional clustered servers later For more information see Assigning server connections access to volumes on page 292 Permission Type of access to the volume Read Read Write None Naming SmartClone volumes Because you may create dozens or even hundreds of SmartClone volumes you need to plan the naming convention for them For information about the default naming conventions built into the SAN iQ software see Setting naming conventions on page 34 When you create a SmartClone volume you can designate the base name for the volume This base name is then used with numbers appended incrementing to the total number of SmartClone volumes you create For example Figure 1 19shows a SmartClone volume with the base name of C and 10 clones The number in parenthesis indicates how many snapshots are under that volume 267 Z HP LeftHand Networks Centralized Manag Getting Started st Configuration Summary gt Available Nodes 4 EH fy TrainingOS is Servers 1 em ministration Sites E z Programming lt lt P
331. lumes in the volume set 6 Optional Edit the Snapshot Name for each snapshot EY NOTE Be sure to leave the Application Managed Snapshots option selected This option maintains the association of the volumes and snapshots and quiesces the application before creating the snapshots If you deselect the option the system creates a point in time consistent snapshot of each volume listed 7 Click Create Snapshots to create a snapshot of each volume The snapshots all display in the CMC A future release will identify snapshot sets in the CMC 249 Editing a snapshot You can edit both the description of a snapshot and its server assignment The description must be from O to 127 characters 1 Log in to the management group that contains the snapshot that you want to edit In the navigation window select the snapshot Click Snapshot Tasks on the Details tab and select Edit Snapshot Change the description as necessary Change the server assignment as necessary Click OK when you are finished The snapshot Details tab refreshes a a ae oN Mounting or accessing a snapshot A snapshot is a copy of a volume To access data in the snapshot you have two choices e Create a SmartClone volume from the snapshot to use for data mining development and testing or creating multiple copies See Create a new SmartClone volume from the snapshot on page 259 e Mount the snapshot for backing up or data recovery You assign
332. ly interface listed in the window Tab to select the bond and press Enter Tab to Delete Bond and press Enter Press Enter on the confirmation window wb wn On the Available Network Devices window tab to Back and press Enter Setting the TCP speed duplex and frame size You can use the Configuration Interface to set the TCP speed duplex and frame size of a network interface TCP speed and duplex You can change the speed and duplex of an interface If you change these settings you must ensure that BOTH sides of the NIC cable are configured in the same manner For example if the storage node is set for Auto Auto the switch must be set the same For more information about TCP speed and duplex settings see Managing settings on network interfaces on page 108 344 Using the Configuration Interface Frame size The frame size specifies the size of data packets that are transferred over the network The default Ethernet standard frame size is 1500 bytes The maximum allowed frame size is 9000 bytes Increasing the frame size improves data transfer speed by allowing larger packets to be transferred over the network and by decreasing the CPU processing time required to transfer data However increasing the frame size requires that routers switches and other devices on your network support that frame size For more information about setting a frame size that corresponds to the frame size used by routers switches and other d
333. m has been lost The virtual manager must be added to the management group before any failure occurs A virtual manager must run only until the site is restored or commu nication is restored The virtual manager should run only until the site is restored and data is resynchronized or until communication is restored and data is resynchron ized Illustrations of correct uses of a virtual manager are shown in Figure 94 201 Normal Operation Virtual Manager Added and Not Started E Virtual Manager 2 regular managers 2 regular managers Scenario 1 Communication Link Lost 2 regular managers 2 regular managers Start virtual manager on only one site Scenario 2 Site A Fails 2 regular managers Start virtual manager on site B 2 regular managers Start virtual manager on site A Examples of 2 site failure scenarios where a virtual manager is started to regain quorum In all the failure scenarios only one site becomes primary with a virtual manager started Figure 94 Two site failure scenarios that are correctly using a virtual manager Configuring a cluster for disaster recovery In addition to using a virtual manager you must configure your cluster and volumes correctly for disaster recovery This section describes how to configure your system including the virtual manager Best practice The following example describes configuring a managem
334. m the host 1 In the navigation window right click the volume whose server connection assignments you want to edit Select Assign and Unassign Servers Change the settings as needed Click OK Editing server assignments from a server connection You can edit the assignment of one or more volumes or snapshots to any server connection A CAUTION If you are going to unassign a server connection or restrict permissions stop any applications from accessing the volume or snapshot and log off the iSCSI session from the host In the navigation window right click the server connection you want to edit Select Assign and Unassign Volumes and Snapshots Change the settings as needed Click OK ee ee Completing the iSCSI Initiator and disk setup After you have assigned a server connection to one or more volumes you must configure the appropriate iSCSI settings on the server For information about iSCSI see Chapter 22 on page 335 294 Controlling server access to volumes Refer to the operating system specific documents in the Resource Center for more information about setting up volumes and iSCSI Persistent targets or favorite targets After you configure the iSCSI initiator you can log on to the volumes When you log on select the option to automatically restore connections This sets up persistent targets that automatically reconnect after a reboot For persistent targets you also need to set up dependencies
335. mal Tem 2 Way 16GB C 0 thin 08 08 2008 0 GB LogsAdmin Administrative Normal 2Way 1068 Full 08 08 2008 0 LogsFina Finance grou Normal 2 Nay 10GB C 0 Ful 08 08 2008 0 Figure 112 Viewing multiple volumes and snapshots Shift click or Ctrl click to select the volumes and snapshots you want to delete Getting Started Configuration Summary 8 Exchange Servers 1 Sq Administration E LogsFinance 2 Volumes 2 and Snapshots 4 GHP LeftHand Networks Centralized Management Console Woe Name Description Status JReplication Le rovisioned Sp Utiization Provisioning Created AdminSna Administrative Normal Tem AdminSna Administrative Normal Tem Finances Finance grou Normal Tem Finances Finance grou Normal Tem 2Way 2 Way 2Way 2Way 16B 168 16B Thin Thin 0 _ thin Figure 113 Deleting multiple volumes in one operation Right click and select Delete Volumes 08 08 2008 0 08 08 2008 0 08 08 2008 0 08 08 2008 0 A warning message opens asking you to verify that you want to delete the volumes and all the data on them Select the check box to confirm the deletion and click Delete The volumes their associated snapshots except for clone points and shared snapshots are deleted from the cluster Using volumes 14 Using snapshots Using
…Summary of NIC states during failover

Table 20 shows the states of Eth0 and Eth1 when configured for Link Aggregation Dynamic Mode.

Table 20 NIC status during failover with link aggregation dynamic mode

Normal operation
  Eth0: Preferred: No; Status: Active; Data transfer: Yes
  Eth1: Preferred: No; Status: Active; Data transfer: Yes

Eth0 fails; data transfer fails over to Eth1
  Eth0: Preferred: No; Status: Passive (Failed); Data transfer: No
  Eth1: Preferred: No; Status: Active; Data transfer: Yes

Eth0 restored
  Eth0: Preferred: No; Status: Active; Data transfer: Yes
  Eth1: Preferred: No; Status: Active; Data transfer: Yes

Example network configurations with link aggregation dynamic mode

A simple network configuration using Link Aggregation Dynamic Mode in a high availability environment is illustrated below.

[Figure 67: Link aggregation dynamic mode in a single-switch topology, showing NSM 260 storage nodes in Storage Cluster A connected to one switch.]

How adaptive load balancing works

Adaptive Load Balancing allows the storage node to use both interfaces simultaneously for data transfer. Both interfaces have an active status. If the interface link to one NIC goes offline, the other interface continues operating. Using both NICs also increases network bandwidth.

Requirements for adaptive load balancing

To configure Adaptive Load Balancing:
• Both NICs must be enabled.
• NICs must be configured to the same subnet.
• NICs can be connected to se…
[Figure 108: Viewing the space saved or reclaimable in the Volume Use tab of the Centralized Management Console. The Volume Use table, which is updated every 5 seconds, lists each volume and snapshot with its name, size, replication level, provisioning type, provisioned space, the space saved (thin provisioning) or reclaimable (full provisioning), maximum used space, and utilization.]
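As a rough illustration of the arithmetic behind the "saved" and "reclaimable" columns, here is a minimal Python sketch (a simplified, assumed model of the accounting, not SAN/iQ code):

def space_report(provisioned_gb, allocated_gb, thin):
    """Full provisioning reserves the whole provisioned size up front,
    so unwritten space is reclaimable; thin provisioning allocates only
    what has been written, so the difference is space saved."""
    delta = provisioned_gb - allocated_gb
    return f"{'Saved' if thin else 'Reclaimable'}: {delta:.1f} GB"

print(space_report(5.0, 0.5, thin=True))    # Saved: 4.5 GB
print(space_report(5.0, 1.0, thin=False))   # Reclaimable: 4.0 GB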
…mes. See Table 3.

Table 3 Numbering conventions with no defaults enabled

Element                                  Default name
Disabled at installation
  Management Groups                      None
  Clusters                               None
  Volumes                                None
Enabled at installation
  SmartClone Volumes                     Name_
  Snapshots                              None
  Remote Snapshots                       None
  Schedules to Snapshot a Volume         Name_
  Schedules to Remote Snapshot a Volume  Name_Pri, Name_Rmt

Creating storage by using the Getting Started Launch Pad

Follow the steps in this section to set up a volume quickly. Using the wizards on the Getting Started Launch Pad, you will work through these steps with one storage node and with one strategy. The rest of this product guide describes other methods to create storage, as well as detailed information on features of the iSCSI SAN.

Prerequisites

• Install the storage nodes on your network.
• Know the IP address or host name you configured with the KVM or serial Configuration Interface when you installed the storage node.
• Install the HP LeftHand Centralized Management Console software on a management workstation or server that can connect to the storage nodes on the network.
• Install an iSCSI initiator on the application server(s), such as the latest version of the Microsoft iSCSI initiator.

NOTE: The HP LeftHand DSM for MPIO is the only supported multipath solution for the HP LeftHand Storage Solution. Starting with SAN/iQ software release 7.0, you must insta…
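The "Name_" convention in Table 3 simply appends an incrementing number to a base name. A minimal Python sketch of that pattern (illustrative only; the base name below is hypothetical):

from itertools import count

def default_names(base):
    """Yield base_1, base_2, ... as new elements are created."""
    return (f"{base}_{n}" for n in count(1))

snapshot_schedule = default_names("AdmLgdaily")
print(next(snapshot_schedule))  # AdmLgdaily_1
print(next(snapshot_schedule))  # AdmLgdaily_2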
…Common uses for SmartClone volumes:
• Deploy large quantities of virtual machine clones, including virtual servers and virtual desktops.
• Copy production data for use in test and development environments.
• Clone database volumes for data mining.
• Create and deploy boot-from-SAN images.

Prerequisites

• You must have created a management group, cluster, and at least one volume.
• You must have enough space on the SAN for the configuration you are planning.
• You must be running SAN/iQ software version 8.0 or later.

Glossary

Table 58 lists terms and definitions used for the SmartClone volumes feature. The illustration in Figure 117 on page 264 shows how the SmartClone volumes and related elements look in the CMC.

Table 58 Terms used for SmartClone features

SmartClone volume
  A volume created using the SmartClone process. In Figure 117, the volume C class_1 is a SmartClone volume.

Clone point
  The snapshot from which the SmartClone volumes are created. The clone point cannot be deleted. In Figure 117, the snapshot C _SCsnap is the clone point.

Shared snapshot
  Shared snapshots occur when a clone point is created from a newer snapshot that has older snapshots below it in the tree. Shared snapshots can be deleted. In Figure 117, the snapshots C _snap1 and C _snap2 are shared snapshots.

Map view
  Tab that displays the relationships between clone points and SmartClone volumes. See the map view in Fig…
Change the description of a group .......................................... 126
Changing administrative group permissions ................................. 126
Adding users to an existing group ........................................... 127
Removing users from a group ................................................ 127
Deleting administrative groups .............................................. 127
Using SNMP ..................................................................... 129
SNMP on the DL 380 and DL 320s (NSM 2120) ............................ 129
Getting there ................................................................... 129
Enabling SNMP agents ........................................................ 129
Community strings for DL 380 and DL 320s (NSM 2120) ................ 130
Enabling an SNMP agent ..................................................... 130
Editing access control entries ................................................ 131
Deleting access control entries .............................................. 131
Using the SNMP MIB …
…n, 184
backing up storage node configuration, 46
  restoring, 46
Configuration Interface
  deleting NIC bond in, 344
  configuring frame size in, 344
  configuring network connection in, 343
  configuring TCP speed and duplex in, 344
  connecting to, 341
  creating administrative users in, 343
  resetting DSM configuration in, 345
  resetting NSM to factory defaults in, 345
Configuration Summary
  overview, 74
  reading for management group, 75
configuration summary
  configuration guidance, 174
  management group summary roll-up, 174
configurations
  RAID, 57
Configuring iSCSI
  single host, 339
configuring
  dedicated boot devices, 53
  disabled network interface, 107
  Failover Managers, 190
  frame size in Configuration Interface, 344
  IP address manually, 91
  multiple storage nodes, 40
  network connection in Configuration Interface, 343
  network interface bonds, 100
  network interfaces, 91
  NIC speed and duplex, 108
  RAID, 56
  storage nodes, 37
  TCP speed and duplex in Configuration Interface, 344
  virtual IP address, 210
  virtual manager, 204
connecting to the Configuration Interface, 341
converting temporary space from application-managed snapshots, 253
copying
  storage node configurations, 40
  volumes from snapshots, 250
creating. See adding
  administrative users in Configuration Interface, 343
  new administrative group, 27
  NIC bond, 101
  SmartClone volumes, 276
  volumes using the wizard, 38
creating storage, 22
CSV f…
…on ESX, and must be installed on network hardware other than the storage nodes in the SAN. Figure 80 shows the Failover Manager installed, configured, and appearing in the CMC.

[Figure 80: Failover Manager in the Available Nodes pool. The CMC Details tab shows the hostname, the model (Failover Manager), the IP address, the SAN/iQ software version, the MAC address, and the logged-in user.]

Once installed and configured, the Failover Manager operates like a storage node in how you add it to a management group, where it serves solely as a quorum tie-breaking manager.

Virtual Managers

A Virtual Manager is added to a management group, as shown in Figure 81, but is not started on a storage node until a failure in the system causes a loss of quorum. Unlike the Failover Manager, which is always running, the Virtual Manager must be started manually on a storage node after quorum is lost. It is designed to be used in 2-node or 2-site system configurations, which are at risk for a loss of quorum.
Access size ................................................................... 309
Access pattern ............................................................... 309
Queue depth ................................................................. 309
Changing the sample interval and time zone ............................ 309
Adding statistics ............................................................. 310
Viewing statistic details .................................................... 312
Removing and clearing statistics .......................................... 312
Removing a statistic ........................................................ 312
Clearing the sample data .................................................. 312
Clearing the display ........................................................ 313
Resetting defaults ........................................................... 313
Pausing and restarting monitoring ........................................ 313
Changing the graph ........................................................ 313
Hiding and showing the graph ............................................ 314
Displaying or hiding a line ................................................. 314
Changing the color or style of a line ..................................... 314
Highlighting a line …
[Figure 97: Management group with virtual manager added. The management group tree shows the virtual manager listed alongside the clusters.]

The virtual manager remains added to the management group until needed.

Starting a virtual manager to regain quorum

Only start a virtual manager when it is needed to regain quorum in a management group. Figure 94 (page 202) illustrates the correct way to start a virtual manager when necessary to regain quorum.

Two-site scenario: one site becomes unavailable

For example, in the 2-site disaster recovery model, one of the sites becomes unavailable. On the site that remains up, all managers must be running. Select one of the storage nodes at that site and start the virtual manager on it. That site then regains quorum and can continue to operate until the other site recovers. When the other site recovers, the managers in both sites reestablish communication and ensure that the data in both sites is resynchronized. When the data is resynchronized, stop the virtual manager to return to the disaster-tolerant configuration.

NOTE: If the unavailable site is not recoverable, you can create a new site with new storage nodes and reconstruct the cluster. Refer to customer support for help with cluster recovery. You must have the serial number of one of your storage nodes when making a support call.

Two-site scenario: communication between the sites is lost

In this scenario, the sites are both operating indepen…
Disk Status Test
  Description: Checks for the presence of all disk drives.
  Pass criteria: All disk drives are present.
  Fail criteria: One or more drives are missing.
  Applies to: DL380, DL320s, HP LeftHand P4500, HP LeftHand P4300.

Disk Temperature Test
  Description: Checks the temperature of all disk drives.
  Pass criteria: The temperature is within normal operating range.
  Fail criteria: The temperature is outside normal operating range.
  Applies to: DL380, DL320s, HP LeftHand P4500, HP LeftHand P4300.

Disk SMART Health Test
  Description: S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) is implemented in all modern disks. A program inside the disk constantly tracks a range of the vital characteristics, including driver, disk heads, surface state, and electronics. This information may help predict hard drive failures.
  Pass criteria: All drives pass the health test.
  Fail criteria: Warning or Failed if one or more drives fails the health test.
  Applies to: DL380, DL320s, HP LeftHand P4500, HP LeftHand P4300.

Generate SMART logs for analysis
  Description: Generates a drive health report.
  Pass criteria: The report was successfully generated.
  Fail criteria: The report was not generated; contact Customer Support.

Generate HP Diagnostic Report for analysis
  Description: Generates a drive health report.
  Pass criteria: The report was successfully generated.
  Fail criteria: The report was not generated; contact Customer Support.

Table 31 List of hardware diagnostic tests and pass/fail criteria for IBM x3650

Fan Test
  Description: Checks the status of all fans.
  Pass criteria: Fan is normal.
  Fail criteria: Fan is faulty or missing.

Power Test
  Description: Checks the status of all po…
…and an optional description. Click OK. The temporary space becomes a volume with the name you assigned. The original snapshot becomes a clone point under the new volume. For more information about clone points, see Rolling back a volume to a snapshot or clone point on page 257.

If you converted temporary space from an application-managed snapshot, use diskpart.exe to change the resulting volume's attributes. For more information, see Making an application-managed snapshot available on page 250.

Delete the temporary space

The snapshot temporary space is deleted when the snapshot is deleted. However, you can manually delete the snapshot temporary space if you need to free up space on the cluster.

Prerequisite: Stop any applications that are accessing the snapshot, and log off all related iSCSI sessions. Note that if you have written any data to the snapshot, that data will be deleted along with the temporary space. If you want to save that data, convert the temporary space to a volume.

1. In the navigation window, select the snapshot for which you want to delete the temporary space.
2. Right-click and select Delete Temporary Space. A warning message opens.
3. Click OK to confirm the delete.

Creating a schedule to snapshot a volume

You can schedule recurring snapshots of a volume. Recurring snapshots of volumes can be scheduled in a variety of frequencies and with a variety of retention policies. You can sche…
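To make the frequency-plus-retention idea concrete, here is a minimal Python sketch (illustrative only, not the SAN/iQ scheduler) of a daily schedule that retains a fixed number of snapshots:

from datetime import datetime, timedelta

def to_delete(snapshot_times, retain):
    """Return the snapshots that fall outside the retention policy,
    keeping only the newest `retain` of them."""
    ordered = sorted(snapshot_times)
    return ordered[:-retain] if len(ordered) > retain else []

# A daily schedule that has run for ten days, retaining one week:
now = datetime(2008, 8, 10)
taken = [now - timedelta(days=d) for d in range(10)]
print(to_delete(taken, retain=7))   # the three oldest snapshots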
…intend to delete all data on the SAN.

CAUTION: When a management group is deleted, all data stored on storage nodes in that management group is lost.

Prerequisites

• Log in to the management group.
• Remove all volumes and snapshots.
• Delete all clusters.

Delete the management group

1. In the navigation window, log in to the management group.
2. Click Management Group Tasks on the Details tab and select Delete Management Group.
3. In the Delete Management Group window, enter the management group name and click OK.

After the management group is deleted, the storage nodes return to the Available Nodes pool.

Setting the management group version

If instructed by customer support, you can set the management group version back to a previous version of the software. Setting the version back requires using a special command-line option before opening the CMC. Customer support will instruct you on using the command-line option.

10 Using specialized managers

The SAN/iQ software provides two specialized managers that are used in specific situations. The Failover Manager is used in 2-node and in Multi-Site SAN configurations to support automatic quorum management in the SAN. A virtual manager is added to a management group, but is not started on a storage node until a failure in the system causes a loss of quorum. It is designed to be used in 2-node or 2-site system configurations, which are at risk f…
TCP status ..................................................................... 107
The TCP status tab .......................................................... 107
Managing settings on network interfaces ................................ 108
Requirements ................................................................. 108
Changing speed and duplex settings ...................................... 108
Requirements ................................................................. 109
Best practice .................................................................. 109
To change the speed and duplex .......................................... 109
Changing NIC frame size ................................................... 109
Requirements ................................................................. 109
Best practices ................................................................. 110
Jumbo frames ................................................................. 110
Editing the NIC frame size ................................................. 110
Changing NIC flow control .................................................. 111
Requirements ................................................................. 111
Enabling NIC flow control ................................................... 111
Using a DNS server …
…clone copies of your production LUNs and mount them in another environment. Then you can run new software, install upgrades, and perform other maintenance tasks. When testing of the new software or upgrades is complete, either redirect your application to the SmartClone volume you have been using, or delete the SmartClone volume and proceed with the installation or upgrades in the production environment.

Data mining

Say you want to track monthly trends in web requests for certain types of information. Monthly, you create a SmartClone volume of the web server transactions, mount the volume to a different server, and analyze and track usage or other trends over time. This monthly SmartClone volume takes minimal additional space on the SAN while providing all the data from the web server database.

Clone a volume

In addition to the cases described above, SmartClone volumes can be created as needed for any purpose. These volumes provide exact copies of existing volumes, without the need to provision additional space until and unless you want to write new data.

Planning SmartClone volumes

Planning SmartClone volumes takes into account multiple factors, such as space requirements, server access, and naming conventions for SmartClone volumes.

Space requirements

SmartClone volumes inherit the size and replication level of the source volume and snapshot. When creating a SmartClone volume, you first create a snapshot of the source volume and create the Smar…
Generate IBM Support logs for analysis
  Description: Generates IBM Support logs when requested by Customer Support.
  Pass criteria: The logs were successfully generated.
  Fail criteria: The logs were not generated; contact IBM Support.
  Applies to: IBM x3650.

Table 32 List of hardware diagnostic tests and pass/fail criteria for VSA

Disk Status Test
  Description: Checks for the presence of all disk drives.
  Pass criteria: All disk drives are present.
  Fail criteria: One or more drives are missing.
  Applies to: VSA.

Table 33 List of hardware diagnostic tests and pass/fail criteria for Dell 2950, NSM 2060, and NSM 4150

Fan Test
  Description: Checks the status of all fans.
  Pass criteria: Fan is normal.
  Fail criteria: Fan is faulty or missing.
  Applies to: Dell 2950, NSM 2060, NSM 4150.

Power Test
  Description: Checks the status of all power supplies.
  Pass criteria: Supply is normal.
  Fail criteria: Supply is faulty or missing.
  Applies to: Dell 2950, NSM 2060, NSM 4150.

Temperature Test
  Description: Checks the status of all temperature sensors.
  Pass criteria: Temperature is within normal operating range.
  Fail criteria: Temperature is outside normal operating range.
  Applies to: Dell 2950, NSM 2060, NSM 4150.

Cache Status
  Description: Checks the status of the disk controller caches.
  Pass criteria: Cache is normal.
  Fail criteria: Cache is corrupt.
  Applies to: Dell 2950, NSM 2060, NSM 4150.

Cache BBU Status
  Description: Checks the status of the Battery Backup Unit (BBU).
  Pass criteria: BBU is normal and not charging or testing.
  Fail criteria: BBU is charging, testing, or faulty.
  Applies to: Dell 2950, NSM 2060, NSM 4150.

Disk Status Test
  Description: Checks for the pres…
…network of storage nodes.

Disabling SNMP

1. Log in to the storage node and expand the tree.
2. Select the SNMP category from the tree.
3. On the SNMP General tab window, click SNMP General Tasks and select Edit SNMP Settings. The Edit SNMP Settings window opens.
4. Select Disable SNMP Agent.
5. Note that the Agent Status field now shows disabled. The SNMP client information remains listed, but cannot be used.

Adding SNMP traps

Prerequisite: You must have first enabled an SNMP agent in order to use SNMP traps. The DL 380 and the DL 320s (NSM 2120) always have an SNMP agent enabled.

Add SNMP traps to have an SNMP tool send alerts when a monitoring threshold is reached. To enable SNMP traps, you add a Trap Community String, which is used for client-side authentication.

1. Log in to the storage node and expand the tree.
2. Select the SNMP category from the tree.
3. Select the SNMP Traps tab.
4. Click SNMP Trap Tasks and select Edit SNMP Traps.
5. Click Add to add trap recipients.
6. Enter the IP address or hostname for the SNMP client that is receiving the traps.
7. Click OK.
8. Repeat Step 5 through Step 7 for each host in the trap community.
9. Click OK on the Edit SNMP Traps window when you are finished adding hosts.

Editing trap recipients

1. Log in to the storage node and expand the tree.
2. Select the SNMP category from the tree.
3. Select the SNMP Traps tab. The SNMP Traps Settings window opens.
4. Click SNMP Tr…
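For a sense of what a trap looks like from the client side, here is a minimal sketch using the third-party pysnmp package (an assumption for illustration; the manual does not prescribe a tool), sending a generic coldStart trap over SNMP v2c to a recipient configured as above:

from pysnmp.hlapi import (
    CommunityData, ContextData, NotificationType, ObjectIdentity,
    SnmpEngine, UdpTransportTarget, sendNotification,
)

# Trap recipient and community string as configured on the SNMP Traps
# tab; the address and community here are hypothetical.
errorIndication, _, _, _ = next(sendNotification(
    SnmpEngine(),
    CommunityData("public"),                  # trap community string
    UdpTransportTarget(("10.0.61.16", 162)),  # trap recipient, UDP 162
    ContextData(),
    "trap",
    NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.1")),  # coldStart
))
print(errorIndication or "trap sent")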
…network for the sample interval.

Network Bytes Total
  Bytes read and written over the network for the sample interval.

Storage Server Total Latency
  Average time, in milliseconds, for the RAID controller to service read and write requests.

Monitoring and comparing multiple clusters

You can open the Performance Monitor for a cluster in a separate window. This lets you monitor and compare multiple clusters at the same time. You can open one window per cluster and rearrange the windows to suit your needs.

1. From the Performance Monitor window, right-click anywhere and select Open in Window. The Performance Monitor window opens as a separate window. Use the Performance Monitor Tasks menu to change the window settings.
2. When you no longer need the separate window, click Close.

Performance monitoring and analysis concepts

The following general concepts are related to performance monitoring and analysis:
• Workloads
• Access type
• Access size
• Access pattern
• Queue depth

Workloads

A workload defines a specific characterization of disk activity. These characteristics are access type, access size, access pattern, and queue depth. Application and system workloads can be analyzed, then described, using these characteristics. Given these workload characterizations, test tools like Iometer can be used to simulate these workloads.

Access type

Disk accesses are either read or write operations. In the absence of disk or controller caching…
…nformation about the statistics selected for monitoring. The table values update based on the sample interval setting. To view a statistic's definition, hold your mouse pointer over a table row. Table 68 on page 306 defines each column of the Performance Monitor table.

Table 68 Performance Monitor table columns

Display    Toggles the display of the graph line on or off.
Line       Shows the current color and style for the statistic's line on the graph.
Name       Name of the cluster, storage node, or volume being monitored.
Server     For volumes and snapshots, shows the server that has access.
Statistic  The statistic you selected for monitoring.
Units      Unit of measure for the statistic.
Value      Current sample value for the statistic.
Minimum    Lowest recorded sample value of the last 100 samples.
Maximum    Highest recorded sample value of the last 100 samples.
Average    Average of the last 100 recorded sample values.
Scale      Scaling factor used to fit the data on the graph's 0 to 100 scale. Only the line on the graph is scaled; the values in the table are not scaled. The values in the log file, if you export the file, are also not scaled.

For information about adding statistics, see Adding statistics on page 310.

Understanding the performance statistics

You can select the performance statistics that you want to monitor. For clusters, volumes, and snapshots, the statistics being reported are based on c…
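A minimal Python sketch (illustrative only) of the rolling Minimum, Maximum, and Average columns and the Scale factor described in Table 68:

from collections import deque

class RollingStat:
    """Track the last 100 samples of one monitored statistic."""
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)

    def add(self, value):
        self.samples.append(value)

    def row(self):
        s = self.samples
        return {"value": s[-1], "minimum": min(s),
                "maximum": max(s), "average": sum(s) / len(s)}

iops = RollingStat()
for v in (857.8, 768.2, 733.1, 1092.7):
    iops.add(v)
print(iops.row())

# Scale affects only the plotted line: with a scale of 0.01, an IOPS
# sample of 857.8 is drawn at 8.578 on the graph's 0-100 axis.
print(857.8 * 0.01)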
…ng a SmartClone volume or a Remote Copy before the roll back, to preserve that data.

Rolling back a volume from a snapshot or clone point

You can roll back a specific volume from a clone point. The clone point selected will roll back to the parent volume it is listed under in the navigation view.

1. Log in to the management group that contains the volume that you want to roll back.
2. In the navigation window, select the snapshot to which you want to roll back. Review the snapshot Details tab to ensure you have selected the correct snapshot.
3. Click Snapshot Tasks on the Details tab and select Roll Back Volume. A warning message opens. This message lists all the possible consequences of performing a roll back, including:
   • Existing iSCSI sessions present a risk of data inconsistencies.
   • All newer snapshots will be deleted.
   • Changes to the original volume since the snapshot was created will be lost.
   If you do not have connected iSCSI sessions or newer snapshots, those issues will not be reflected in the message.

[Figure 115: Rolling back a volume. The warning reads: "There is at least one connected iSCSI session associated with volume C. To avoid possible data inconsistencies, you must stop any applications and log off these sessions from the initiators before proceeding. All changes to volume C since snapshot C _backup1 was created will be lost. The following snapshots will be permanently deleted…"]
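A minimal Python sketch (not SAN/iQ code) of the roll-back semantics the warning describes: every snapshot newer than the roll-back target is deleted, except clone points, which cannot be deleted. The snapshot names are hypothetical.

def roll_back(snapshots, target, clone_points=frozenset()):
    """`snapshots` is ordered oldest to newest and contains `target`.
    Returns (kept, deleted)."""
    newer = snapshots[snapshots.index(target) + 1:]
    deleted = [s for s in newer if s not in clone_points]
    kept = [s for s in snapshots if s not in deleted]
    return kept, deleted

snaps = ["C_backup1", "C_backup2", "C_backup3"]
print(roll_back(snaps, "C_backup1"))
# (['C_backup1'], ['C_backup2', 'C_backup3'])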
…ng a license key. Copy and paste the appropriate license key for that storage node into the window.

NOTE: When you cut and paste the license key into the window, ensure that there are no leading or trailing spaces in the box. Such spaces prevent the license key from being recognized.

Click OK. The license key appears in the Feature Registration window.

[Figure 165: Viewing license keys. The Feature Registration window lists each storage node's feature key (a MAC address) with the license key entered for it.]

Click OK again to exit the Feature Registration window.

Saving license key information

For record keeping, save the license information to a file when you have entered all the license keys.

1. Click Registration Tasks on the management group Registration tab.
2. Select Save Information to File from the menu.
3. Navigate to the location where you want to save the license key information.
4. Enter a name for the registration information file and click Save. The license information is saved to a .tx…
Requirements for using a virtual manager .................................. 201
Configuring a cluster for disaster recovery ................................. 202
Best practice ..................................................................... 202
Configuration steps ............................................................ 202
Adding a virtual manager ..................................................... 204
Starting a virtual manager to regain quorum .............................. 205
Starting a virtual manager ................................................... 206
Verifying virtual manager status ............................................ 206
Stopping a virtual manager .................................................. 207
Removing a virtual manager ................................................. 207
Working with clusters ......................................................... 209
Clusters and storage node capacity ........................................ 209
Prerequisites .................................................................... 209
Creating additional clusters …
57 Viewing the Disk Setup tab in a HP LeftHand P4500 ................... 79
58 Diagram of the drive bays in a HP LeftHand P4500 .................... 79
59 Viewing the Disk Setup tab in a HP LeftHand P4300 ................... 80
60 Diagram of the drive bays in a HP LeftHand P4300 .................... 80
61 Viewing a power off or missing disk ...................................... 84
62 Viewing a power off or missing disk ...................................... 85
63 RAID rebuilding on the RAID Setup tab .................................. 86
64 Disk rebuilding on the Disk Setup tab .................................... 87
65 Active-passive in a two-switch topology with server failover ......... 96
66 Active-passive failover in a four-switch topology ....................... 96
67 Link aggregation dynamic mode in a single-switch topology ......... 98
68 Adaptive Load Balancing in a two-switch topology .................... 100
69 Searching for the bonded storage node on the network .............. 102
70 Viewing a new active…
…Planning volumes on page 237 to ensure that you configure snapshots correctly. When creating a snapshot, you define the following characteristics or options.

Table 54 Snapshot characteristics

Application-managed snapshot
  This option quiesces VSS-aware applications on the server before SAN/iQ creates the snapshot. This option requires the use of the VSS Provider; for more information, see Requirements for application-managed snapshots on page 248. If the VSS Provider is not installed, SAN/iQ will let you create a point-in-time consistent snapshot, not using VSS.

Snapshot name
  The name of the snapshot that is displayed in the CMC. A snapshot name must be from 1 to 127 characters and is case sensitive. Snapshots have a default naming convention, enabled when the CMC is installed. You can change or disable this naming convention; see Setting naming conventions on page 34 for information. The following are illegal characters: …

Description
  Optional. A description of the snapshot.

Assign and Unassign Servers
  Optional. Configure server access to the snapshot.

Planning snapshots

When planning to use snapshots, consider their purpose and size. If you are planning to use schedules to snapshot a volume, see Storage nodes and cluster capacity on page 211 and Table 55 on page 246 for approximate data change rates for some common applications.
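A minimal Python sketch of validating a snapshot name against the stated rules (1 to 127 characters, case preserved). The illegal-character set below is a placeholder assumption, since the actual list is elided above:

ILLEGAL_CHARS = set("<>\"'")   # placeholder only; use the documented list

def valid_snapshot_name(name):
    return (1 <= len(name) <= 127
            and not set(name) & ILLEGAL_CHARS)

print(valid_snapshot_name("AdmLgdaily_1"))   # True
print(valid_snapshot_name(""))               # False: too short
print(valid_snapshot_name("x" * 128))        # False: too long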
…node individually.

Changing the storage node hostname

The storage node arrives configured with a default hostname. Use these steps to change the hostname of a storage node.

1. In the navigation window, log in to the storage node.
2. On the Details tab, click Storage Node Tasks and select Edit Hostname.
3. Type the new name and click OK.
4. Click OK.

NOTE: Add the hostname and IP pair to the hostname resolution methodology employed in your environment, for example, DNS or WINS.

Locating the storage node in a rack (NSM 260, DL 320s [NSM 2120], DL 380, and HP LeftHand P4500)

Set ID LED On turns on lights on the physical storage node so that you can physically locate that storage node in a rack.

1. Select a storage node in the navigation window and log in.
2. Click Storage Node Tasks on the Details tab and select Set ID LED On. The ID LED on the front of the storage node illuminates a bright blue. Another ID LED is located on the back of the storage node. When you click Set ID LED On, the status changes to On.
3. Select Set ID LED Off when you are finished. The LED on the storage node turns off.

Backing up and restoring the storage node configuration

Back up and restore storage node configurations to save the storage node configuration to a file, for use in case of a storage node failure. When you back up a storage node configuration, the configuration information about the storage node…
…Temperature Test
  Description: Checks the status of all temperature sensors.
  Pass criteria: Temperature is within normal operating range.
  Fail criteria: Temperature is outside normal operating range.

Voltage Test
  Description: Checks the status of all voltage sensors.
  Pass criteria: Voltage is within normal operating range.
  Fail criteria: Voltage is outside normal operating range.

Cache Status
  Description: Checks the status of the disk controller caches.
  Pass criteria: Cache is normal.
  Fail criteria: Cache is corrupt.

Cache BBU Status
  Description: Checks the status of the battery backed-up cache.
  Pass criteria: The BBU is normal and not charging or testing.
  Fail criteria: The BBU is charging, testing, or faulty.

Disk Status Test
  Description: Checks for the presence of all disk drives.
  Pass criteria: All disk drives are present.
  Fail criteria: One or more drives are missing.

Disk Temperature Test
  Description: Checks the temperature of all disk drives.
  Pass criteria: The temperature is within normal operating range.
  Fail criteria: The temperature is outside normal operating range.

Disk SMART Health Test
  Description: S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) is implemented in all modern disks. A program inside the disk constantly tracks a range of the vital characteristics, including driver, disk heads, surface state, and electronics. This information may help predict hard drive failures.
  Pass criteria: All drives pass the health test.
  Fail criteria: Warning or Failed if one or more drives fails the health test.

Generate SMART logs for analysis
  Description: Generates a drive health report.
  Pass criteria: The report was successfully generated.
  Fail criteria: The report was not generated; contact Customer Support.
…diagnostics and generate a hardware report for the storage node. The Hardware category provides a report of system statistics, hardware, and configuration information. See Using the hardware information report on page 150.

Active monitoring overview

Alerts actively report on the condition of the hardware and storage network of a HP LeftHand Storage Solution. The Alerts category in the tree under every storage node includes multiple types of information and reporting capabilities. Review configuration information, save log files, set up email alerting, and review a log of alerts generated automatically by the operating system.

Use alerts to:
• View real-time statistical information about the storage node.
• View and save log files.
• Set up active monitoring of selected variables.
• Set up email notification.
• View alerts.

You can also set up SNMP traps to have an SNMP tool send alerts when a monitoring threshold is reached. For more information, see Adding SNMP traps on page 133.

Using alerts for active monitoring

Use active monitoring to track the health and operation of the storage node and management group. Active monitoring allows you to set up notification through emails, alerts in the CMC, and SNMP traps. You can choose which variables to monitor, and choose the notification methods for alerts related to the monitored variables. Different storage nodes contain different sets of variables that can be monitored. For a detail…
…nstall the Failover Manager.

• Only one Failover Manager per management group.
• You cannot have a virtual manager and a Failover Manager in the same management group.
• You cannot run the Failover Manager inside a virtual Windows machine with VMware Server running.

Minimum system requirements for using with VMware Server or Player

• 10/100 Ethernet
• 384 MB of RAM
• 5 GB of available disk space
• VMware Server 1.x

Minimum system requirements for using with VMware ESX Server

• VMware ESX Server version 3.x
• 1024 MB of RAM

CAUTION: Do not install the Failover Manager on the HP LeftHand Storage Solution, since this would defeat the purpose of the Failover Manager.

Planning the virtual network configuration

Before you install the Failover Manager on the network, plan the virtual network configuration, including the following areas:
• Design and configuration of the virtual switches and network adapters.
• Failover Manager directory, host name, and IP address.

The Failover Manager should be on the iSCSI network. If you do not have an existing virtual machine network configured on the iSCSI network vswitch, create a new virtual machine network for the Failover Manager.

Upgrading the 7.0 Failover Manager

The Failover Manager released with SAN/iQ software version 7.0 cannot be upgraded or patched. To upgrade to the Failover Manager released with SAN/iQ software version 8.0, you must uninstall the previous version. Fr…
…nt to change the order of servers in the list.
• Delete the NTP server.

Delete an NTP server

1. Select an NTP server in the list on the Time tab window.
2. Click Time Tasks and select Delete NTP Server.
3. Click OK on the confirmation window. The list of NTP servers refreshes, showing the remaining available servers.

Changing the order of NTP servers

The window displays the NTP servers in the order you added them. The server you added first is the one accessed first when time needs to be established. If this NTP server is not available for some reason, the next NTP server that was added, and is preferred, is used for time serving.

To change the order of access for time servers:
1. Delete the server whose place in the list you want to change.
2. Add that same server back into the list. It is placed at the bottom of the list, and is the last to be accessed.

Editing the date and time

You initially set the date and time when you create the management group, using the Management Groups, Clusters, and Volumes wizard. If necessary, you can edit these settings later.

1. Select the management group.
2. Select the Time tab to bring it to the front.
3. Click Time Tasks and select Edit Date, Time, Time Zone.
4. Change the date and time to the correct date and time for that time zone.
   • In the Date group box, set the year, month, and day.
   • In the Time group box, highlight a portion of the time and increase or decrease it with the arrows. You may also type in the…
…nted on various platform models.

Table 9 Information in the RAID setup report

Device Name
  The disk sets used in RAID. The number and names of devices varies by platform and RAID level.

Device Type
  The RAID level of the device.

Device Status
  The RAID status of the device.

Subdevices
  The number of disks included in the device. For example, in an NSM 160, RAID10 displays a Device Type of RAID10 and subdevices as 4.

Virtual RAID devices
  If you are using the VSA, the only RAID available is virtual RAID. After installing the VSA, virtual RAID is configured automatically if you first configured the data disk in the VI Client. HP LeftHand Networks recommends installing VMware ESX Server on top of a server with a RAID5 or RAID6 configuration.

Devices configured in RAID0

As illustrated, if RAID0 is configured, the physical disks are combined into a single RAID disk, except for the NSM 260. In the NSM 260 with RAID0 configured, each physical disk operates as a separate RAID0 disk.

[Figure 15: RAID0 on an NSM 160. The disks form one striped RAID device.]

[Figure 16: RAID0 on an NSM 260.]

[Figure 17: RAID0 on a DL380. The disks form one striped RAID device.]

[Figure 18: RAID0 on an IBM x3650. The disks form one striped RAID device.]

Devices configured in RAID10

If RAID10 is configured on storage nodes, the physica…
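For a sense of how the RAID level affects usable capacity, here is a minimal Python sketch (simplified arithmetic, not taken from the manual) for a node with identical disks:

def usable_gb(disks, disk_gb, level):
    if level == "RAID0":            # striping only, no redundancy
        return disks * disk_gb
    if level == "RAID10":           # mirrored pairs, half the raw space
        return disks // 2 * disk_gb
    if level == "RAID5":            # one disk's worth of parity
        return (disks - 1) * disk_gb
    raise ValueError(level)

for lvl in ("RAID0", "RAID10", "RAID5"):
    print(lvl, usable_gb(4, 250.0, lvl), "GB")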
Requirements for active-passive ............................................. 95
Which physical interface is preferred ...................................... 95
Summary of NIC status during failover ..................................... 95
Example network configurations with active-passive ................... 96
How link aggregation dynamic mode works .............................. 97
Requirements for link aggregation dynamic mode ....................... 97
Which physical interface is preferred ...................................... 97
Which physical interface is active .......................................... 97
Summary of NIC states during failover ..................................... 98
Example network configurations with link aggregation dynamic mode . 98
How adaptive load balancing works ........................................ 98
Requirements for adaptive load balancing ................................ 98
Which physical interface is preferred …
[Figure 7: Disk enclosure not found, as shown in the Details tab. The RAID status reads "No Formatted Devices Available".]

When powering off the NSM 4150, be sure to power off the two components in the following order:
1. Power off the system controller from the CMC, as described in Powering off the storage node on page 48.
2. Manually power off the disk enclosure.

When you reboot the NSM 4150, use the CMC as described in Rebooting the storage node on page 48. This process reboots only the system controller.

Rebooting the storage node

1. Select a storage node in the navigation window and log in.
2. Click Storage Node Tasks on the Details tab and select Power Off or Reboot.
3. In the minutes field, type the number of minutes before the reboot should begin. Enter any whole number greater than or equal to 0. If you enter 0, the storage node reboots shortly after you complete Step 5.

NOTE: If you enter 0 for the value when rebooting, you cannot cancel the action. Any value greater than 0 allows you to cancel before the reboot actually takes place.

4. Select Reboot to perform a software reboot without a power cycle.
5. Click OK. The storage node starts the reboot in the specified number of minutes. The reboot takes several minutes.
6. Search for the storage node to reconnect the CMC to the storage node once it has finished rebooting. See Finding the storage nodes on page 37.

Powering off the storage node
…to use advanced features. For example, if you wanted to configure 3 storage nodes in two locations to use with Remote Copy, you license the storage nodes in both the primary location and the remote location.

NOTE: If you remove a storage node from a management group, the license key remains with that storage node. See … for more information about removing storage nodes from a management group.

Registering available storage nodes for license keys

Storage nodes that are in the Available Nodes pool are licensed individually. You license an individual storage node on the Feature Registration tab for that node. The Feature Registration tab displays the following information:
• The storage node feature key, used to obtain a license key.
• The license key for that storage node, if one has been purchased.
• The license status of all the advanced features.

Submitting storage node feature keys

1. In the navigation window, select the storage node from the Available Nodes pool for which you want to register advanced features.
2. Select the Feature Registration tab.
3. Select the Feature Key.
4. Right-click and select Copy.
5. Use Ctrl+V to paste the feature key into a text editing program, such as Notepad.
6. Go to https://webware.hp.com to register and generate the license key.

Entering license keys to storage nodes

When you receive the license keys, add them to the storage nodes.

1. In the…
…of all the storage nodes in the management group.

1. In the navigation window, select the management group for which you want to register advanced features.
2. Select the Registration tab. The Registration tab lists purchased licenses. If you are evaluating advanced features, the time remaining in the evaluation period is listed on the tab as well.

[Figure 162: Registering advanced features for a management group. The Registration tab reports the time left in the evaluation period, warns that after the evaluation period volumes and snapshots attached to clusters containing license violations will become unavailable, and lists the management group version, manager auto-upgrade setting, communications protocol, provisioning and management server versions, and customer information fields (Customer Support ID, Customer Name, Contact Name, Contact Email).]

3. Click Registration Tasks and select Feature Registration from the menu. The Feature Registration window lists all the storage nodes in that management group. The window text reads: "Registration is required. Go to https://webware.hp.com to register and receive your license keys. Have your license entitlement certificate and the feature key(s) provided below available. Use Edit License Key to add/edit the lic…"
…og file as a text file.

The Log Files tab lists two types of logs:
• Log files that are stored locally on the storage node, displayed on the left side of the tab.
• Log files that are written to a remote log server, displayed on the right side of the tab.

NOTE: Save the log files that are stored locally on the storage node. For more information about remote log files, see Using remote log files on page 168.

1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Hardware.
3. Select the Log Files tab.
4. To make sure you have the latest data, click Log File Tasks and select Refresh Log File List.
5. Scroll down the list of Choose Logs to Save, and select the file or files you want to save. To select multiple files, use the Ctrl key.
6. Click Log Files Tasks and select Save Log Files. The Save window opens.
7. Select a location for the file or files.
8. Click Save. The file or files are saved to the designated location.

Using remote log files

Use remote log files to automatically write log files to a computer other than the storage node. For example, you can direct the log files for one or more storage nodes to a single log server in a remote location. The computer that receives the log files is called the Remote Log Target. You must also configure the target computer to receive the log files.

Adding a remote log

1. Select…
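To illustrate the remote-log idea with standard tooling (not SAN/iQ code), here is a minimal Python sketch that sends log records to a remote syslog server, the role the Remote Log Target plays; the target address is an assumption:

import logging
import logging.handlers

logger = logging.getLogger("storage-node")
logger.setLevel(logging.INFO)

# UDP syslog on the remote target; port 514 is the syslog default.
# The target must be configured to accept records from this sender.
handler = logging.handlers.SysLogHandler(address=("10.0.61.16", 514))
logger.addHandler(handler)

logger.info("RAID rebuild complete on disk 3")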
…oice, Use the Following IP Address, and press Enter. The IP Address, Netmask, and Gateway list opens for editing.

7. Tab to each field and enter the appropriate information. Tab to OK and press Enter. After a short pause, another message opens that displays the new IP address. Record this IP address for later use. Gateway is a required field; if you do not have a gateway, enter 0.0.0.0.
8. Tab to OK and press Enter. A confirmation message opens.
9. Press Enter. The network interface is now configured; wait a few seconds. The Available Network Devices window opens.
10. On the Available Network Devices window, tab to Back and press Enter to return to the Configuration Interface menu.
11. Tab to Log Out and press Enter. The Configuration Interface entry window is displayed again.
12. Press Ctrl+Alt to get the cursor back from the Console.

Finishing up with VI Client

1. In the VI Client Information Panel, click the Summary tab.
2. In the General section on the Summary tab, verify that the IP address and host name are correct, and that VMware Tools are running.

NOTE: If VMware Tools show "out of date," they are still running correctly; the "out of date" status is not a problem. VMware Tools are updated with each SAN/iQ software upgrade.

3. In the Inventory panel, right-click the Failover Manager and select Rename.
4. Change the name of the Failover Manager to match…
…Solaris UFS, VMware VMFS.

Block systems and file systems

Operating systems see hard drives, both directly connected (DAS) and iSCSI-connected (SAN), as abstractions known as block devices: arbitrary arrays of storage space that can be read from and written to as needed. Files on disks are handled by a different abstraction: the file system. File systems are placed on block devices, and file systems are given authority over reads and writes to block devices.

iSCSI does not operate at the file system level of abstraction. Instead, it presents the iSCSI SAN volume to an OS, such as Microsoft Windows, as a block device. Typically, a file system is then created on top of this block device so that it can be used for storage. In contrast, an Oracle database can use an iSCSI SAN volume as a raw block device.

Storing file system data on a block system

The Windows file system treats the iSCSI block device as simply another hard drive. That is, the block device is treated as an array of blocks, which the file system can use for storing data. As the iSCSI initiator passes writes from the file system, the SAN/iQ software simply writes those blocks into the volume on the SAN. When you look at the CMC, the used space displayed is based on how many physical blocks have been written for this volume.

When you delete a file, typically the file system updates the directory information, which removes that file. Then the file system notes that the blocks whi…
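A minimal Python sketch (conceptual only, not SAN/iQ code) of why deleting files does not shrink the used space the CMC reports: the SAN only sees which blocks have ever been written, while the file system separately tracks which of those blocks are free for reuse.

class BlockVolume:
    def __init__(self):
        self.written = set()        # blocks the SAN has stored

    def write(self, block):
        self.written.add(block)

    def used_blocks(self):         # what the CMC reports as used space
        return len(self.written)

vol = BlockVolume()
fs_free = set()                     # the file system's free list

for b in (0, 1, 2, 3):              # write a file across four blocks
    vol.write(b)

fs_free.update((2, 3))              # "delete" the file: the file system
                                    # marks blocks free, but never tells
                                    # the SAN
print(vol.used_blocks())            # still 4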
…olumes.

The following concepts are important when setting up clusters and servers in the SAN/iQ software:
• Virtual IP addresses, page 335
• iSNS server, page 336
• iSCSI load balancing, page 336
• Authentication (CHAP), page 337
• iSCSI and CHAP terminology, page 338
• About HP LeftHand DSM for MPIO, page 340

Number of iSCSI sessions

For information about the recommended maximum number of iSCSI sessions that can be created in a management group, see Configuration summary overview on page 174.

Virtual IP addresses

A virtual IP (VIP) address is a highly available IP address which ensures that, if a storage node in a cluster becomes unavailable, servers can still access a volume through the other storage nodes in the cluster. Your servers use the VIP to discover volumes on the SAN. The SAN uses the iqn from the iSCSI initiator to associate volumes with the server.

A VIP is required for a fault-tolerant iSCSI cluster configuration using VIP load balancing or the SAN/iQ HP LeftHand DSM for MPIO. When using a VIP, one storage node in the cluster hosts the VIP, and all I/O goes through the VIP host. You can determine which storage node hosts the VIP by selecting the cluster, then clicking the iSCSI tab.

Requirements for using a virtual IP address

• For standard clusters (not multi-site clusters), storage nodes occupying the same cluster must be in the same subnet address range as the VI…
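A minimal Python sketch (standard library only) of the same-subnet requirement for a standard cluster's VIP; the subnet and addresses below are assumptions for illustration:

from ipaddress import ip_address, ip_network

subnet = ip_network("10.0.60.0/23")
vip = ip_address("10.0.61.100")
nodes = [ip_address(a) for a in ("10.0.60.32", "10.0.61.16", "10.0.61.17")]

assert vip in subnet
print(all(node in subnet for node in nodes))  # True -> valid VIP setup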
…om version 8.0 on, you will be able to upgrade the Failover Manager.

1. Remove the Failover Manager from the management group.
2. Uninstall the Failover Manager, as described in Uninstalling the Failover Manager for VMware Server or Player on page 194.
3. Install the new version of the Failover Manager.
4. Configure the name and IP address.
5. Add the new Failover Manager to the management group.

Using Failover Manager with VMware Server or VMware Player

Installing and configuring the Failover Manager

Install the Failover Manager from the HP LeftHand VSA CD, or download it from the HP LeftHand Networks web site.

Failover Manager configuration

When the Failover Manager is installed, it is automatically configured as follows:
• To auto-start in the event either the VMware Console or the host server reboots.
• The virtual network adapters under the Failover Manager are configured as Bridged network adapters.

After the Failover Manager is installed, it is started in the VMware Console. After it is started, you use the Configuration Interface to set the IP address.

To install the Failover Manager

Install the Failover Manager onto a separate server on the network.

CAUTION: Do not install the Failover Manager on the HP LeftHand Storage Solution, since this would defeat the purpose of the Failover Manager.

Using the HP LeftHand Management DVD

1. Using the HP LeftHand Management DVD…
…om which to ping. Enter the IP address you want to ping and click Ping. If the server is available, the ping is returned in the Ping Results window. If the server is not available, the ping fails in the Ping Results window.

Configuring the IP address manually

Use the TCP/IP Network category to add or change the IP address for a network interface.

1. Select a storage node and open the tree below it.
2. Select the TCP/IP Network category and click the TCP/IP tab.
3. Select the interface from the list for which you want to configure or change the IP address.
4. Click Edit.
5. Select IP address, and complete the fields for IP address, Subnet mask, and Default gateway.
6. Click OK.
7. Click OK on the confirmation message.
8. Click OK on the message notifying you of the automatic log out.

NOTE: Wait a few moments for the IP address change to take effect.

9. Log in to the newly addressed storage node.

Using DHCP

A DHCP server becomes a single point of failure in your system configuration. If the DHCP server goes offline, then IP addresses may be lost.

CAUTION: If you use DHCP, be sure to reserve statically assigned IP addresses for all storage nodes on the DHCP server. This is required because management groups use unicast communication.

To set the IP address using DHCP

1. Select from the list the interface you want to configure for use with DHCP.
2. Click Edit.
3. Select Obtain an address automatically using the DHC…
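The Ping Results check above can be reproduced from any workstation. A minimal Python sketch wrapping the system ping command (the flags shown are for Linux ping; Windows uses -n and -w instead, and the address is a hypothetical example):

import subprocess

def ping(ip, count=1, timeout_s=2):
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), ip],
        capture_output=True,
    )
    return result.returncode == 0   # 0 means at least one reply

print(ping("10.0.60.32"))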
…on from one of two compact flash cards located on the front of the storage node. The storage node boot configuration information is mirrored between the two compact flash cards. If one card fails or is removed, the system can still boot. If you remove and replace one of the cards, you must activate the card to synchronize it with the other card.

NOTE: There must always be at least one active flash card in the storage node. If you are upgrading the SAN/iQ software, a dual boot device storage node must contain both flash cards.

Replacing a dedicated boot device

NSM 160

If a compact flash card fails, first try to activate it on the Boot Devices window. If the card fails repeatedly, replace it with a new one. You can also replace a boot flash card if you have removed the original card to store it as a backup in a remote location.

CAUTION: A flash card from one storage node cannot be used in a different storage node. If a card fails, replace it with a new flash card.

NSM 4150

If a boot hard drive fails, you will see an alert. Replace it with a new drive. The boot device drives support hot swapping and do not require activation.

Removing a boot flash card (NSM 160, NSM 260)

Before you remove one of the boot flash cards from the storage node, deactivate the card in the CMC.

1. On the Boot Devices window, select the flash card that you want to remove.
2. Click Deactivate. The flash card status changes to Inac…
…on them. These storage nodes then run managers, much like a PC runs various services.

Functions of managers

Managers have the following functions:
• Control data replication. (Note: managers are not directly in the data path.)
• Manage communication between storage nodes in the cluster.
• Resynchronize data when storage nodes change states.
• Coordinate reconfigurations as storage nodes are brought up and taken offline.

One storage node has the coordinating manager. You can determine which storage node is the coordinating manager by selecting the management group, then clicking the Details tab. The Status field at the top shows the coordinating manager.

Managers and quorum

Managers use a voting algorithm to coordinate storage node behavior. In this voting algorithm, a strict majority of managers (a quorum) must be running and communicating with each other in order for the SAN/iQ software to function. An odd number of managers is recommended to ensure that a majority is easily maintained. An even number of managers can get into a state where no majority exists: one half of the managers do not agree with the other half. This state is known as a "split brain," and may cause the management group to become unavailable.

For optimal fault tolerance in a single-site configuration, you should have 3 or 5 managers in your management group, to provide the best balance between fault tolerance and performance. The maximum supported n…
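A minimal Python sketch (not SAN/iQ code) of the strict-majority rule, and of why an odd manager count avoids the split-brain state:

def has_quorum(communicating, total):
    """A strict majority of managers must be communicating."""
    return communicating > total // 2

# 4 managers split 2/2 by a network failure: neither half has quorum,
# so the whole management group can become unavailable (split brain).
print(has_quorum(2, 4))             # False on both sides

# 5 managers split 3/2: exactly one side keeps quorum and stays up.
print(has_quorum(3, 5), has_quorum(2, 5))   # True False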
Setting naming conventions .......... 34
Changing naming conventions .......... 34
Creating storage by using the Getting Started Launch Pad .......... 37
Prerequisites .......... 37
Finding the storage nodes .......... 37
Configuring storage nodes .......... 37
Configure storage node categories .......... 38
Creating a volume using the wizard .......... 38
Enabling server access to volumes .......... 39
Continuing with the SAN/iQ software .......... 39
Finding storage nodes on an ongoing basis .......... 39
Turn off Auto Discover for storage nodes .......... 39
Troubleshooting
113 …Clone operation .......... 244
114 Delete multiple snapshots from the volumes and snapshots node .......... 257
115 Rolling back a volume .......... 259
116 New volume with shared clone point .......... 260
117 How SmartClone volumes, clone points, and shared snapshots appear in the CMC .......... 264
118 Duplicate names on duplicate datastores in ESX Server .......... 266
119 Example of using a base name with 10 SmartClone volumes .......... 268
120 Rename SmartClone volume from base name .......... 269
121 Programming cluster with 10 SmartClone volumes, 1 clone point, and the source volumes .......... 270
122 Changing one SmartClone volume changes all associated volumes and snapshots .......... 271
123 SysAdm cluster now has the 10 SmartClone volumes, 1 clone point, and the source volume .......... 271
124 Navigation window with clone point .......... 273
125 Clone point appears under each SmartClone volume .......... 274
126 Navigation window with shared snapshots .......... 275
127
one volume, the replication level for all replicated volumes automatically changes.

Naming convention for SmartClone volumes
A well-planned naming convention helps when you have many SmartClone volumes. Plan the naming ahead of time, since you cannot change volume or snapshot names after they have been created. You can design a custom naming convention when you create SmartClone volumes.

Naming and multiple identical disks in a server
Mounting multiple identical disks to servers typically requires that servers write new disk signatures to them. For example, VMware ESX servers require resignaturing to be enabled and will automatically rename duplicate datastores. Most servers allow the duplicate disks to be renamed.
Figure 118 Duplicate names on duplicate datastores in ESX Server
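Planning the convention ahead of time can be as simple as generating the whole series of names before creating any volumes. The sketch below mirrors the base-name pattern shown in Figure 119; the base name, separator, and zero-padding are illustrative assumptions, not product defaults.

    # Generate a numbered series of SmartClone volume names from a base name.
    # The base name, separator, and padding are illustrative assumptions.
    def smartclone_names(base: str, count: int, width: int = 2) -> list:
        return [f"{base}_{i:0{width}d}" for i in range(1, count + 1)]

    print(smartclone_names("SysAdm", 10))
    # ['SysAdm_01', 'SysAdm_02', ..., 'SysAdm_10']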
environment.
CAUTION: VSA — You cannot clone a VSA after it is in a management group. You must clone a VSA while it is in the Available Nodes pool.
When you create a management group, you must add the first administrative user. This user has full administrative permissions. Add additional users later; see Adding a new administrative user, page 123.
You can use an NTP server, or manually set the date, time, and time zone for the management group. You should know the configuration you want to use before beginning the wizard; see Chapter 5.
A VIP is required for each cluster. VIPs ensure fault-tolerant server access to the cluster and enable iSCSI load balancing; see Configure virtual IP and iSNS for iSCSI, page 210.
A management group must have an optimum number of managers running. The Management Groups, Clusters, and Volumes wizard attempts to set the proper number of managers for the number of storage nodes you use to create the management group.
The storage nodes that are running managers must have static IP addresses, or reserved IP addresses if using DHCP. That is, the IP address must not change while that storage node is a manager.

Getting there
You create a management group using the Management Groups, Clusters, and Volumes wizard. Access the wizard in any of the following ways:
• From the Getting Started Launch Pad, by selecting the Management Groups, Clusters, and Volumes wizard. See Creating storage by using the Getting Started Launch Pad
choose how many copies of data you want to keep on the cluster.
• Replication priority allows you to choose whether data availability or data redundancy is more important in your configuration.

Replication level
Four replication levels are available, depending upon the number of available storage nodes in the cluster. The level of replication you choose also affects the replication priority you can set.

Table 46 Setting a replication level for a volume
• 1 available storage node in the cluster — select None (1 copy of data in the cluster; no replica is created).
• 2 available storage nodes — select None (1 copy; no replica is created) or 2-Way (2 copies of data in the cluster; one replica is created).
• 3 available storage nodes — select None (1 copy), 2-Way (2 copies; one replica), or 3-Way (3 copies of data in the cluster; two replicas are created).
• More than 3 available storage nodes — select None (1 copy), 2-Way (2 copies; one replica), 3-Way (3 copies; two replicas), or 4-Way (4 copies of data in the cluster; 3 replicas are created).

How replication levels work
When you choose 2-Way, 3-Way, or 4-Way replication, data is written to either 2, 3, or 4 adjacent storage nodes in the cluster. The system calculates the actual amount of storage
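For a fully provisioned volume, each replication level multiplies the space reserved on the cluster by the number of copies. The sketch below is illustrative only: it encodes the allowed levels from Table 46 and the size-times-level rule stated later in this chapter for fully provisioned volumes.

    # Allowed replication levels by cluster size (per Table 46) and the
    # space a fully provisioned volume reserves at each level. Illustrative.
    def allowed_levels(storage_nodes: int) -> list:
        # 1 = None, 2 = 2-Way, 3 = 3-Way, 4 = 4-Way
        return [level for level in (1, 2, 3, 4) if level <= min(storage_nodes, 4)]

    def full_provisioned_gb(volume_gb: float, level: int) -> float:
        return volume_gb * level  # size x replication level

    for nodes in (1, 2, 3, 4):
        levels = allowed_levels(nodes)
        reserved = [full_provisioned_gb(10, lvl) for lvl in levels]
        print(f"{nodes} node(s): levels {levels} -> 10 GB volume reserves {reserved} GB")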
options: Would network bonding help me?
• Capacity expansion: Should I add more storage nodes?
• Data placement: Should this volume be on my SATA or my SAS cluster?
The performance data do not directly provide answers, but they let you analyze what is happening and provide support for these types of decisions.
These performance statistics are available on a cluster, volume, and storage node basis, letting you look at the workload on a specific volume and providing data such as throughput, average I/O size, read/write mix, and number of outstanding I/Os. Having this data helps you better understand what performance you should expect in a given configuration. Storage node performance data allows you to easily isolate, for example, a specific storage node with higher latencies than the other storage nodes in the cluster.

Prerequisites
• You must have a cluster with one or more storage nodes, and one or more volumes connected via iSCSI sessions.
• All storage nodes in the management group must have SAN/iQ software version 8.0 or later installed. The management group version on the Registration tab must show 8.0.
• A server must be accessing a volume to read data, write data, or both.

Introduction to using performance information
The Performance Monitor can monitor dozens of statistics related to each cluster. The following sections offer some ideas about the statistics that are available to help you manage your SAN effectively. These
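Several of these statistics are related by simple arithmetic, which is useful when sanity-checking samples exported from the Performance Monitor. For example, average I/O size is total throughput divided by total IOPS, and the read/write mix is each component's share of total IOPS. The sample values below are made up.

    # Derive average I/O size and read mix from sampled counters (made-up values).
    throughput_bytes_per_s = 41_000_000.0  # total throughput, B/s
    read_iops = 600.0
    write_iops = 260.0
    iops_total = read_iops + write_iops

    avg_io_size = throughput_bytes_per_s / iops_total  # bytes per I/O
    read_mix = 100.0 * read_iops / iops_total          # percent reads

    print(f"average I/O size: {avg_io_size / 1024:.1f} KiB")
    print(f"read mix: {read_mix:.1f}%")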
or a loss of quorum.

Definitions
Terms used in this chapter:
• Virtual manager — A manager which is added to a management group but is not started on a storage node until a failure in the system causes a loss of quorum. The virtual manager is designed to be used in specific system configurations which are at risk of a loss of quorum.
• Failover Manager — A specialized manager running as a VMware guest operating system that can act as a quorum tie-breaker node when installed into a third location in the network, to provide for automated failover/failback of the Multi-Site SAN clusters.
• Regular manager — A manager which is started on a storage node and operates according to the description of managers found in Managers overview, page 171.
• Manager — Any of these managers.

Failover Manager overview
The Failover Manager is a specialized version of the SAN/iQ software designed to run as a virtual appliance in a VMware environment. The Failover Manager participates in the management group as a real manager in the system; however, it performs quorum operations only, not data movement operations. It is especially useful in a Multi-Site SAN configuration to manage quorum for the multi-site configuration without requiring additional physical hardware in a site.

Failover Manager requirements
• Static IP address, or a reserved IP address if using DHCP.
• Bridged connection through the VMware Console, or an assigned network in the VI Client.
• Server on which to install
or each file system you plan to mount. Then grow each file system independently.

Planning how many volumes
For information about the recommended maximum number of volumes and snapshots that can be created in a management group, see Configuration summary overview, page 174.

Planning volume types
• Primary volumes are volumes used for data storage.
• Remote volumes are used as targets for Remote Copy for business continuance, backup and recovery, and data mining/migration configurations. See the Remote Copy User Manual for detailed information about remote volumes.
• A SmartClone volume is a type of volume that is created from an existing volume or snapshot. SmartClone volumes are described in Chapter 15, page 263.

Guide for volumes
When creating a volume, you define the following characteristics.

Table 52 Characteristics for new volumes
• Volume Name (Basic tab; applies to both primary and remote volumes) — The name of the volume that is displayed in the CMC. A volume name is from 1 to 127 characters and is case sensitive. The volume name cannot be changed. You can enable and customize a default naming convention for volumes; see Setting naming conventions, page 34, for more information.
• Description (both) — Optional. A description of the volume.
• Size (both) — The logical block storage size of the volume. Hosts and file systems operate as
storage.
• Usable space — Space available for storage after RAID has been configured.
• Provisioned space — Amount of space allocated for volumes and snapshots.
• Used space — Amount of space consumed by volume or snapshot data on this storage node. This value never decreases, though the actual space available may grow and shrink as data is manipulated through a file system, if one is configured on the volume.
For more information, see Measuring disk capacity and volume size, page 233.
Figure 111 Stranded storage in the cluster (Denver-3, a 4.08 TB storage node clustered with 2.72 TB nodes, is flagged because some of its storage space is stranded)

Measuring disk capacity and volume size
All operating systems that are capable of connecting to the SAN via iSCSI interact with two disk space accounting systems: the block system and the native file system (on Windows, this is usually NTFS).

Table 51 Common native file systems
• Windows — NTFS, FAT
• Linux — EXT2, EXT3
• NetWare — NWFS, NSS
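A related point worth keeping in mind when comparing block-level capacities with what a file system reports is the unit: disk capacities such as those shown in the CMC are typically decimal gigabytes (10^9 bytes), while many operating systems report binary gigabytes (2^30 bytes). The conversion is a one-liner; the capacity used below is just an example.

    # Convert decimal GB (10**9 bytes) to binary GiB (2**30 bytes).
    def gb_to_gib(gb: float) -> float:
        return gb * 10**9 / 2**30

    print(f"232.89 GB is about {gb_to_gib(232.89):.1f} GiB")  # ~216.9 GiB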
storage node speed and duplex settings.

Storage node setting and required switch setting (speed/duplex):
• Auto/Auto — Auto/Auto
• 1000/Full — 1000/Full
• 100/Full — 100/Full
• 100/Half — 100/Half
• 10/Full — 10/Full
• 10/Half — 10/Half

NOTE: The VSA does not support changing the speed and duplex settings.

Requirements
• These settings must be configured before creating NIC bonds.
• If you change these settings, you must ensure that BOTH sides of the NIC cable are configured in the same manner. For example, if the storage node is set for Auto/Auto, the switch must be set the same.
• If you edit the speed or duplex on a disabled or failed NIC, the new setting will not be applied until the NIC is enabled or connectivity is restored.

Best practice
Change the speed and duplex settings while the storage node is in the Available Nodes pool and not in a management group.

To change the speed and duplex
1. In the navigation window, select the storage node and log in.
2. Open the tree and select TCP/IP Network.
3. Select the TCP Status tab in the tab window.
4. Select the interface you want to edit.
5. Click TCP/IP Status Tasks and select Edit.
6. Select the combination of speed and duplex that you want.
7. Click OK. A series of status messages displays. Then the changed setting displays in the TCP status report.
NOTE: You can also use the Configuration Interface to edit the TCP speed and duplex. See Setting
order, from newest to oldest.

Requirements for application-managed snapshots
For single snapshots, you can create application-managed snapshots. Application-managed snapshots use the VSS Provider to quiesce VSS-aware applications before creating the snapshot. The following are required for application-managed snapshots:
• SAN/iQ software version 8.0 or later.
• CMC or CLI, latest update.
• HP LeftHand P4000 Solution Pack, specifically the HP LeftHand P4000 VSS Provider, latest update, installed on the application server. Refer to the HP LeftHand P4000 Windows Solution Pack User Manual.
• Management group authentication set up for the VSS Provider. Refer to the HP LeftHand P4000 Windows Solution Pack User Manual.
• An application on the server that is VSS-aware.
• A server set up in SAN/iQ with an iSCSI connection; see Chapter 17, page 289.
Creating an application-managed snapshot using SAN/iQ is the same as creating any other snapshot; however, you must select the Application-Managed Snapshot option. For information about creating snapshots, see Creating a snapshot, page 248.

Creating snapshots for volume sets
The snapshot creation process for application-managed snapshots differs only when an application has associated volumes. Associated volumes are two or more volumes used by an application (a volume set). For example, you may set up Exchange to use two volumes to support a StorageGroup: one for mailbox data and one for logs.
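Snapshots can also be created from a script on the application server by calling the SAN/iQ command line interface, as described for scripting snapshots earlier in this manual. The sketch below shells out to the CLI from Python. The command and parameter spellings follow the createSnapshot style documented in the CLiQ User Manual, but they should be verified against your installed version; the volume name, snapshot name, manager IP, and credentials are all placeholders.

    # Create a snapshot by invoking the SAN/iQ CLI (CLiQ) from a script.
    # Verify the command and parameter names against your CLiQ User Manual;
    # the names and credentials below are placeholders.
    import subprocess

    cmd = [
        "cliq", "createSnapshot",
        "volumeName=ExchLogs",        # placeholder volume name
        "snapshotName=ExchLogs_SS1",  # placeholder snapshot name
        "login=10.0.61.16",           # IP of a manager in the management group
        "userName=admin",
        "passWord=secret",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)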
Figure 40 Viewing the Disk Setup tab in an NSM 160
Figure 41 Diagram of the drive bays in the NSM 160

Viewing disk status for the NSM 260
For the NSM 260, the disks are labeled 1 through 12 in the Disk Setup window and correspond with the disk drives, from left to right, 1, 2, 3, 4 (top row), 5, 6, 7, 8 (middle row), and 9, 10, 11, 12 (bottom row), top to bottom, when you are looking at the front of the NSM 260. For the NSM 260, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
Figure: Disk Setup tab for the NSM 260, listing twelve 465.66 GB SATA disks
support a hot spare.

RAID50 on the NSM 4150
RAID50 combines distributing data blocks across the disks in a RAID5 set with striping data across multiple RAID5 sets. RAID50 combines data redundancy (RAID5) with the performance boost of striping (RAID0). The total capacity of the NSM 4150 in RAID50 is the combined capacity of each RAID5 set in the storage node. For RAID50, the NSM 4150 is configured with three RAID5 sets. If the disks are 750 GB, the total raw capacity for that NSM 4150 equals 9 TB (12 × the single-disk capacity).

RAID6
RAID6 may be thought of as RAID5 with dual parity. The dual parity of RAID6 provides fault tolerance for two drive failures in each of two RAID sets. Each array continues to operate with up to two failed drives. RAID6 significantly reduces the risk of data loss if a second hard disk drive fails while the RAID array is rebuilding.

Parity and storage capacity in RAID6
In RAID6, data is striped on a block level across a set of drives, as in RAID5, but a second set of parity is calculated and written across all the drives in the set. RAID6 provides extremely high data fault tolerance and can withstand multiple simultaneous drive failures. A RAID6 implementation has the storage capacity of N−2 drives. A storage node configured with RAID6 has a total available storage capacity of 66% of the total system capacity. For example, in a 9 TB system, 6 TB is available for storage and 3 TB is used for overhead.
Figure: RAID6 data and parity striped across the disks in a set
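The capacity arithmetic in this section is easy to verify with a few lines of code. The sketch below is illustrative only: it assumes one parity drive per RAID5 set and two parity drives per RAID6 set, and it uses the 12-drive, 750 GB example above (decimal TB).

    # Usable capacity for RAID5 sets, RAID50, and RAID6 (illustrative).
    def raid5_usable_gb(drives: int, disk_gb: float) -> float:
        return (drives - 1) * disk_gb  # one drive of parity per set

    def raid50_usable_gb(sets: int, drives_per_set: int, disk_gb: float) -> float:
        return sets * raid5_usable_gb(drives_per_set, disk_gb)

    def raid6_usable_gb(sets: int, drives_per_set: int, disk_gb: float) -> float:
        return sets * (drives_per_set - 2) * disk_gb  # two parity drives per set

    disk_gb = 750.0
    print(f"raw, 12 drives: {12 * disk_gb / 1000:.2f} TB")                      # 9.00
    print(f"RAID50, 3 x 4:  {raid50_usable_gb(3, 4, disk_gb) / 1000:.2f} TB")   # 6.75
    print(f"RAID6, 2 x 6:   {raid6_usable_gb(2, 6, disk_gb) / 1000:.2f} TB")    # 6.00

The RAID6 figure of 6.00 TB matches the 66% usable capacity quoted above for a 9 TB system.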
Troubleshooting — Storage nodes not found .......... 39
Changing which storage nodes appear in the navigation window .......... 40
Configuring multiple storage nodes .......... 40
2 Working with storage nodes .......... 43
Storage node configuration categories .......... 43
Storage node configuration category definitions .......... 43
Storage node tasks .......... 43
Working with the storage node .......... 44
Logging in to and out of storage nodes .......... 44
Automatic log in .......... 44
Logging out of a storage node .......... 44
Changing the storage node hostname .......... 44
Locating the storage node in a rack (NSM 260, DL 320s, NSM 2120, DL 380, and HP LeftHand …) .......... 45
Backing up and restoring the storage node configuration ..........
your initiator is not on the list, do not enable load balancing. For more information about iSCSI load balancing, see iSCSI load balancing, page 336.
CAUTION: Using a non-compliant iSCSI initiator with load balancing can compromise volume availability during iSCSI failover events.
6. If you want to use iSCSI load balancing and your initiator is compliant, select the check box to enable load balancing.
7. Select the authentication method, either CHAP not required or CHAP required. For more information, see Authentication (CHAP), page 337.
8. In the Initiator Node Name field, enter the iqn string. Open your iSCSI initiator and look for the string there. You can copy the string and paste it into the field. For more information, see iSCSI and CHAP terminology, page 338.
9. If you are using CHAP, complete the fields necessary for the type of CHAP you intend to configure, as shown in Table 66.

Table 66 Entering CHAP information in a new server
• 1-way CHAP — CHAP name; Target secret (minimum of 12 characters).
• 2-way CHAP — CHAP name; Target secret (minimum of 12 characters); Initiator secret (minimum of 12 characters, must be alphanumeric).

10. Click OK. The server connection displays in the management group in the navigation window. You can now assign this server connection to volumes, giving the server access
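The secret-length rules in Table 66 are easy to get wrong when configuring many servers. The sketch below is not product code; it simply encodes the constraints stated above so they can be checked before entering the secrets.

    # Validate CHAP secrets against the constraints in Table 66 (illustrative).
    def valid_target_secret(secret: str) -> bool:
        return len(secret) >= 12

    def valid_initiator_secret(secret: str) -> bool:
        # 2-way CHAP: minimum of 12 characters, alphanumeric
        return len(secret) >= 12 and secret.isalnum()

    print(valid_target_secret("longTargetSecret"))   # True
    print(valid_initiator_secret("short"))           # False: too short
    print(valid_initiator_secret("Abc123Def456"))    # True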
• Provisioned for volumes — Amount of space allocated for volumes. For fully provisioned volumes, this is the size × the replication level. For thinly provisioned volumes, the amount of space allocated is determined by the system.
• Provisioned for snapshots — Amount of space allocated for snapshots, and temporary space if required. This value is zero until at least one snapshot has been created. If all snapshots are deleted, this value returns to zero.
• Not provisioned — Amount of space remaining in the cluster that has not been allocated for storage. This value decreases as volumes and snapshots are created, or as thinly provisioned volumes grow.
• Total space — Combined space available in the cluster for storage, volumes, and snapshots.
• Max provisioned space — Total space that volumes and snapshots can grow to fill. Note that in the case of overprovisioning, this value can exceed the physical capacity of the SAN.
The used space decreases when you delete volumes, snapshots, or temporary space from the SAN. The Cluster Summary Used Space can also decrease when a volume is moved. Deleting files or data from client applications does not decrease the used space.

Volume use summary
The Volume Use window presents detailed information about the volume characteristics that affect the utilization of the cluster. The table lists the volumes and snapshots, and the space and utilization totals for the cluster
window opens.
Figure 157 Add Statistics window
4. From the Select Object list, select the cluster, volumes, and storage nodes you want to monitor. Use the CTRL key to select multiple objects from the list.
5. From the Select Statistics options, select the option you want:
• All Statistics — Adds all available statistics for each selected object.
• Selected Statistics from List — Lets you select the statistics you want from the list below it. The list is populated with the statistics that relate to the selected objects. Use the CTRL key to select multiple statistics from the list.
6. If you selected the Selected Statistics from List option, select the statistics you want to monitor.
7. Click Add Statistics. The statistics you selected are listed in the Added Statistics list. If you selected a statistic that is already being monitored, a message displays letting you know that the statistics will not be added again.
8. Click OK.

Viewing statistic details
In addition to what you see in a table row
Figure 87 Notification of taking volumes offline
1. Stop server or host access to the volumes in the list.
2. Click Shut Down Group. The management group shuts down and disappears from the CMC.

Start the management group back up
When you are ready to start up the management group, simply power on the storage nodes for that group.
1. Power on the storage nodes that were shut down.
2. Use Find in the CMC to discover the storage nodes. When the storage nodes are all operating properly, the volumes become available and can be reconnected with the hosts or servers.

Restarted management group in maintenance mode
In certain cases, the management group may start up in maintenance mode. Maintenance mode status usually indicates that either the management group is not completely restarted or the volumes are resynchronizing. When the management group is completely operational and the resynchronizing is complete, the management group status changes to normal mode. Some situations which might cause a management group to start up in maintenance mode include the following:
• A storage node becomes unavailable, and the management group is shut down while that storage node is repaired or replaced. After the storage node is repaired or replaced and the management group is started up, the management group remains in maintenance mode while the repaired or replaced storage node is resynchronizing
Disk Setup tab.
3. Select the disk in the list to power off.
4. Click Disk Setup Tasks and select Power Off Disk.
5. Click OK on the confirmation message.
6. Physically replace the disk drive in the storage node. See the hardware documentation for the storage node.

Manually powering on the disk in the CMC
When you insert the new disk into the storage node, the disk must be powered on from the Storage category Disk Setup tab. Until the disk is powered on, it is listed as Off or Missing in the Status column, and the other columns display dotted lines. Figure 61 shows a representation of a missing disk in a storage node.
Figure 61 Viewing a powered-off or missing disk
1. In the navigation window, select the storage node in which you replaced the disk drive.
2. Select the Storage category in the tree.
3. Click the Disk Setup tab.
4. Select the disk in the list to power on.
5. Click Disk Setup Tasks and select Power On Disk.
6. Click OK on the confirmation message.

Volume restriping
After the disk
Figure 110 Viewing the Node Use tab

Table 50 Information on the Node Use tab
• Name — Hostname of the storage node.
• Raw space — Total amount of disk capacity on the storage node. The Raw Space column also shows the effect of putting storage nodes of different capacities in the same cluster. For example, in Figure 111, page 233, Denver-3 shows the raw space value in bold with the note that some storage space is stranded. Stranded storage occurs when the storage nodes in a cluster are not all the same capacity. Storage nodes with greater capacity only operate to the capacity of the lowest-capacity storage node in the cluster. The remaining capacity is considered stranded, and the Raw Space column shows a bolded value for the higher-capacity storage nodes. The stranded storage space can be reclaimed by equalizing the capacity of all the nodes in the cluster.
• RAID configuration — RAID level configured on the storage node.
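Stranded space is easy to quantify: each node contributes usable capacity only up to that of the smallest node in the cluster, and the remainder is stranded. The sketch below is illustrative; the node capacities echo the example in the figure but are otherwise made up.

    # Usable vs. stranded raw capacity in a mixed-capacity cluster (illustrative).
    def cluster_capacity(raw_tb):
        usable_per_node = min(raw_tb)          # smallest node sets the limit
        usable = usable_per_node * len(raw_tb)
        stranded = sum(raw_tb) - usable
        return usable, stranded

    usable, stranded = cluster_capacity([2.72, 2.72, 4.08])
    print(f"usable: {usable:.2f} TB, stranded: {stranded:.2f} TB")
    # usable: 8.16 TB, stranded: 1.36 TB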
separate switches.

Which physical interface is preferred?
Because the logical interface uses both NICs for data transfer, neither of the NICs in an Adaptive Load Balancing bond is designated as preferred.

Which physical interface is active?
When the Adaptive Load Balancing bond is created, if both NICs are plugged in, both interfaces are active. If one interface fails, the other interface continues operating. For example, suppose Motherboard:Port1 and Motherboard:Port2 are bonded in an Adaptive Load Balancing bond. If Motherboard:Port1 fails, then Motherboard:Port2 remains active. Once the link is fixed and Motherboard:Port1 is operational, it becomes active again, and Motherboard:Port2 remains active.

Table 21 Example Adaptive Load Balancing failover scenario and corresponding NIC status
1. Adaptive Load Balancing bond0 is created; Motherboard:Port1 and Motherboard:Port2 are both active. — Bond0 is the master logical interface; Motherboard:Port1 is Active; Motherboard:Port2 is Active.
2. The Motherboard:Port1 interface fails. Because Adaptive Load Balancing is configured, Motherboard:Port2 continues operating. — Motherboard:Port1 status becomes Passive (Failed); Motherboard:Port2 status remains Active.
3. The Motherboard:Port1 link failure is repaired. — Motherboard:Port1 resumes Active status; Motherboard:Port2 remains Active.

Summary of NIC states during failover
Table 22 shows the states
…active-passive bond .......... 102
71 Verifying interface used for SAN/iQ communication .......... 103
72 Viewing the status of an active-passive bond .......... 103
73 Viewing the status of a link aggregation dynamic mode bond .......... 104
74 Searching for the unbonded storage node on the network .......... 105
75 Verifying interface used for SAN/iQ communication .......... 106
76 Selecting the SAN/iQ software network interface and updating the list of managers .......... 116
77 Viewing the list of manager IP addresses .......... 117
78 Opening the hardware information window .......... 151
79 Viewing the hardware information for a storage node .......... 151
80 Failover manager in the available nodes pool .......... 173
81 Virtual manager added to a management group .......... 173
82 Configuration summary is created when the first management group is configured .......... 174
83 Understanding the summary graphs .......... 176
84 Warning when items in the management group are reaching
specialized managers.

11 Working with clusters
Within a management group, you create subgroups of storage nodes called clusters. A cluster is a grouping of storage nodes from which you create volumes. Volumes seamlessly span the storage nodes in the cluster. Think of a cluster as a pool of storage: you add storage to the pool by adding storage nodes, and you then carve volumes and snapshots out of the pool. Before creating a cluster, make sure you are familiar with the iSCSI information in Chapter 22, page 335.

Clusters and storage node capacity
Clusters can contain storage nodes with different capacities. However, all storage nodes in a cluster operate at a capacity equal to that of the smallest-capacity storage node.

Prerequisites
• Before you create a cluster, you must have created a management group.

Creating additional clusters
When you create a management group, you create the first cluster in that management group. Use the following steps to create additional clusters in existing management groups.

Prerequisites
• An existing management group.
• At least one storage node in the management group that is not already in a cluster.

Number of storage nodes in clusters
For information about the recommended maximum number of storage nodes that can be added safely to a cluster, see Configuration summary overview, page 174, or Chapter 9, page 171.

To create additional clusters
1. Log in to the management group for which
Figure 173 Adding an initiator secret for 2-way CHAP, shown in the Microsoft iSCSI Initiator
CAUTION: Without the use of shared storage access (host clustering or clustered file system) technology, allowing more than one iSCSI application server to connect to a volume concurrently, without cluster-aware applications and/or file systems in read/write mode, could result in data corruption.
NOTE: If you enable CHAP on a server, it applies to all volumes for that server.

Best practice
In the Microsoft iSCSI Initiator, target and initiator secrets are not displayed. Keep a separate record of the iSCSI Initiator CHAP information and the corresponding server information.

About the HP LeftHand DSM for MPIO
If you are using the S
replacing the disk, you must power on the newly replaced disk in the CMC.

Replacing disks in RAID0 configurations
In RAID0 configurations, you must power off the faulty or failed disk in the CMC before you physically replace the disk in the chassis. After physically replacing the disk, you must power on the newly replaced disk in the CMC. See the best practice checklist for single disk replacement in RAID0, page 82.
CAUTION: If you remove a disk from a RAID0 configuration, all the data on the storage node will be lost. The best practice checklist describes how to prepare for a single disk replacement in RAID0.

Preparing for a disk replacement
Use this section if you are replacing a single disk under the following conditions:
• You know which disk needs to be replaced through SAN/iQ monitoring.
• When viewed in the Disk Setup tab, the Drive Health column shows Marginal (replace as soon as possible) or Faulty (replace right away).
• RAID is still on, though it may be degraded, and a drive is inactive.
Use the instructions in the Replacing disks appendix, page 327, for these situations:
• RAID has gone off.
• You are unsure which disk to replace.
The instructions in the appendix include contacting Customer Support for assistance in identifying the disk that needs to be replaced or, if more than one disk must be replaced, the sequence in which they should be replaced.

To prepare for disk replacement
How you prepare
SAN
    planning for SAN improvements, 300
    using Performance Monitor, 297
    workload characterization example, 298
SAN/iQ DSM for MPIO, 335
saving
    diagnostic reports, 145
    history of one variable, 144
    log files of management group configurations, 184
    log files for technical support, 167
    log files of storage node configurations, 46
    monitored variable log file, 143, 144
scaling factor, changing, 315
scheduled snapshots, 254
    pausing or resuming, 256
    requirements for, 254
scripting evaluation, 319
    backing out of, 320
    turning off, 319
    turning on, 319
searching for storage nodes, 29, 39
security
    administrative, 171
    of storage resources, 7
selecting alerts to monitor, 136
server access, SmartClone volumes, 267
servers
    access to volumes and snapshots, 289
    adding to management groups, 290
    assigning to volumes and snapshots, 292, 293
    defined, 32
    deleting, 292
    editing, 291
    editing assignments to volumes and snapshots, 294
    prerequisites for, 290
    prerequisites for assigning to volumes, 293
servers, DNS
    adding, 112
    editing IP or domain name, 113
    removing, 114
servers, iSNS, adding, 210
servers, NTP, 120
    editing, 120
    preferred/not preferred, 120
set ID LED, 45
setting
    IP address, 91
    local bandwidth, 183
    RAID rebuild rate, 69
setting date and time, 119
    for management group, 119
    overview, 119
    procedure, 119, 121
    refreshing for management group, 119
    setting time zone, 122
    time zone on storage node, 119, 121, 122
    with NTP
multiple volumes that share a common snapshot, called a clone point. They share this snapshot data on the SAN. SmartClone volumes can be used to duplicate configurations or environments for widespread use, quickly and without consuming disk space for duplicated data. Use the SmartClone process to create up to 25 volumes in a single operation. Repeat the process to create more volumes, or use the CLI to create larger quantities in a single scripted operation.

What are SmartClone volumes?
SmartClone volumes can be created instantaneously and are fully featured, writable volumes. The only difference between regular volumes, snapshots, and SmartClone volumes is that SmartClone volumes are dependent on the clone point, that is, the snapshot they are created from. Additionally, they may minimize space used on the SAN. For example, you create a volume with a specific OS configuration. Then, using the SmartClone process, you create multiple volumes with access to that same OS configuration, and yet you only need a single instance of the configuration. Only as additional data is written to the different SmartClone volumes do those volumes consume additional space on the SAN. The space you save is reflected on the Use Summary tab in the Cluster tab window, described in Cluster use summary, page 228. Multiple SmartClone volumes can be managed individually, just like other volumes. SmartClone volumes can be used long term in production environments.
Examples of co
preferred NIC resumes operation — When the preferred NIC resumes operation, data transfer resumes on the preferred NIC.

Link Aggregation Dynamic Mode — The logical interface uses both NICs simultaneously for data transfer. This configuration increases network bandwidth, and if one NIC fails, the other continues operating normally. To use Link Aggregation Dynamic Mode, your switch must support 802.3ad.
CAUTION: Link Aggregation Dynamic Mode requires plugging both NICs into the same switch. This bonding method does not protect against switch failure.

Adaptive Load Balancing (ALB) — The logical interface balances data transmissions through both NICs to enhance the functionality of the server and the network. Adaptive Load Balancing automatically incorporates fault tolerance features as well.

Best practices
• Adaptive Load Balancing is the recommended bonding method, as it combines the benefits of the increased transmission rates of 802.3ad with the network redundancy of Active-Passive. Adaptive Load Balancing does not require additional switch configuration.
• Verify and/or change the Speed, Duplex, Frame Size, and Flow Control settings for both interfaces that you plan to bond.
• Link Aggregation Dynamic Mode does not protect against switch failure, because both NICs must be plugged into the same switch. Link Aggregation Dynamic Mode provides bandwidth gains, because data is transferred over both NICs simultaneously
405. r use the configuration backup file to restore the configuration of the failed storage node to the replacement node You may also need to manually configure RAID network routes and if you apply the same configuration backup file to more than one storage node a unique IP address You must also complete the manual configuration before adding the replacement storage node to the management group and cluster 1 In the navigation window select the storage node from the Available Nodes pool 46 Working with storage nodes gt Oo ON AH 11 12 Click Storage Node Tasks on the Details tab and select Back Up or Restore Click Restore In the table select the storage node you want to restore You may select multiple storage nodes to restore from the table Select the radio button Install file on selected storage nodes one at a time Recommended Click Browse to navigate to the folder where the configuration backup file is saved Select the file to restore and click Open Backup File Review the version and description to be sure that you are restoring the correct file Click Install When the restoration is complete the Save to File and Close buttons on the Install Status window become enabled To save a log file of the restore operation before rebooting click Save to File Click Close to finish restoring the configuration The storage node reboots and the configuration is restored to the identical configuration as that
Cluster Tasks and select Edit Cluster. If there are no storage nodes in the management group available to add to the cluster, the Add Nodes button is grayed out.
1. Click Add Nodes.
2. Select one or more storage nodes from the list.
3. Click OK.
4. Click OK again in the Edit Clusters window. A confirmation message opens, describing the restripe that happens when a storage node is added to a cluster.
5. Click OK to finish adding the storage node to the cluster.

Removing a storage node from a cluster
You can remove a storage node from a cluster only if the cluster contains sufficient storage nodes to maintain the existing volumes and their replication level. See Guide for volumes, page 238, for details about editing volumes.
1. In the Edit Cluster window, select a storage node from the list.
2. Click Remove Nodes. In the navigation window, that storage node moves out of the cluster but remains in the management group.
3. Click OK when you are finished.
NOTE: Changing the order of the storage node list causes a full cluster restripe.

Changing or removing the virtual IP
Any time you add, change, or remove the virtual IP address for iSCSI volumes, you are changing the configuration that servers are using. You should rebalance the iSCSI sessions after making the change.

Preparing servers
• Quiesce any applications that are accessing volumes in the cluster.
• Log off the active sessions in the iSCSI initiator for those volumes.
407. r provided by DHCP will remain on the DNS tab You can remove or change this DNS server Getting there 1 In the navigation window select a storage node and log in 2 Open the tree and select the TCP IP Network category 3 Select the DNS tab Adding the DNS domain name Add the name of the DNS domain in which the storage node resides 1 Click on DNS Tasks and then select Edit DNS Domain Name 2 Type in the DNS domain name 3 Click OK when you are finished Adding the DNS server Add up to three DNS servers for use with the storage node 112 Managing the network Click on DNS Tasks and then select Edit DNS Server Click Add and type the IP address for the DNS server Click OK Repeat Step through Step 3 to add up to three servers aS SES Use the arrows on the Edit DNS Servers window to order the servers The servers will be accessed in the order they appear in the list 6 Click OK when you are finished Adding domain names to the DNS suffixes Add up to six domain names to the DNS suffix list also known as the look up zone The storage node searches the suffixes first and then uses the DNS server to resolve host names On the DNS tab click DNS Tasks and select Edit DNS Suffixes Click Add to display the Add DNS Suffixes window Type the DNS suffix name Use the domain name format Click OK Repeat Step throughStep 4 to add up to six domain names Click OK when you are finished O M fF SY gt Editin
The disk status is one of the following:
• Active — on and participating in RAID.
• Inactive — part of an array and on, but not participating in RAID.
• Off or removed.
• Hot spare — for RAID configurations that support hot spares.
Drive health is one of the following:
• Normal — healthy.
• Marginal — predictive failure status indicating replace as soon as possible.
• Faulty — predictive failure status indicating don't wait to replace.
The remaining columns report the following:
• Safe to Remove — indicates whether it is safe to hot-remove a disk.
• Model — the model of the disk.
• Serial Number — the serial number of the disk.
• Class — the class type of disk, for example SATA 3.0 GB.
• Capacity — the data storage capacity of the disk.

Verifying disk status
Check the Disk Setup window to determine the status of disks and to take appropriate action on individual disks when you are preparing to replace them.

Viewing disk status for the NSM 160
The disks are labeled 1 through 4 in the Disk Setup window (Figure 40) and correspond with the disk drives, from left to right, 1 through 4, as shown in Figure 41. For the NSM 160, the Health and Safe to Remove columns will, respectively, help you assess the health of a disk and whether or not you can replace it without losing data.
Rolling back a volume to a snapshot or clone point .......... 257
New in release 8.0 .......... 257
Requirements for rolling back a volume .......... 258
Prerequisite .......... 258
Rolling back a volume from a snapshot or clone point .......... 258
Choosing a roll back strategy .......... 259
Continue with standard roll back .......... 259
Create a new SmartClone volume from the snapshot .......... 259
Cancel the roll back operation .......... 260
Deleting a snapshot .......... 261
Prerequisites .......... 261
Delete the snapshot .......... 261
15 SmartClone volumes .......... 263
Overview of SmartClone volumes ..........
report details
This section contains more information about the Hardware Information report for the following platforms:
• Table 34, page 152
• Table 35, page 157
• Table 36, page 162
• Table 37, page 163

Table 34 Selected details of the Hardware report for the NSM 160, NSM 260, and VSA
• Last refreshed — Date and time the report was created.
• Hardware information — Date and time the report was created.
• Name or hostname — Hostname of the storage node.
• IP address — IP address of the storage node.
• Software version — Full version number for the storage node software. Also lists any patches that have been applied to the storage node.
• Support key — The Support Key is used by a Technical Support representative to log in to the storage node. Not available for the Demo version.
• NIC data — Information about NICs in the storage node, including the card number, manufacturer, description, MAC address, IP address, mask, gateway, and mode. Mode shows manual, auto, or disabled: manual equals a static IP, auto equals DHCP, and disabled means the interface is disabled.
• DNS data — Information about DNS, if a DNS server is being used, providing the IP address of the DNS servers.
• Memory — Information about RAM memory in the storage node, including, as necessary
resources needed if the replication level is greater than None.

2-Way replication
A cluster with 4 storage nodes is configured for 2-Way replication, and there have been 4 writes to the cluster. Figure 103 illustrates the write patterns on the 4 storage nodes.
Figure 103 Write patterns in 2-Way replication

3-Way replication
A cluster with 4 storage nodes is configured for 3-Way replication, and there have been 4 writes to the cluster. Figure 104 illustrates the write patterns on the 4 storage nodes.
Figure 104 Write patterns in 3-Way replication

4-Way replication
A cluster with 6 storage nodes is configured for 4-Way replication, and there have been 4 writes to the cluster. Figure 105 illustrates the write patterns on the 6 storage nodes.
Figure 105 Write patterns in 4-Way replication

Replication priority
Set the replication priority according to your requirements for data availability or data redundancy for the volume.
• Redundancy mode — Choose the redundancy mode if you require that the volume be replicated in order to be available. The redundancy mode ensures fault tolerance.
• Availability mode — Choose the availability mode, which is the default, if you want your data to be available even if it cannot be replicated. The availability mode ensures that data may remain available to servers even if a storage node becomes unavailable.
Table 47 Storage node availability and volume access by replication
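The adjacent-node pattern can be pictured with a short simulation: write i lands on storage node i modulo N, and each replica lands on the next node around the ring. This sketch only illustrates the pattern shown in Figures 103 through 105; it is not the actual SAN/iQ data placement algorithm.

    # Illustrate adjacent-node replica placement for N nodes and R-way
    # replication. Mimics Figures 103-105; not the real placement algorithm.
    def placements(writes: int, nodes: int, replication: int) -> None:
        for w in range(writes):
            owners = [(w + r) % nodes for r in range(replication)]
            print(f"write {w}: stored on storage nodes {owners}")

    placements(writes=4, nodes=4, replication=2)
    # write 0: stored on storage nodes [0, 1]
    # write 1: stored on storage nodes [1, 2]  ... and so on around the ring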
NIC status during failover with Adaptive Load Balancing .......... 99
Status of and information about network interfaces .......... 107
Setting storage node speed and duplex settings .......... 108
Using default administrative groups .......... 125
Descriptions of group permissions .......... 126
Types of alerts for active monitoring .......... 137
List of monitored variables .......... 138
List of hardware diagnostic tests and pass/fail criteria for NSM 160 and NSM 260 .......... 146
List of hardware diagnostic tests and pass/fail criteria for DL 380, DL 320s, NSM 2120, HP LeftHand P4500, and HP LeftHand P4300 .......... 147
List of hardware diagnostic tests and pass/fail criteria for IBM x3650 .......... 148
List of hardware diagnostic tests and pass/fail criteria for VSA .......... 149
List of hardware diagnostic tests and pass/fail criteria for Dell 2950, NSM 2060, and NSM 4150 ..........
Table 35 Selected details of the Hardware report for the DL 380, DL 320s, HP LeftHand P4500, and HP LeftHand P4300 (continued)
• NIC data — Information about NICs in the storage node: card number, manufacturer, description, MAC address, IP address, mask, gateway, and mode. Mode shows manual, auto, or disabled: manual equals a static IP, auto equals DHCP, and disabled means the interface is disabled.
• DNS data — Information about DNS, if a DNS server is being used, providing the IP address of the DNS servers.
• Memory — Information about RAM in the storage node, including values for total memory and free memory in GB.
• CPU — This section of the report contains details about the CPU, including the model name or manufacturer of the CPU, the clock speed of the CPU, and the cache size.
• Stat — Information about the CPU. CPU seconds shows the number of CPU seconds spent on user tasks, kernel tasks, and in the idle state. Machine uptime is the total time the storage node has been running from initial boot-up.
• Backplane information — This part of the report delivers selected information about the backplane: LEDs, LED support, and the ID LED.
• Drive info — For each drive, reports the model, serial number, and capacity.
• Drive status — For each drive, reports the status
confirm that password. Click OK to finish.

Adding group membership to a user
1. Log in to the management group and select the Administration node.
2. Select a user in the Users table.
3. Click Administration Tasks in the tab window and select Edit User.
4. Click Add in the Member Groups section.
5. Select the groups to which to add the new user.
6. Click OK.
7. Click OK to finish editing the administrative user.

Removing group membership from a user
1. Log in to the management group and select the Administration node.
2. Select a user in the Users table.
3. Click Administration Tasks in the tab window and select Edit User.
4. In the Member Groups section, select the group from which to remove the user.
5. Click Remove.
6. Click OK to finish editing the administrative user.

Deleting an administrative user
1. Log in to the management group and select the Administration node.
2. Select a user in the Users table.
3. Click Administration Tasks in the tab window and select Delete User.
4. Click OK.
NOTE: If you delete an administrative user, that user is automatically removed from any administrative groups.

Managing administrative groups
When you create a management group, two default administrative groups are created. Use these groups and/or create new ones.

Default administrative groups
The two default administrative groups and the permissions granted to those groups are listed
Best practices when changing network characteristics .......... 89
Getting there .......... 90
The TCP/IP tab .......... 90
Identifying the network interfaces .......... 90
Pinging an IP address .......... 91
To ping an IP address .......... 91
Configuring the IP address manually .......... 91
Using DHCP .......... 91
To set IP address using DHCP .......... 92
Configuring network interface bonds .......... 92
Best practices .......... 93
NIC bonding and speed, duplex, frame size, and flow control settings .......... 93
How Active-Passive works .......... 94
Physical and logical interfaces .......... 94
Requirements
416. roup on page 177 e Creating additional clusters on page 209 e Creating a volume page 239 e The effect of snapshots on cluster space on page 226 Using snapshots You create snapshots from a volume on the cluster At any time you can roll a volume back to a specific snapshot You can mount a snapshot to a different server and recover data from the snapshot to that server You can also create a SmartClone volume from a snapshot Snapshots can be used for these cases Source for creating backups Data or file system preservation before upgrading software e Protection against data deletion e File level restore without tape or backup software 245 Source volumes for data mining test and development and other data use Best practice use SmartClone volumes See Chapter 15 on page 263 Single snapshots versus scheduled snapshots Some snapshot scenarios call for creating a single snapshot and then deleting it when it is no longer needed Other scenarios call for creating a series of snapshots up to a specified number or for a specified time period after which the earliest snapshot is deleted when the new one is created snapshots created from a schedule For example you plan to keep a series of daily snapshots for one week up to five snapshots After creating the sixth snapshot the earliest snapshot is deleted thereby keeping the number of snapshots on the volume at five Guide for snapshots Review Pla
417. roup Manager shows Normal then a manager is running and needs to be stopped Stopping a manager 1 To stop a manager right click the storage node in the navigation window and select Stop Manager When the process completes successtully the manager is removed from the Status line in the Storage Node box and the Manager changes to No in the Management Group box 2 If you stop a manager the cluster will be left with an even number of managers To ensure the cluster has an odd number of managers do one of these tasks Start a manager on another storage node or e Add a virtual manager to the management group by right clicking on the management group name in the navigation window and select Add Virtual Manager Repair the storage node Prerequisite If there are non replicated volumes that are blinking you must either replicate them or delete them before you can proceed with this step You see the message shown in Figure 166 in this case 328 Replacing disks appendix Power Off Disk 5 x The following volumes and snapshots will lose data if you power off disk 5 This table is updated every 5 seconds Name Type Status Replication Level Contributing Factors iSCSI Session v2 500 NONE Volume Normal None not replicated No Tasks v The volumes are not replicated or in a data process e g Deleting Restriping or Resyncing and powering off this disk will cause the above volumes and snapshots to lose d
418. …groups. Click Administration Tasks in the tab window and select Edit Group.

Administrative groups can have:
• Different levels of access to the storage node, such as read/write, and
• Access to different management capabilities for the storage node, such as creating volumes.

When you create a group, you also set the management capabilities available to members of that group. The default setting for a new group is Read Only for each category. Click the permission level for each function for the group you are creating (see Table 26 for a description of the permission levels), then click OK to finish.

Adding users to an existing group
1. Log in to the management group and select the Administration node.
2. Click Administration Tasks in the tab window and select Edit Group.
3. Click Add in the Users section. The Add Users window opens with a list of administrative users.
4. Select one or more users to add to the group.
5. Click Add.
6. Click OK to finish.

Removing users from a group
1. Log in to the management group and select the Administration node.
2. Click Administration Tasks in the tab window and select Edit Group.
3. Select one or more users to remove from the group.
4. Click Remove.
5. Click OK to finish.

Deleting administrative groups
Delete all users from a group before you delete the group.
1. Log in to the management group and select the Administration node.
2. Click Administration Tasks in the tab win…
419. …to register and purchase a license for the advanced features you want to continue using. The advanced features are listed below:
• Multi-Node Virtualization and Clustering: clustered storage nodes that create a single pool of storage
• Managed Snapshots: recurring scheduled snapshots of volumes
• Remote Copy: scheduled or manual asynchronous replication of data to remote sites
• Multi-Site SAN: automatic synchronous data mirroring between sites

Evaluating advanced features
Advanced features are active and available when you install and configure your system.

30-day evaluation period
When you use any feature that requires registration, a message opens asking you to verify that you want to enter a 30-day evaluation period:

"WARNING: You are about to use an unlicensed feature. You may evaluate this feature without a license for up to 30 days. If you do not register the feature within that time, any volumes or snapshots created using this feature will become unavailable. Click OK to continue with the operation. Click Cancel to exit the operation without making changes."

Figure 159: Verifying the start of the 30-day evaluation period

During this evaluation period, you may configure, test, and modify any feature. At the end of the 30-day evaluation period, if you do not purchase a license key, then all volumes and snapshots associated with the feature become…
420. … For example, 3 storage nodes in cluster c3 are closer to the recommended maximum of storage nodes per cluster than the 43 iSCSI sessions are to the recommended maximum of iSCSI sessions per management group.

[Figure 83: Understanding the summary graphs] The Configuration Summary provides a roll-up summary of the configuration for volumes, snapshots, and storage nodes in the SAN. This summary also provides guidance for recommended maximum configurations that are safe for performance and scalability; exceeding the recommended maximums may result in volume availability issues under certain failover and recovery scenarios. In the example, the summary lists Volumes and Snapshots, iSCSI Sessions (43), Storage Nodes in Management Group, and Storage Nodes in Clusters c1, c2, and c3. The items in the management group are all within optimum limits, and the display is proportional to the optimum limits.

Configuration warnings
When any item nears a recommended maximum, it turns orange and remains orange until the number is reduced to the optimal range. See Figure 84.

[Figure 84: Configuration Summary with a warning] The navigation window shows the CIS1 management group (Available Nodes 3, Servers 4, Administration) with its Configuration Summary. The Config…
421. …firmware updates, and other product resources.

HP websites
For additional information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/storage
• http://www.hp.com/service_locator
• http://www.hp.com/support/manuals
• http://www.hp.com/support/downloads

Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation, please send a message to storagedocsFeedback@hp.com. All submissions become the property of HP.

1 Getting started
Welcome to the SAN iQ software and the CMC. Use the CMC to configure and manage the HP LeftHand Storage Solution. This product guide provides instructions for configuring individual storage nodes, as well as for creating volumes, snapshots, remote copies, and storage clusters of multiple storage nodes.

Using the CMC
Use the CMC to:
• Configure and manage storage nodes
• Create and manage clusters and volumes

Auto discover
When you open the CMC the first time, it automatically searches the subnet for storage nodes. Any storage nodes it discovers appear in the navigation window on the left side of the CMC. If no storage nodes are found, the Find Nodes Wizard opens and takes you through the steps to discover the storage nodes on your network. Disable the Auto Discover feature by clearing the check box on the Find by Subnet and Mask window. For more information, see Finding the s…
422. [Index, continued]
…s: assigning to servers, 292, 293; comparing the load of two, 302; editing server assignments, 294; formatting for use, 295; iSCSI, 337; and CHAP, 337; logging on to, 295; setting as persistent targets, 295
volumes: Access Volume wizard, 39; adding, 239; changing: clusters, 242; descriptions, 241; replication levels, 242; size, 242; controlling server access to, 289; creating SmartClone, 276; creating using the wizard, 38; defined, 32; deleting, 243; editing, 240; making unavailable/priority volume available, 242; mounting file systems on, 237; overview, 237; planning, 222, 237; size, 221; type, 238; prerequisites for: adding, 237; deleting, 243, 244, 258, 261; replication level, 223; replication priority, 225; requirements for: adding, 238; changing, 240; restriping, 84; rolling back, 257; application server requirements, 258; type: primary, 238; remote, 238
volumes and snapshots, availability, 51
VSA: cloning, 178; disk status, 77; frame size, 89; hardware diagnostics, 149; hardware report, 152; network interface, 89; NIC bonding, 89; NIC flow control, 89; RAID levels and default configuration, 55; RAID rebuild rate, 69; reconfiguring RAID, 70; recreate disk, 77; speed/duplex, 89; storage server overloaded, 214; virtual RAID and data safety and availability, 69
W
warnings: check Safe to Remove status; all storage nodes in a cluster operate at a capacity equal to that of the smallest-capacity node, 209; changing RAID erases data, 70; cloning VSA, 178; de…
423. …Tasks > Volume > New SmartClone, or Tasks > Snapshot > New SmartClone, and select the desired volume or snapshot from the list that opens.
• In the navigation window, select the cluster and the volume or snapshot from which to create a SmartClone volume.
3. Right-click the volume or snapshot and select New SmartClone Volumes.
4. If you are creating a SmartClone volume from a volume, click New Snapshot to first create a snapshot of the volume. For more information, see Creating a snapshot on page 248. If you are creating a SmartClone volume from a snapshot, you do not create another snapshot.
5. Next, select the following characteristics:
• Base name for the SmartClone volumes
• Type of provisioning
• Server you want connected to the volumes, and
• Appropriate permission
6. In the Quantity field, select the number of SmartClone volumes you want to create.
7. Click Update Table to populate the table with the number of SmartClone volumes you selected.

[Figure: New SmartClone Volumes window] The Original Volume Setup section shows the management group (TrainingOS), the volume name, and the snapshot name (_SCsnap). The SmartClone Volume Setup section includes Base Name (for example, class), Server (No Server), Quantity (maximum of 25), Provisioning (Thin), and Permission (Read/Write), with an Update Table button that fills in the table of new SmartClone volume names (for example, _SCsnap_1).
424. …is 999 hours, which is about 41 days.
1. From the Performance Monitor window, click the export button on the toolbar to start the export.
2. In the Log File field, enter the name of the file. By default, the system saves the file to the My Documents folder (Windows) or your home directory (Linux), with a file name that starts with "Performance" and includes the cluster name along with the date and time.
3. To select a different location, click Browse.
4. Set the Sample Every fields to the value and units you want for the sample interval.
5. Set the For Duration Of fields to the value and units you want for the monitoring period.
6. Click Add Statistics. The Add Statistics window opens.
7. From the Select Object list, select the cluster, volumes, and storage nodes you want to monitor. Use the CTRL key to select multiple objects from the list.
8. From the Select Statistics options, select the option you want:
• All Statistics: adds all available statistics for each selected object.
• Selected Statistics from List: lets you select the statistics you want from the list below, which is populated with the statistics that relate to the selected objects. Use the CTRL key to select multiple statistics from the list.
9. If you selected the Selected Statistics from List option, select the statistics you want to monitor.
10. Click Add Statistics. The statistics you selected are listed in the Added Statistics list.
11. Click OK. The File Size field displays an estimated file size b…
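To size an export before running it, a quick calculation helps (the numbers below are illustrative, not product limits): exporting 10 statistics sampled every 30 seconds for 8 hours yields 960 rows of 10 values each, while the same export at a 5-second interval yields 5,760 rows. Shorter intervals and longer durations grow the CSV file proportionally, which is why the estimated File Size is worth checking before you click OK.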
425. …in the CMC or through the Configuration Interface, which is accessed through the storage node's serial port, as described in Chapter 23 on page 341.

Table 13: Identifying the network interfaces on the storage node (Ethernet interfaces, by where they are labeled)
• In the TCP/IP Network configuration category in the CMC (TCP/IP tab and TCP Status tab), labeled as one of these: eth0, eth1; Motherboard:Port0, Motherboard:Port1; G4-Motherboard:Port1, G4-Motherboard:Port2; Motherboard:Port1, Motherboard:Port2; for bonded interfaces, BondN or Bond0
• In the Configuration Interface, available through the storage node's serial port: Intel Gigabit Ethernet; Broadcom Gigabit Ethernet; Eth0, Eth1
• On the label on the back of the storage node: represented by a graphical symbol

Pinging an IP address
Because the SAN should be on a private network, you can ping target IP addresses from a storage node using the CMC. You can ping from any enabled interface listed on the TCP/IP tab. You can ping any IP address, such as an iSCSI server or an SNMP monitor server.

To ping an IP address
1. Select a storage node and open the tree below it.
2. Select the TCP/IP Network category.
3. Select the TCP/IP Tasks menu and select Ping from the menu.
4. Select which network interface to ping from, if you have more than one enabled. (A bonded interface has only one interface fr…)
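For example, suppose your application servers sit on a private iSCSI subnet such as 10.0.0.0/16 (an address range used here only for illustration). Pinging an application server's iSCSI address from Motherboard:Port1 confirms the data path through that interface, and pinging the SNMP monitor server confirms the management path. If a ping fails on one enabled interface but succeeds on another, suspect the cabling or switch configuration for the failing port rather than the target.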
426. [Glossary, continued]
…is presented to application servers.
Ghost storage node: When using Repair Storage Node, a ghost storage node acts as a placeholder in the cluster, keeping the cluster intact while you repair or replace the storage node.
Graphical legends window: Describes all the icons used in the CMC. The Items tab displays the icons used to represent virtual items displayed in the CMC; the Hardware tab displays the icons that represent the physical storage units.
Hardware reports: Display point-in-time statistics about the performance and health of the storage node, its drives, and its configuration.
Hostname: The hostname on a storage node is the user-definable name that displays below the storage node icon in the network window. It is also visible when users browse the network.
ID LED: LED lights on the physical storage node so that you can find that node in a rack (NSM 260 only).
iSCSI load balancing: Improves iSCSI performance and scalability by distributing iSCSI sessions for different volumes evenly across storage nodes in a cluster.
License keys: A license key registers a storage node for add-on applications. Each storage node requires its own license key.
Link Aggregation Dynamic Mode: A type of network bonding in which the logical interface uses both NICs simultaneously…
Additional terms defined in this table: Log Files, Management group, Managers, MIB, Monitored variables, Network window, NTP, Parity, Overprovisioned cluster, Point-in-time (consistent) snapshot, Preferred Interface, Quorum, RAID Device, RAID Levels.
427. [Glossary, continued]
Restripe: …to change back to the original configuration. See Stripe.
Resync: When a storage node goes down, writes continue to a second storage node, and when the original storage node comes back up, it needs to recoup the exact data captured by the second storage node.
Rolling back: Replaces the original volume with a read/write copy of a selected snapshot. New for release 8.0: the new volume retains the same name.
SAN iQ interface: When you initially set up a storage node using the Configuration Interface, the first interface that you configure becomes the interface used for SAN iQ software communication.
Server: An application server that you set up in a management group and then assign volumes to, to provide access to those volumes.
Shared snapshots: Shared snapshots occur when a clone point is created from a newer snapshot that has older snapshots below it in the tree. All the volumes created from the clone point display these older snapshots that they share, as well as the clone point.
SmartClone volumes: Space-efficient copies of existing volumes or snapshots. They appear as multiple volumes that share a common snapshot, called a clone point, and they share this snapshot data on the SAN.
Snapshot: A fixed version of a volume for use with backup and other applications.
Additional terms defined in this table: snapshot set, SNMP traps, Storage Server, Stripe, Target Secret, Thin Provisioning, Temporary Space, Time Zone, Trap Community String, Unicast, Virtual IP Address, Virtual Machine, Virtual Manager.
428. …to the volumes. For more information, see Assigning server connections access to volumes on page 292.

Editing server connections
You can edit the following fields for a server connection:
• Description
• Load balancing
• CHAP options
You can also delete a server connection from the management group. For more information, see Deleting server connections on page 292.

CAUTION: Editing a server may interrupt access to volumes. If necessary, or if the server is sensitive to disconnections, stop server access before editing a server.

1. In the navigation window, select the server connection you want to edit.
2. Click the Details tab.
3. Click Server Tasks and select Edit Server.
4. Change the appropriate information. If you change the Enable Load Balancing option, a warning message opens when you finish filling out this window: after changing the iSCSI load balancing configuration, you have to log your servers off and then log them back on to the volumes.

CAUTION: If you change the load balancing or CHAP options, you must log off and log back on to the target in the iSCSI initiator for the changes to take effect.

5. Click OK when you are finished. If you have changed the Enable Load Balancing option, you must log servers off the volumes. This may entail stopping the applications, disconnecting them, reconnecting the applications to the volumes, and then restarting them.

[Figure: Load Balancing Setting Change message]
429. [Contents, continued]
… ..... 187
Deleting a management group ..... 187
Prerequisites ..... 187
Delete the management group ..... 187
Setting the management group version ..... 188
10 Using specialized managers ..... 189
Definitions ..... 189
Failover Manager overview ..... 189
Failover Manager requirements ..... 189
Minimum system requirements for using with VMware Server or Player ..... 189
Minimum system requirements for using with VMware ESX Server ..... 190
Planning the virtual network configuration ..... 190
Upgrading the 7.0 Failover Manager ..... 190
Using Failover Manager with VMware Server or VMware Player ..... 190
Installing and configuring the Failover Manager ..... 191
430. [Contents, continued]
… ..... 212
Preparing servers ..... 212
Changing the virtual IP address ..... 213
Removing the virtual IP address ..... 213
Finishing up ..... 213
Changing or removing an iSNS server ..... 213
Preparing clients ..... 213
Changing an iSNS server ..... 213
Deleting an iSNS server ..... 213
Finishing up ..... 214
Troubleshooting a cluster ..... 214
Auto Performance Protection ..... 214
Auto Performance Protection and the VSA ..... 214
Auto Performance Protection and other clusters ..... 215
Repairing a storage node ..... 216
Prerequisites ..... 216
431. [Contents, continued]
… ..... 132
Installing the LeftHand Networks MIB ..... 132
Disabling the SNMP agent ..... 132
Disabling SNMP ..... 132
Adding SNMP traps ..... 133
Prerequisite ..... 133
Enable SNMP traps ..... 133
Removing trap recipients ..... 133
Disabling SNMP traps ..... 134
… ..... 135
Active monitoring overview ..... 135
Using alerts for active monitoring ..... 135
Getting there ..... 136
Selecting alerts to monitor ..... 136
Adding variables to monitor ..... 136
Editing a monitored variable ..... 137
Removing a variable from active monitoring ..... 138
List of monitored variables ..... 138
Setting CMC notification of alerts ..... …
432. [Contents, continued]
… ..... 99
Which physical interface is active ..... 99
Summary of NIC states during failover ..... 99
Example network configurations with Adaptive Load Balancing ..... 100
Creating a NIC bond ..... 100
Prerequisites ..... 100
Bond guidelines ..... 100
Creating the bond ..... 101
Verify communication setting for new bond ..... 102
Viewing the status of a NIC bond ..... 103
Deleting a NIC bond ..... 104
Verify communication setting after deleting a bond ..... 105
Disabling a network interface ..... 106
Disabling a network interface ..... 106
If the storage node is in a management group ..... 107
Configuring a disabled interface ..... 107
433. [Figure 89: VMware Console opens with Failover Manager installed and registered] The VM summary shows one processor under VMware Server 1.0.2.

The Failover Manager then automatically powers on.

[Figure 90: Failover Manager boots up] The VMware Server Console shows the boot messages (loading vmlinuz and the initrd, then booting the kernel) and instructs you to type start at the "none login:" prompt.

1. At the system login prompt, click in the window, type start, and press Enter. (To get the cursor back from the VMware Console, press Ctrl+Alt.) The Configuration Interface login opens.
2. Press Enter to open the Configuration Interface main menu.
3. Tab to Network TCP/IP Settings and press Enter. The Available Devices window opens.
4. Make sure eth0 is selected and press Enter. The Network Settings window opens.

[Figure 91: Network Settings] Specify the network settings for the port, and be sure the Ethernet cable is plugged into the selected port. Options include Hostname, Disable Interface, Obtain IP address automatically using DHCP, or Use the following IP address (with IP Address, Mask, and Gateway fields), plus OK and Cancel buttons.
434. …snapshot temporary space, see Convert the temporary space on page 253.

13 Using volumes
A volume is a logical entity that is made up of storage on one or more storage nodes. It can be used as raw data storage, or it can be formatted with a file system and used by a host or file server. Create volumes on clusters that contain one or more storage nodes. Before creating volumes, plan your strategies for using the volume: how you plan to use it, its size, how servers will access it, and how you will manage backups of the data, whether through Remote Copy, third-party applications, or both.

Volumes and server access
After you create a volume, assign it to one or more servers to provide access to volumes by application servers. For detailed information, see Chapter 17 on page 289.

Prerequisites
Before you create a volume, you must have created a management group and at least one cluster. For more information, see the following:
• Chapter 9 on page 171
• Creating additional clusters on page 209

Planning volumes
Planning volumes takes into account multiple factors:
• How many volumes do you need?
• What type of volume are you creating, primary or remote?
• What size do you want the volume to be?
• Do you plan to use snapshots?
• Do you plan to use data replication?
• Do you plan to grow the volume or to keep it the same size?

NOTE: If you plan to mount file systems, create a volume f…
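As a brief worked illustration (all sizes here are assumptions, not recommendations): for a database host you might plan one 500 GB primary volume with 2-Way replication, thin provisioning, and a daily snapshot schedule. Answering the planning questions this way implies cluster capacity for up to two copies of the written data, plus roughly a day's worth of changed blocks for each retained snapshot, so the planning factors above translate directly into the capacity you must reserve.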
435. …desired, and click OK.

Remove ghost storage node
Remove the ghost storage node after the data is rebuilt. The data is rebuilt on the storage node when two conditions are met:
• The repaired storage node's disk usage matches the usage of the other storage nodes in the cluster, and
• The status of the volume and snapshots goes back to Normal.
The ghost IP address showing outside the cluster can now be removed from the management group.
1. Right-click the ghost IP address and select Remove from Management Group.
2. If you adjusted (reduced) the Local Bandwidth Priority of the management group while the data was being rebuilt, change it back to the original value.
At this point, the disk(s) in the storage node are successfully replaced, the data is fully rebuilt on that storage node, and the management group configuration (number of managers, quorum, local bandwidth, and so on) is restored to the original settings.

Finishing up
1. Contact Customer Support for an RA number.
2. Return the original disks for failure analysis, using the pre-paid packing slip in the replacement package. Put the RA number on the package as instructed by Customer Support.

22 iSCSI and the HP LeftHand Storage Solution
The SAN iQ software uses the iSCSI protocol to let servers access volumes. For fault tolerance and improved performance, use a VIP and iSCSI load balancing when configuring server access to volumes.
436. [Index, continued]
…size, 242
Insight Manager, 129
installing: Failover Manager, 191; SNMP MIB, 132
interface: administrative users in, 343; configuration, 341; configuring network connection in, 343; connecting to, 341; deleting NIC bond in, 344; resetting NSM to factory defaults, 345
IP addresses: accessing SNMP by, 130; changing iSNS server; configuring for storage node, 9; NTP server, 120; removing iSNS server, 213; using DHCP/BOOTP, 91
iSCSI: and CHAP, 337; and fault tolerance, 335; and iSNS servers, 336; and virtual IP address, 335; as block device, 233; authentication, 337; changing or removing virtual IP, 212; CHAP, 337; clusters and VIP, 335; configuring CHAP, 291, 338; load balancing, 290, 336; load balancing and compliant initiators, 290, 336; logging on to volumes, 295; performance, 336; setting up volumes as persistent targets, 295; single host configuration, 339; terminology in different initiators, 338; volumes and, 337
iSCSI initiators, configuring virtual IP addresses for, 210
iSNS server: and iSCSI targets, 336; adding, 210; changing or removing IP address
L
layout of disks in platforms, 74
license information, 324
license keys, 320
licensing icons, 318
lines: changing the color of, in the Performance Monitor, 314; changing the style of, in the Performance Monitor, 314; displaying or hiding, in the Performance Monitor, 314; highlighting, 315
Link Aggregation Dynamic Mode bond, 97: active interface, 97; during…
437. …disk is powered on, RAID goes to Normal, and volumes start restriping on the entire storage node. Note that there may be a delay of up to a couple of minutes before you can see that volumes are restriping.

Replacing a disk in a non-hot-swap platform (IBM x3650)
Complete the checklist for the RAID1/10 or RAID5 level disk replacement, then follow the procedures below.

CAUTION: IBM x3650: You must always use a new drive when replacing a disk in an IBM x3650. Never reinsert the same drive and power it on again.

In non-hot-swap platforms running RAID1/10 or RAID5, you must power off (in the CMC) the faulty or failed disk before you physically replace the disk in the chassis. After physically replacing the disk, power on (in the CMC) the newly replaced disk.

Manually power off the disk in the CMC for RAID1/10 and RAID5
First power off, in the CMC, the disk you are replacing. Powering off a single disk in RAID1/10 or RAID5 causes RAID to run in a degraded state.
1. In the navigation window, select the storage node in which you want to replace the disk.
2. Select the Storage category.
3. Select the Disk Setup tab.
4. Select the disk in the list to power off.
5. Click Disk Setup Tasks and select Power Off Disk.
6. Click OK on the confirmation message.
7. Physically replace the disk drive in the storage node. See the hardware documentation for the storage node.

Manually powering on the disk in the CMC
When you must insert the new disk…
438. [Contents, continued]
… ..... 57
RAID0 ..... 57
RAID1/10 ..... 57
Storage capacity in RAID1/10 ..... 57
RAID5, RAID5+spare, or RAID50 ..... 58
Parity and storage capacity in RAID5 or 5+spare ..... 58
RAID5 and hot spare disks ..... 58
RAID50 on the NSM 4150 ..... 59
RAID6 ..... 59
Parity and storage capacity in RAID6 ..... 59
Drive failure and hot swapping in RAID6 ..... 59
Explaining RAID devices in the RAID setup report ..... 59
RAID devices by RAID type ..... 60
…ware RAID devices ..... 60
Devices configured in RAID0 ..... 60
Devices configured in RAID10 ..... 61
Devices configured in RAID5 ..... …
439. …speed at which the system can rebuild data (marked as applicable to two of the listed platforms).

(Table, continued) This term / Means this (with applicability columns for the Dell 2950, NSM 2060, and NSM 4150):
• Statistics: Information about the OS RAID for the storage node.
• Boot device statistics: Status information about the boot device: status, capacity in MB, driver version, media used for the device, and model.
• Controller/cache items: Information about the RAID controller card and Battery Backup Unit (BBU), including model number, serial number, cache status, battery status, hardware version, and firmware version.
• Power supply: Shows the type or number of power supplies.
• Power supplies: Status information about the power supplies.
• Sensors: Shows, for the hardware listed, the status, the real (measured) value, and the minimum and maximum values.

Using hardware information log files
The log files that contain hardware information are always saved on the individual storage node. You may want to save those log files to another computer; this way, if the storage node goes offline, the log files will be available. This section explains how to save a hardware information log file to a .txt file on the local storage node or a remote computer. See these sections:
• Saving log files on page 167
• Using remote log files on page 168

Saving log files
If Technical Support requests that you send a copy of a log file, use the Log Files tab to save that log file.
440. [Index, continued]
(changing, continued): …ss, 121; replication levels, 242; replication priority, 242; snapshots, 250; thresholds in a snapshot, 250; user password, 24; volume descriptions, 241; volume size, 242
CHAP: 1-way, 337; 2-way, 337; editing, 29; iSCSI, 337; requirements for configuring, 291, 338; terminology in different initiators, 338; using, 337; volumes and, 337
characteristics of SmartClone volumes, 267
checklist for disk replacement, 82
choosing a RAID configuration, 57
clearing: items in navigation window, 40; statistics sample data, 312
client access to volumes, using Access Volume wizard, 39
clients, adding SNMP, 130
clone, See SmartClone volumes
clone a volume, 265
clone point: and shared snapshots, 274; deleting, 285; utilization of, 282
clustering managers, 7
Clusters, comparing the load of two, 301, 308
clusters: adding, 209; adding storage node, 211; capacity, 211; changing volumes, 242; data replication levels, 223; defined, 32; deleting, 219; editing, 211; overview, 209; prerequisites for, 209; removing storage nodes from, 212; repairing storage node in, 216; troubleshooting, 214
CMC, see Centralized Management Console
communication, interface for SAN iQ communication, 116
community strings: reserved on the DL 320s, 130; reserved on the DL 380, 130
Compliant iSCSI Initiators, 290, 336
compliant iSCSI initiators, 336
configuration categories: storage node, defined, 43; storage nodes, 43
configuration file, backing up management group configuration…
441. [Contents, continued]
Single snapshots versus scheduled snapshots ..... 246
Guide for snapshots ..... 246
Planning snapshots ..... 246
Source volumes for tape backups ..... 247
Best practice ..... 247
Data preservation before upgrading software ..... 247
Best practice ..... 247
Automated backups ..... 247
Best practice ..... 247
Planning how many snapshots ..... 247
Creating a snapshot ..... 248
Requirements for application-managed snapshots ..... 248
Creating snapshots for volume sets ..... 249
Editing a snapshot ..... 250
Mounting or accessing a snapshot ..... 250
Mounting the snapshot on a host ..... 250
Making an application-managed snapshot available ..... …
442. …system, your data will be corrupted or lost. When you increase the size of the volume on the SAN, you must also increase the corresponding volume or LUN on the server side.

Increasing the volume size in Microsoft Windows
After you have increased the volume size on the SAN, you must next expand the Windows partition to use the full space available on the disk. Windows Logical Disk Manager, the default disk management program included in any Windows installation, uses a tool called Diskpart.exe to grow volumes from within Windows. Diskpart.exe is an interactive command-line executable that allows administrators to select and manipulate disks and partitions. This executable and its corresponding documentation can be downloaded from Microsoft if necessary.

Follow the steps below to extend the volume you just increased in the SAN:
1. Launch Windows Logical Disk Manager to rescan the disk and present the new volume size.
2. Open a Windows command line and run diskpart.exe.
3. List the volumes that appear to this host by typing the command list volume.
4. Select the volume to extend by typing select volume <volumenumber>, where <volumenumber> is the corresponding number of the volume in the list.
5. Enter extend to grow the volume to the size of the full disk that has been expanded. Notice the asterisk by the volume and the new size of the volume.
The disk has been extended and is now ready for use. All of the above operations…
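The following abridged console transcript sketches steps 2 through 5. The volume number (1), drive letter, label, and size are assumptions for illustration; match them to the output of list volume on your own server.

   C:\> diskpart

   DISKPART> list volume

     Volume ###  Ltr  Label     Fs    Type       Size
     ----------  ---  --------  ----  ---------  ------
     Volume 1     E   SANVOL    NTFS  Partition  500 GB

   DISKPART> select volume 1

   Volume 1 is the selected volume.

   DISKPART> extend

   DiskPart successfully extended the volume.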
443. …cluster.

Table 49: Information on the Volume Use tab (Category / Description)
• Name: Name of the volume, snapshot, or cluster.
• Size: Size of the volume or snapshot presented to the server. In the case of snapshots, the size is automatically determined and is set to the size of the parent volume at the time the snapshot was created.
• Replication level: Choices include None, 2-Way, 3-Way, or 4-Way. Snapshots inherit the replication level of the parent volume.
• Provisioning type: Volumes can be either full or thin provisioned. Snapshots are always thin provisioned, unless you are viewing a fully provisioned snapshot created in SAN iQ software version 6.6 or earlier. The Provisioning Type column also details space-saving options for the different types of volumes and snapshots you can create on the SAN, as shown in Figure 108 on page 231. The space calculations take into account both the type of volume and the replication level of the volume or snapshot. Use this information to help you manage space use on the SAN:
  • Thin provisioning saves space on the SAN by allocating only a fraction of the configured volume size; the space saved on the SAN is reflected in this column. As data is added to the volume, thin provisioning grows the allocated space, so you can expect the space-saved number to decrease as data on the volume increases.
  • Full provisioning allocates the full amount of space for the size of the volume.
• Reclaimable space is the amount…
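A short worked example shows how replication level and provisioning type interact (figures assumed for illustration): a 100 GB volume with 2-Way replication consumes about 200 GB of cluster space when fully provisioned, because every block is stored on two storage nodes. Thin provisioned, the same volume initially allocates only a fraction of that and grows toward 200 GB as data is written; the difference is what appears as space saved.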
444. …cluster you want. The Performance Monitor window opens.
3. In the toolbar, change the Sample Interval value.
4. In the toolbar, select the Sample Interval Units you want.
The Performance Monitor starts using the new interval immediately.

To change the sample interval and time zone
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Click Performance Monitoring Tasks and select Edit Monitoring Interval. The Edit Monitoring Interval window opens.
4. In the Sample Every fields, enter the interval and select the units you want.
5. Select Local or Greenwich Mean Time.
6. Click OK.
The Performance Monitor starts using the new interval and time zone immediately.

Adding statistics
You can change the monitored statistics for the Performance Monitor as needed. To limit the performance impact on the cluster, you can add up to 50 statistics. The system maintains any changes you make to the statistics only for your current CMC session; it reverts to the defaults the next time you log in to the CMC. For definitions of the available statistics, see Understanding the performance statistics on page 306.
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Click the add statistics toolbar button. The Add Statistics window opens.
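The interval is a direct trade-off between resolution and overhead. For example, a 30-second interval produces 120 samples per statistic per hour; dropping to 5 seconds multiplies both the on-screen detail and the monitoring work by six, so shorter intervals are best reserved for brief troubleshooting sessions.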
445. [Index, continued]
…storage nodes from management group, 187; snapshots, 245; volumes: adding, 237; deleting, 243, 244, 258, 261
primary interface, NICs, 116
primary volumes, 238
protocols, DHCP, 91
provisioning storage, 22: and space allocation, 222; best practices, 222
Q
quorum and managers: starting virtual manager to recover, 205; stopping managers, 82
R
RAID: and data replication, 67; as data replication, 67; benefits, 57; changing RAID erases data, 70; configurations, 57; configurations defined, 57; configuring, 55, 56; default configuration on storage nodes, 55; definitions, 57; device, 59; device status, 56; levels and default configuration for the Dell 2950, NSM 160, NSM 2060, NSM 260, NSM 4150, DL380, HP LeftHand P4300, HP LeftHand P4500, and IBM x3650, 55; managing, 56; parity in RAID5, 58; planning configuration, 67; procedure for reconfiguring, 71; rebuild rate, 69; reconfiguring, 70; reconfiguring requirements, 70; replacing a disk, 83, 85, 86; replication in a cluster, 68; requirements for configuring, 70; resyncing, 214; setting rebuild rate, 70; status, 71; status and data redundancy…
446. [Index, continued]
…system requirements: for Failover Manager on ESX Server; for Failover Manager on VMware Server or Player
T
Tab window, 33
TCP: speed and duplex, frame size, 110; status, 107; status tab, 107; TCP/IP tab, 90
technical support: HP, 27; saving log files for, 167; service locator website, 27
temporary space: deleting, 253; for read/write snapshots, 235, 253; making application-managed snapshot available after converting, 250, 251, 252
thresholds: capacity management and snapshot, 226; changing for a snapshot, 250; requirements for changing in snapshots, 255
time: editing NTP server, 120; NTP servers, preferred/not preferred, 120; selecting NTP, 120; setting with NTP, 120; without NTP, 121; synchronizing on storage nodes, 254; zone, setting on storage node, 119, 121, 122
time zone: changing for the Performance Monitor, 309; setting, 122
Toolbar, Performance Monitor window, 304
toolbar, SmartClone map view, 280
trap recipient, removing, 133
traps: disabling SNMP, 134; editing SNMP recipient, 133; enabling SNMP, 133; SNMP, 133
troubleshooting: network settings to find Failover Manager, 195; startup and shutdown options, 194
troubleshooting clusters: repair storage node, 214; slow I/O, 214
type, See volumes
U
updating: hardware information report, 150; manager IP addresses, 117
upgrading storage node software; upgrading software: copying to the storage node, 50; copying upgrade files…
447. …that the storage node may be inaccessible if you continue. Click OK.

If the storage node is in a management group
If the storage node for which you are disabling the interface is a manager in a management group, a window opens that displays all the IP addresses of the managers in the management group, along with a reminder to reconfigure the application servers that are affected by the update.

Configuring a disabled interface
If one interface is still connected to the storage node but another interface is disconnected, you can reconnect to the second interface using the CMC. See Configuring the IP address manually on page 91. If both interfaces to the storage node are disconnected, you must attach a terminal, or a PC or laptop, to the storage node with a null-modem cable and configure at least one interface using the Configuration Interface. See Configuring a network connection on page 343.

TCP status
Review the status of the TCP interfaces. You can change the speed and duplex, frame size, and NIC flow control of an interface. These changes can take place only on interfaces that are not in a bond.

NOTE: You cannot change the speed, duplex, frame size, or flow control of a VSA.

The TCP Status tab
Review the status of the network interfaces on the TCP Status tab.

Table 23: Status of and information about network interfaces (Column / Description)
• Name: Name of the interface. Entries are bond0…
448. …select Edit to display the Date and Time Configuration window. Check each field on this window to set the time for all storage nodes in this management group. Click Next to create a cluster.

Create cluster and assign a VIP
The following steps are for creating a standard cluster. If you are creating a Multi-Site cluster, see "Creating Multi-Site Clusters and Volumes" in Chapter 2 of the HP LeftHand P4000 Multi-Site HA/DR Solution Pack User Manual.
1. Select Standard Cluster, then click Next.
2. Type a cluster name in the Create a Cluster window.
3. From the list, select the storage nodes to include in the cluster.
4. Click Next to assign virtual IPs.
5. Add the VIP and subnet mask.
6. Click Next to create a volume and finish creating the management group.

Create a volume and finish creating management group
1. Enter a name, replication level, size, and provisioning type for the volume.
2. Click Finish. After a few minutes, a summary window opens, listing details of the new management group, cluster, and volume.
3. Click Close. A message opens notifying you that you must register. This registration is required to use advanced features such as multi-node clusters and Remote Copy. For more information about registering advanced features, see Chapter 19 on page 317.
4. Click OK. The navigation window displays the new management group and cluster, with storage nodes and volumes.
5. As a last step, back up the configuration data of the entire management group…
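For illustration (the addresses here are assumptions, not recommendations): if the cluster's storage nodes use addresses such as 10.0.61.16 and 10.0.61.17, the VIP should be an unused address reachable on the same iSCSI subnet, for example 10.0.61.100 with subnet mask 255.255.0.0. Initiators then discover and log in to volumes through the VIP rather than through any single node's address, which is what allows sessions to fail over between storage nodes.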
449. …a .txt file.

Saving and editing your customer information
This section explains how to save your customer profile, registrations, and licensing information. If you have this saved as a text file and lose a storage node, it can help in the rebuild of a new storage node. Make a customer information file for each management group in your system. First create or edit your customer profile, then save the file to a computer that is not part of your storage system.

Editing your customer information file
Occasionally you may want to change some of the information in your customer profile, for example, if your company moves or contact information changes.
1. In the navigation window, select a management group.
2. Click the Registration tab to open that window.
3. Click Registration Tasks and select Edit Customer Information from the menu.
4. Fill in or change any of the information on this window.
5. Click OK when you are finished.

Saving your customer information
Be sure you have filled in the customer profile window correctly before saving this file. In addition to the customer information, the file you save contains registration and license key information. Save a customer information file for each management group in your storage system.
1. In the navigation window, select a management group.
2. Click the Registration tab.
3. Click Registration Tasks and select Save Information to File from the menu.
4. In the Save window, navigate…
450. [Figures, continued]
168 Viewing … initiator information ..... 336
169 Viewing compliant iSCSI initiators ..... 337
170 Differentiating types of CHAP ..... 337
171 Viewing the MS iSCSI initiator to copy the initiator node name ..... 339
172 Configuring iSCSI (shown in the MS iSCSI initiator) for a single host with CHAP ..... 339
173 Adding an initiator secret for 2-way CHAP (shown in the MS iSCSI initiator) ..... 340

Tables
1 Default names provided ..... 34
2 Example of how default names work ..... 35
3 Numbering conventions with no defaults enabled ..... 36
4 Dedicated boot devices by storage node ..... 51
5 Boot device status ..... 52
6 RAID levels and default configurations for storage nodes ..... 55
7 Status and color definitions ..... 56
8 Storage capacity of RAID5 sets in storag…
451. …you want to retain snapshots, and the capacity in the cluster. If you want to retain <n> snapshots, the cluster should have space for <n+1>. It is possible for the new snapshot and the one to be deleted to coexist in the cluster for some period of time. If there is not sufficient room in the cluster for both snapshots, the scheduled snapshot will not be created, and the snapshot schedule will not continue until an existing snapshot is deleted or space is otherwise made available.

Plan scheduling and retention policies
The minimum recurrence you can set for snapshots is 30 minutes. The maximum number of snapshots (scheduled and manual combined) you can retain is 50 snapshots per volume. There are practical limits to the number of snapshots that a particular SAN can support and still maintain adequate performance. For information on optimum configuration limits, performance, and scalability, see Configuration summary overview on page 174.

Creating schedules to snapshot a volume
You can create one or more schedules to snapshot a volume. For example, your backup and recovery plan might include three schedules: one schedule for daily snapshots retained for seven days; a second schedule for weekly snapshots retained for four weeks; and a third schedule for monthly snapshots retained for five months.

Table 57: Characteristics for creating a schedule to snapshot a volume (Item / Description)
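Expressed as a quick calculation: to retain five scheduled snapshots, plan space for six, because the sixth snapshot is written before the oldest one is deleted. If each snapshot captures, say, 10 GB of changed data (a figure assumed purely for illustration), the schedule needs roughly 60 GB of snapshot headroom in the cluster in addition to the volume itself, multiplied by the volume's replication level.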
452. …SmartClone volumes from that snapshot, which is then called the clone point. You can select the provisioning method when creating SmartClone volumes. See Chapter 12 on page 221 for a complete discussion of volume and snapshot characteristics and space planning.
• The space required for the volumes created using the SmartClone process is the same as for any other volumes on the SAN. SmartClone volumes can have schedules to snapshot a volume and to remote-snapshot a volume, just like other volumes, so the space requirements for SmartClone volumes should take into account the space needed for their local and remote snapshots.
• Number of SmartClone volumes: Plan the total number of SmartClone volumes you intend to create as part of your space requirements. Note that you can create up to 25 SmartClone volumes in one operation in the HP LeftHand Centralized Management Console, and then repeat the process to create the desired number of SmartClone volumes. Use the CLI to create larger quantities of SmartClone volumes in a single operation.
• Thin or full provisioning: The type of provisioning you select affects the amount of space required on the SAN, just as it does for regular volumes.
• Replication level: The replication level of the source volume must be retained when creating SmartClone volumes, though you can change the replication level after the SmartClone volumes are created. However, if you change the replication level for any SmartClone volume…
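A worked illustration of the space behavior (quantities assumed): cloning a 50 GB master image into 20 thin-provisioned SmartClone volumes stores the shared 50 GB once, at the clone point. Each SmartClone volume then allocates space only for data written after creation, so 20 clones that each diverge by 2 GB add roughly 40 GB rather than 20 x 50 GB, before the replication level is applied.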
453. [Index, continued]
…Remote Copy User Manual
Removing statistics, 312
removing: administrative users from a group, 27; dedicated boot devices, 53; DNS server, 114; domain name from DNS suffixes list, 114; ghost storage node after the data is rebuilt, 333; monitored variables, 138; old log files, 169; SNMP trap recipient, 133; storage nodes from cluster, 212; storage nodes from management groups, 187 (prerequisites for, 187); users from administrative groups, 127; virtual manager, 207
Repair Storage Node, replacing a disk, 80
repair storage node, 216: prerequisites, 216
repairing volumes, making unavailable/priority volume available, 242
replacing: dedicated boot devices, 53; disks, 80
replication: changing levels for volumes, 242; data, 223; level for volumes, 223; levels allowed in clusters, 223; RAID vs. volume replication, 67; repairing storage node in cluster with replication, 216; requirements for setting levels, 238
replication priority and fault tolerance for volumes: changing, 242; changing priority, 242; requirements for setting, 239
reporting, overview, 144
reports, 135: active, 135; details of Hardware Information report, 152; diagnostic, 145; disk, 72; disk setup for RAID, 73; generating, 150; hardware, 144; Hardware Information, 150; RAID setup, 59; saving Hardware Report to a file, 151; storage node statistics, 150
requirements: system, for Failover Manager on ESX Server, 190
Requirements for configuring CHAP for iSCSI, 291, 338
requirements for adding managers…
454. …te Volume Below option, or
• Edit Volume, and changing a remote snapshot to a primary volume.
1. Connect the iSCSI sessions to the new target volume.
2. Launch Windows Logical Disk Manager.
3. Bring the disk online.
4. Open the system event log and find the IDs for the disks you are working with. The disks will have new disk IDs; the log shows errors for the disks, along with the IDs the cluster was expecting to see for each disk.
5. Open a Windows command line and run diskpart.exe.
6. List the disks that appear to this server by typing the command list disk.
7. Select the disk you are working with by typing select disk <disknumber>, where <disknumber> is the corresponding number of the disk in the list.
8. Display the options set at the disk level by typing detail disk. The details show the current ID for the disk.
9. If the disk is listed as read-only, change it by typing att disk clear readonly.
10. If the server is running Windows 2003, refer to Microsoft KB 280425 for how to change the disk IDs. On Windows 2008 and later, change the disk ID to the expected ID by typing uniqueid disk ID=<expected_ID>, where <expected_ID> is the ID the cluster expects for the disk.
11. Select the volume you are working with by typing select volume <volumenumber>, where <volumenumber> is the corresponding number of the volume in the list.
12. Display the volume's attributes by typing att vol. The volume will show that it is hidden, read-only, and shadow copy.
13. Change these attributes by typing att vol clear readonly hidden shadowcopy.
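An abridged transcript of steps 7 through 13 follows. The disk and volume numbers (1) and the disk ID shown are placeholders for illustration; the real expected ID comes from your system event log, and on GPT disks it is a GUID rather than the 8-digit signature shown here.

   DISKPART> select disk 1

   Disk 1 is now the selected disk.

   DISKPART> detail disk
   (output lists the disk ID and attributes; Read-only: Yes)

   DISKPART> att disk clear readonly

   Disk attributes cleared successfully.

   DISKPART> uniqueid disk ID=45CF1E02

   DISKPART> select volume 1

   DISKPART> att vol
   (output shows Hidden, Read-only, and Shadow Copy set)

   DISKPART> att vol clear readonly hidden shadowcopy

   Volume attributes cleared successfully.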
455. …system would scale it down to 40.0, using a scaling factor of 0.01. If the statistic value is smaller than 10.0, for example 7.5, the system would scale it up to 75, using a scaling factor of 10. The Scale column of the statistics table shows the current scaling factor. If needed, you can change the scaling factor. For example, if you are looking at similar items, you might change the scaling factor to change the emphasis on one item.
1. From the statistics table on the Performance Monitor window, select the scaling factor you want from the Scale drop-down list for the statistic you want to change.
The line moves up or down the graph based on the new scaling factor. If the line is at the very top or bottom of the graph, the scaling factor is too large or too small to fit on the graph. It is possible for more than one line to be pegged to the top or bottom of the graph in this way, resulting in one or more lines being hidden behind another line. Set the Scale back to Auto to display the line.

Exporting data
You can export performance statistics to a CSV file or save the current graph to an image file.

Exporting statistics to a CSV file
You can export performance statistics to a CSV file. You select which statistics to export; they can be different from the statistics you are currently monitoring. You also select the sample interval and the duration of the sampled data for export. Typical durations are from 10 minutes to 24 hours. The maximum duration is 999 hours, which is about 41 days.
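For instance (values assumed for illustration): if a cluster's IOPS line hovers near 800 while queue depth stays under 64, leaving IOPS at a scale of 0.1 plots it as 80 and brings both lines into the same visual range. Scaling changes only how the line is drawn on the graph; the values shown in the statistics table and written to any export are unaffected.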
456. [Contents, continued]
Prerequisites ..... 216
Deleting a cluster ..... 219
12 Provisioning storage ..... 221
Understanding how the capacity of the SAN is used ..... 221
Provisioning storage ..... 221
Best practices ..... 222
Provisioning volumes ..... 222
Full provisioning ..... 222
Thin provisioning ..... 222
Best practice for setting volume size ..... 222
Planning data replication ..... 223
Replication level ..... 223
How replication levels work ..... 223
Replication priority ..... 225
Best practice for setting replication levels and redundancy modes ..... 225
457. …tes of Motherboard:Port1 and Motherboard:Port2 when configured for Adaptive Load Balancing.

Table 22: NIC status during failover with Adaptive Load Balancing
• Normal operation: Motherboard:Port1: Preferred: No; Status: Active; Data Transfer: Yes. Motherboard:Port2: Preferred: No; Status: Active; Data Transfer: Yes.
• Motherboard:Port1 fails; data transfer continues on Motherboard:Port2: Motherboard:Port1: Preferred: No; Status: Passive (Failed); Data Transfer: No. Motherboard:Port2: Preferred: No; Status: Active; Data Transfer: Yes.
• Motherboard:Port1 restored: Motherboard:Port1: Preferred: No; Status: Active; Data Transfer: Yes. Motherboard:Port2: Preferred: No; Status: Active; Data Transfer: Yes.

Example network configurations with Adaptive Load Balancing
A simple network configuration using Adaptive Load Balancing in a high-availability environment is illustrated in Figure 68 (Adaptive Load Balancing in a two-switch topology, with two NSM 260 nodes in a storage cluster).

Creating a NIC bond
Follow these guidelines when creating NIC bonds.

Prerequisites
Verify that the speed, duplex, flow control, and frame size are all set properly on both interfaces that are being bonded. These settings cannot be changed on a bonded interface or on either of the supporting interfaces. For detailed instructions about properly configuring these settings, see Managing settings on network interfaces on page 108.

Bond guidelines
• Create a bond on a storage node before you add the storage node to a management group.
• Create bonds of 2 interfaces…
458. …the case of a controlled power-down by a UPS. See Chapter 16; sample scripts are available from the Customer Resource Center. Shutting down a management group also relates to powering off individual storage nodes and maintaining access to volumes. See the command-line documentation, the CLiQ User Manual, installed in the documentation directory of the program files.

Prerequisites
• Disconnect any hosts or servers that are accessing volumes in the management group.
• Wait for any restriping of volumes or snapshots to complete.

Shut down the management group
1. Log in to the management group that you want to shut down.
2. Click Management Group Tasks on the Details tab and select Shut Down Management Group.
3. Click Shut Down Group. The management group shuts down and disappears from the CMC.

If volumes are still connected to servers or hosts
After you click Shut Down Group, a confirmation window opens, listing volumes that are still connected and that will become unavailable if you continue shutting down the management group.

[Figure: Shut Down Management Group confirmation] "Shutting down this management group will cause the following connected volumes to go offline," followed by a table of connected volumes (volume or snapshot name, server initiator node name, CHAP name, gateway connection, initiator IP, and port identifier) and the prompt "Are you sure you want to shut down management group…"
the host name, if desired. Your Failover Manager is ready to use.
5. Minimize your VI Client session.
Next, use the Find function to discover the Failover Manager in the CMC, and then add the Failover Manager to a management group.
Uninstalling the Failover Manager from VMware ESX Server
1. Remove the Failover Manager from the management group.
2. Power off the Failover Manager virtual machine in the VI Client.
3. Right-click the powered-off Failover Manager and select Delete from Disk.
Troubleshooting the Failover Manager on ESX Server
Use the following solutions to possible issues you encounter with the Failover Manager on an ESX Server.

Table 42 Troubleshooting for ESX Server installation

Issue | Solution
You want to reinstall the Failover Manager | Close your CMC session. In the VI Client, power off the Failover Manager, then right-click it and select Delete from Disk. Copy fresh files into the virtual machine folder from the downloaded zip file or distribution media. Open the VI Client and begin again.
You cannot find the Failover Manager with the CMC and cannot recall its IP address | The CMC displays the IP address of a node if it can be found. Open a VI Client session and select the Summary tab for the node you want. The IP address and DNS name are displayed in the General information section.
In Linux, the installer does not start automatically | Run CMC_Installer.bin again.
In the VI Client, your cursor is miss
the tab window. The tab window displays information about an item selected in the navigation window on the Details tab, as well as tabs for other functions related to that item. For example, Viewing the three parts of the CMC on page 30 shows the tabs that display when a management group is selected in the navigation window.
Tab window conventions
Tab windows have certain similarities:
• Tabs: Each tab in the tab window provides access to information and functions that relate to the element selected in the navigation window. For example, when a cluster is selected in the navigation window, the tabs contain information and functionality that relate specifically to clusters, such as usage information about volumes and storage nodes in that cluster, and iSCSI sessions connected to the volumes.
• Lists: When presented with a list, such as a list of storage nodes as seen in a management group Details tab, you may select an item in the list to perform an action on.
• Lists and right-click: Right-click an item in a list and a drop-down list of commands appropriate for that item appears.
• Tasks buttons: At the bottom of a tab window, the Tasks button opens a menu of commands available for the element or function of that tab.
NOTE: If you change the default size of the CMC application on your screen, the blue Tasks button at the bottom left of the tab window may be obscured. Scroll the tab window with the scroll bar to bring th
(Screenshot: Details tab for a 2-way replicated volume in cluster ExchLogs, showing Status: Normal, Type: Primary, Size: 5 GB, Replication Level: 2-Way, and Provisioned Space: 10 GB.)
Figure 96 2-way replicated volume on 2-site cluster
Adding a virtual manager
1. Select the management group in the navigation window and log in.
2. Click Management Group Tasks on the Details tab and select Add Virtual Manager.
A confirmation message opens.
3. Click OK to continue.
The virtual manager is added to the management group. The Details tab lists the virtual manager as added, and the virtual manager icon appears in the management group.
(Screenshot: Details tab for the Exchange management group showing Status: Normal, the coordinating manager, the storage nodes Denver-1, Denver-2, and Denver-3 with their IP addresses, models, and RAID status, and Special Manager: Virtual Ma
ting the host name and IP address.
Tab to the choice that reflects how you want to set the IP address for the Failover Manager. If you use DHCP to assign the IP address, be sure to reserve that IP.
Tab to OK and press Enter. The Modify Network Settings confirmation window opens.
Tab to OK and press Enter to confirm the change. After a minute or less, the IP address window opens.
(Screenshot: Network Settings window reporting "The IP Address of this Storage Server has been set to 16.6.14.87," with eth1 listed under Available Network Devices.)
Figure 92 Confirming the new IP address
Make a note of the IP address and press Enter to close the IP address window. The Available Network Devices window opens.
Tab to Back and press Enter. The Configuration Interface main menu opens.
Tab to Log Out and press Enter. The Configuration Interface login window opens.
Click File > Exit to close the VMware Console.
Uninstalling the Failover Manager for VMware Server or Player
Use the SAN iQ Management Software DVD to uninstall the Failover Manager.
1. Remove the Failover Manager from the management group.
2. Insert the SAN iQ Management Software DVD into the CD/DVD drive and click Install on the opening window.
3. Click Failover Manager. The Failover Manager installation wizard opens.
4. Click through the wizard, selecting Uninstall from the Repair or Uninstall window when it opens.
The Failover Manager is removed from the server.
(Table 29, continued) Diagnostic test | Description | Pass criteria | Fail criteria | NSM 160 | NSM 260
Generate 3ware Diagnostics Report (for analysis, contact Customer Support) | Generates a diagnostics report | The report was successfully generated | The report was not generated
BBU Capacity Test | Tests the ability of the BBU to hold a charge. BBUs weaken over time; a failure indicates it is time to replace the BBU | The BBU can hold an acceptable charge | The BBU failed to hold an acceptable charge

Table 30 List of hardware diagnostic tests and pass/fail criteria for DL 380, DL 320s (NSM 2120), HP LeftHand P4500, and HP LeftHand P4300

Diagnostic test | Description | Pass criteria | Fail criteria | DL380 | DL320s | HP LeftHand P4500 | HP LeftHand P4300
Fan Test | Checks the status of all fans | Fan is normal | Fan is faulty or missing | X | X | X | X
Power Test | Checks the status of all power supplies | Supply is normal | Supply is faulty or missing | X | X | X | X
Temperature Test | Checks the status of all temperature sensors | Temperature is within normal operating range | Temperature is outside normal operating range | X | X | X | X
Cache Status | Checks the status of the disk controller caches | Cache is normal | Cache is corrupt | X | X | X | X
Cache BBU Status | Checks the status of the battery-backed-up cache | The BBU is normal and not charging, testing, or faulty | The BBU is charging, testing, or faulty | X | X | X | X
Disk Status Test | Checks for the prese
(Table, continued) This term | Means this
Memory | Information about RAM memory in the storage node: the total amount of memory in GB and the total amount of free memory in GB
Stat | Information about the CPU. CPU seconds shows the number of CPU seconds spent on user tasks, kernel tasks, and in idle state. Machine uptime is the total time the storage node has been running from initial boot-up
Drive info | For each drive, reports the model, serial number, and capacity of the drive
Drive status | For each drive, reports the status, health, and temperature
RAID | Information about RAID
Rebuild rate | RAID Rebuild Rate is a percentage of RAID card throughput
Unused devices | Any device which is not participating in RAID. This includes: drives that are missing; unconfigured drives; drives that were powered down; failed drives rejected by the array with I/O errors; drives that are rebuilding; hot spare drives
Statistics | Information about the RAID for the storage node
Unit number | Identifies devices that make up the RAID configuration, including: type of storage (BOOT, LOG, SANIQ, DATA); RAID level (0, 1, 5); status (Normal, Rebuilding, Degraded, Off); capacity; rebuild statistics (% complete, time remaining)
Sensors | Does not apply to this platform

Table 37 Selected details of the hardware report for Dell 2950, NSM 2060, and
Inactive. It is now safe to remove the card from the storage node.
3. Power off the storage node.
4. Remove the flash card from the front of the storage node.
Replacing and activating a new boot flash card (NSM 160, NSM 260)
If you replace a boot flash card in the storage node, you must activate the card before it can be used. Activating the card erases any existing data on the card and then synchronizes it with the other card in the storage node.
1. Insert the new flash card in the front of the storage node.
2. Power on the storage node.
3. Log in to the storage node.
4. On the Boot Devices window, select the new flash card.
5. Click Activate.
The flash card begins synchronizing with the other card. When synchronization is complete, Active displays in the Status column.

3 Storage
Use the Storage configuration category to configure and manage RAID and individual disks for storage nodes.
Configuring RAID and managing disks
For each storage node, you can select the RAID configuration and the RAID rebuild options, and monitor the RAID status. You can also review disk information and, for some models, manage individual disks.
RAID as a storage requirement
RAID must be configured for data storage. HP LeftHand Networks physical storage nodes come with RAID preconfigured. The VSA comes with RAID preconfigured if you have first configured the data disk in the VI Client, as described in the VSA Q
to monitor, and click Next.
4. Specify the frequency for monitoring the variable, and click Next.
5. For each threshold listed, select the type of alert you want to receive. A sketch of a minimal SNMP trap receiver follows the steps below.

Table 27 Types of alerts for active monitoring

Type of alert | Where alerts are sent | For more information
CMC alerts | To the alert window of the CMC and the Alerts tab in Reporting | See Using the alert window on page 34
SNMP traps | To the SNMP trap community managers. You must have configured the storage node to use SNMP traps | See Adding SNMP traps on page 133
Email | To specified email addresses. Type the email addresses to receive the notification, separated by commas. Then configure Email Notification on the Email tab | See Adding variables to monitor on page 136

NOTE: To save time while setting up active monitoring, select all variables, then click Set Notifications. This setting applies the same email address and other alert settings to all storage nodes. Then, if you need to customize alert actions for a particular variable, you can edit that variable.

6. Click Finish when you have configured all the threshold items in the list.
Editing a monitored variable
For the selected variable, you can change the frequency of monitoring and the notification routing of the alert.
1. Select the Alert Setup tab.
2. Select the variable you want to edit.
3. Click Alert Setup Tasks and select Edit Monitored Variable. T
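If you route alerts as SNMP traps, a trap receiver must be listening on the management network. As a minimal illustration (not part of the SAN/iQ software), the sketch below assumes a Linux monitoring host with the net-snmp tools installed and a trap community string of "public"; substitute your own community string and host.

    # /etc/snmp/snmptrapd.conf: accept and log traps sent with the
    # community string "public"
    echo 'authCommunity log public' | sudo tee /etc/snmp/snmptrapd.conf

    # Run the trap daemon in the foreground, logging to stdout, to verify
    # that test traps from the storage nodes arrive
    sudo snmptrapd -f -Lo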
to use snapshots, and the schedule and retention policy for schedules to snapshot a volume. Snapshots record changes in data on the volume, so calculating the rate of changed data in the client applications is important for planning schedules to snapshot a volume.
NOTE: Volume size, provisioning, and using snapshots should be planned together. If you intend to use snapshots, review Chapter 14 on page 245, Managing capacity using volume size and snapshots.
How snapshots are created
When you create a snapshot of a volume, the original volume is actually saved as the snapshot, and a new volume, the writable volume, with the original name is created to record any changes made to the volume's data after the snapshot was created. Subsequent snapshots record only changes made to the volume since the previous snapshot. Snapshots are always created as thin provisioned space, no matter whether the original volume is fully or thinly provisioned.
Volume size and snapshots
One implication of the relationship between volumes and snapshots is that the space used by the writable volume can become very small when it records only the changes that have occurred since the last snapshot was taken. This means that less space may be required for the writable volume. Over time, you may find that space allocated for snapshots becomes larger and the volume itself becomes relatively small.
Schedules to snapshot a volume and
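Snapshots can also be taken from scripts through the CLiQ command line interface. The sketch below assumes the CLiQ client is installed on the application server; the volume and snapshot names, address, and credentials are examples only, and the parameter names are from memory and should be verified against the Cliq User Manual.

    # Quiesce the application first (application-specific), then take a
    # point-in-time snapshot of the volume.
    cliq createSnapshot volumeName=LogsAdmin snapshotName=LogsAdmin_nightly \
         description="Nightly scripted snapshot" \
         login=10.0.61.16 userName=admin passWord=secret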
Adding a storage node to an existing cluster
Add a new storage node to an existing cluster to expand the storage for that cluster.
NOTE: Adding a storage node to a cluster causes a restripe of the data in that cluster. A restripe may take several hours or longer. Adding a new storage node is not the same as replacing a repaired storage node with a new one. If you have repaired a storage node and want to replace it in the cluster, see Repairing a storage node on page 216.
Prerequisite
Add the storage node to the management group that contains the existing cluster.
Storage nodes and cluster capacity
Be certain that the capacity of the storage node you add to the cluster matches, or is close to, the capacity of those already in the cluster. All storage nodes in a cluster operate at a capacity equal to that of the smallest-capacity storage node; if you add a storage node with a smaller capacity, the capacity of the entire cluster is reduced. While you can mix storage nodes with different RAID levels in a cluster, note that the capacity limitation applies to the available capacity as determined by RAID, not the raw disk capacity.
Example: If you have three storage nodes, two of which have a capacity of 1 TB and one of which has a capacity of 2 TB, all three storage nodes operate at the 1 TB capacity.
Adding storage to a cluster
1. Select the cluster in the navigation window.
2. Click Cluste
storage nodes on page 37.
The CMC
The CMC is divided into three sections:
(Screenshot: the HP LeftHand Networks Centralized Management Console main window for the Exchange management group, showing the navigation window on the left, the tab window with the group Details tab, the storage nodes Denver-1 through Denver-4 with their IP addresses, models, RAID configuration, and software versions, the clusters ExchangeHQ and ExchLogs with their volumes, and the alert window at the bottom listing recent alerts.)
ts 250
  SNMP trap recipient 133
  volumes 240
enabling
  NIC flow control 111
  SNMP traps 133
establishing network interfaces 90
eth0 and eth1 90
ethernet interfaces 90
evaluating
  add-on applications 317
  Remote Copy 318
    backing out of 318
  scripting 319
    backing out of 320
example scenarios for using SmartClone volumes 264
exporting
  performance data 315
  performance statistics to a CSV file 315
F
Failover Manager 173
  and Multi-Site SAN 173
  configuring 190
  overview 89
  requirement for using 190
  requirements for 189
  troubleshooting 194
  using in Multi-Site SAN 189
fault tolerance 335
  network interface bonding 92
  quorum and managers 72
  replication level for volumes 223
  replication priority for volumes 225
  stopping managers 82
faults, isolating 299
feature registration 180
Feature Registration tab 320, 321
features of Centralized Management Console 29
file systems 233
  mounting on volumes 237
find Failover Manager on network 195
finding SNMP MIB 132
finding storage nodes 37
  Auto Discover 29
  on the network 39
first storage node 178
flow control 111
formatting volumes for use 295
frame size
  NIC 109
  VSA 89
frames, editing size 110
full permissions 126
G
gateway session for VIP with load balancing 336
Getting Started Launch Pad 37
ghost storage node 216
  removing after data is rebuilt 333
Gigabit Ethernet 90
  See also GBe
Glossary for SAN iQ soft
Setup Tasks and select Set Threshold Actions.
Click the check boxes to specify where you want the alert to be communicated:
• If you check SNMP Trap, you must enable SNMP in that category of the storage node. See Enabling SNMP agents on page 129.
• If you check Email, you must provide an email address to send to, and you must set up SMTP addresses as well.
Click OK.
Viewing and saving alerts
Any time that an actively monitored variable causes an alert, the alert is logged by the storage node. If the CMC is open, alerts display in the Alert window on the CMC main window. If the CMC is not open, these alerts are still logged, and you can view them the next time you open the CMC: click a storage node > Alerts > Alert Log File tab.
NOTE: The Alerts category > Alert Log File under a storage node displays the most recent alerts, up until the alert list reaches 1 MB in size. To view alerts older than those displayed on the Alerts tab, save the Alerts log on the Alert Log Files tab.
1. Select a storage node in the navigation window and log in.
2. Open the tree below the storage node and select Alerts.
3. Select the Alert Log File tab.
4. Click Refresh to make sure you view the most current data.
Saving the alert log of all variables
1. Perform the tasks in Viewing and saving alerts on page 143; that is, select a storage node > Alerts > Alert Log Files tab.
2. To save the list of alerts, click Alert Log
(Screenshot: Disk Setup tab listing disks 0 through 14; all show Status: Active or Hot spare, Health: normal, Safe to Remove: Yes.)
Legend: 1 through 7: mirrored disk pairs 1 through 7; 8: hot spare.
Figure 25 Initial RAID10 setup of the NSM 4150
NOTE: The initial disk setup shown above for the NSM 4150 may change over time if you have to replace hot-swap disks.
Devices configured in RAID5
If RAID5 is configured, the physical disks are grouped into one or more RAID5 sets.
(Figure: NSM disks grouped into a RAID5 device.)
Figure 26 RAID5 set in an NSM 160
(Figures: RAID5 devices for the NSM 260, the DL 380, the DL 320s/NSM 2120 and HP LeftHand P4500 (disks 1, 4, 7, 10), and the disk layout in the HP LeftHand P4300.)
Figure 31 RAID5 set in an IBM x3650
(Screenshot: Disk Setup tab listing the disks; all show Status: Active, Health: normal, Safe to Remove: Yes.)
two volumes, DB1 and Log1, and compares their total throughput. You can see that Log1 averages nearly 18 times the throughput of DB1. This might be helpful if you want to know which volume is busier.
(Screenshot: Performance Monitor for cluster Denver, charting the Throughput Total statistic for the two volumes over several minutes, with the statistics table showing average, minimum, maximum, and current values.)
Figure 146 Example showing throughput of two volumes
Activity generated by a specific server example
This example shows the total IOPS and throughput generated by the server ExchServer-1 on two volumes.
(Screenshot: Performance Monitor statistics table showing Throughput Total and IOPS Total for volumes DB1 and Log1, filtered to server ExchServer-1.)
Figure 147 Example showing activity gen
Quick Start Guide. The descriptions of RAID levels and configurations for the various storage nodes are listed in Table 6.

Table 6 RAID levels and default configurations for storage nodes

Model | Preconfigured for | Available RAID levels
NSM 160 | RAID5 | 0, 10, 5, 5 + spare
NSM 260 | RAID5 | 0, 1, 5, 5 + spare
DL380 | RAID5 | 0, 10, 5
DL320s (NSM 2120) | RAID5 | 10, 5, 6
IBM x3650 | RAID5 | 0, 10, 5
Dell 2950 | RAID10 or RAID5 (cannot change RAID level) | 10, 5
NSM 2060 | RAID10 or RAID5 (cannot change RAID level) | 10, 5
HP LeftHand P4300 | RAID5 | 5, 6, 10
NSM 4150 | RAID50 | 10, 50
VSA | Virtual RAID (if the data disk is configured in the VI Client first) | Virtual RAID
HP LeftHand P4500 | RAID5 | 10, 5, 6

Getting there
1. In the navigation window, select a storage node and log in, if necessary.
2. Open the tree under the storage node and select the Storage category.
(Screenshot: the Storage category for a storage node, showing the RAID Setup and Disk Setup tabs, RAID Status: Normal, RAID Configuration: RAID5, the RAID Rebuild Rate priority slider, and a RAID device with Device Type: RAID5, Device Status: Normal, Subdevices: 8.)
Figure 10 Viewing the storage configuration category for a storage node
Columns in the RAID Setup tab s
… 331
Restarting a manager 332
Add repaired node to cluster 332
Rebuild volume data 333
Controlling server access 333
Change local bandwidth priority 333
Remove ghost storage node 333
… 334
22 iSCSI and the HP LeftHand Storage Solution 335
Number of iSCSI sessions 335
Virtual IP addresses 335
Requirements for using a virtual IP address 335
iSNS server 336
iSCSI load balancing 336
Requirements 336
Compliant iSCSI initiators 336
Authentication (CHAP) …
Prerequisites: Stop any applications that are accessing the volume, and log off all associated iSCSI sessions.
New in release 8.0: Deleting a volume automatically deletes all associated snapshots, except those that are clone points or shared snapshots as part of a SmartClone volume configuration. Releases before 8.0 required all associated snapshots to be deleted manually before deleting the volume.
To delete the volume
1. In the navigation window, select the volume you want to delete. The Volume tab window opens.
2. Click Volume Tasks and select Delete Volume. A confirmation window opens.
3. Click OK. The volume is removed from the cluster.
To delete multiple volumes
1. In the navigation window, select Volumes and Snapshots.
(Screenshot: the Volumes and Snapshots node listing volumes and snapshots with their name, description, status, replication level, provisioned space, utilization, provisioning, and created date.)
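The prerequisite of logging off iSCSI sessions is done on the application server, not in the CMC. As a minimal illustration, the sketch below assumes a Linux application server using the open-iscsi initiator; the target IQN and portal address are placeholders for your own values.

    # List active iSCSI sessions to find the target for the volume
    iscsiadm -m session

    # Log off the session for the volume before deleting it in the CMC
    # (IQN and portal are examples only)
    iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:exchange:71:logsadmin \
             -p 10.0.61.16:3260 --logout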
number of managers is 5. See Table 38. The quorum column follows the usual majority rule; a short arithmetic sketch appears after the table.

Table 38 Managers and quorum

Number of managers | Number for quorum | Fault tolerance | Explanation
1 | 1 | None | If the manager fails, no data control takes place. This arrangement is not recommended.
2 | 2 | None | Even number of managers not recommended, except in specific configurations. Contact Customer Support for more information.
3 | 2 | High | If one manager fails, 2 remain, so there is still a quorum. (Note: 2 managers are not fault tolerant; see above.)
4 | 3 | High | Even number of managers not recommended, except in specific configurations. Contact Customer Support for more information.
5 | 3 | High | If one or two managers fail, 3 remain, so there is still a quorum.

Regular managers and specialized managers
Regular managers run on storage nodes in a management group. The SAN iQ software has two other types of specialized managers, Failover Managers and Virtual Managers, described below. For detailed information about specialized managers and how to use them, see Chapter 10 on page 189.
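The quorum column in Table 38 above is the strict majority of N managers, floor(N/2) + 1. A quick sketch of that arithmetic in plain shell (nothing SAN/iQ-specific):

    # Majority quorum for N managers: floor(N/2) + 1
    for n in 1 2 3 4 5; do
      echo "managers=$n quorum=$(( n / 2 + 1 ))"
    done
    # Prints quorum 1, 2, 2, 3, 3 - matching Table 38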
volumes or snapshots, you must first stop any applications that are accessing the volumes and log off any iSCSI sessions that are connected to the volumes.
Deleting the clone point
You can delete a clone point if you delete all but one volume that depends on that clone point. After you delete all but one volume that depends on a clone point, the clone point returns to being a standard snapshot and can be managed just like any other snapshot. For example, in Figure 138, you must delete any four of the five C#class_x volumes before you can delete the clone point.
(Screenshot: Details tab for a SmartClone volume in cluster Programming, with Status: Normal, Type: Primary, Size: 5 GB, Replication Level: None, Provisioned Space: 512 MB, Provisioning: Thin; the navigation tree shows the clone point C#_SCsnap under each of the volumes C#class_1 through C#class_5, and the iSCSI target name begins iqn.2003-10.com.lefth
Communication tab.
(Screenshot: TCP/IP category, Communication tab, showing SAN/iQ Interface: bond0, Communications Mode: Multicast Off/Unicast On, the IP addresses 10.0.28.18 and 10.0.28.25, and the Select SAN/iQ Interface dialog.)
Figure 71 Verifying interface used for SAN iQ communication
3. Verify that the SAN iQ communication port is correct.
Viewing the status of a NIC bond
You can view the status of the interfaces on the TCP Status tab. Notice that in the Active-Passive bond, one of the NICs is the preferred NIC. In both the Link Aggregation Dynamic Mode bond and the Adaptive Load Balancing bond, neither physical interface is preferred. Figure 72 shows the status of interfaces in an Active-Passive bond. Figure 73 shows the status of interfaces in a Link Aggregation Dynamic Mode bond.
(Screenshot: TCP Status tab listing bond0 (Logical Failover, Status: Active, Frame Size: Default, NIC Flow Control: Disabled) and the two Motherboard:Port interfaces, one Passive (Failed) and one Active; the preferred interface is marked.)
Legend: 1: preferred interface
Figure 72 Viewing the status of an active-passive bond
running. No action is required.
• Rebuilding: A new disk has been inserted in a drive bay and RAID is currently rebuilding. No action is required.
• Degraded: RAID is degraded. A disk may have failed or have been removed from its bay. For hot-swap platforms (NSM 160, NSM 260, DL380, DL320s, NSM 2120, Dell 2950, NSM 2060, NSM 4150, HP LeftHand P4500, HP LeftHand P4300), simply replace the faulty, inactive, uninitialized, or missing disk. For non-hot-swap platforms (IBM x3650), you must add a disk to RAID using the Storage > Disk Setup tab if you are inserting a replacement disk.
• Off: Data cannot be stored on the storage node. The storage node is offline and flashes in the navigation window.
• None: RAID is unconfigured.
Managing disks
Use the Disk Setup tab to monitor disk information and perform disk management tasks, as listed in Table 11.
CAUTION: The IBM x3650 does not support hot-swapping disk drives. Hot-swapping drives is NOT supported for RAID 0 on any platform.

Table 11 Disk management tasks for storage nodes

Disk setup function | Available in model
Monitor disk information | All
Power on or off a disk | IBM x3650
Add a disk to RAID | NSM 260 (use only for adding capacity (arrays) to a chassis)

Getting there
1. In the navigation window, select a storage node.
2. Select the Storage category in the tree below it.
3. Select the Disk Setup tab.
Reading the disk r
The Configuration Summary provides a roll-up summary of the configuration for volumes, snapshots, and storage nodes in the SAN. In addition, this summary provides guidance for recommended maximum configurations that are safe for performance and scalability. Exceeding the recommended maximums may result in volume availability issues under certain failover and recovery scenarios.
(Screenshot: Configuration Summary with categories for Volumes & Snapshots, iSCSI Sessions, Storage Nodes in Management Group, and Storage Nodes in Clusters c1, c2, and c3; warnings indicate that volumes and snapshots are nearing the optimum limit and one cluster is nearing the optimum limit for storage nodes.)
Figure 84 Warning when items in the management group are reaching safe limits
Configuration errors
When any item exceeds a recommended maximum, it turns red and remains red until the number is reduced. See Figure 85.
(Screenshot: Configuration Summary for management group CJS1, repeating the summary guidance text
configuration you want for the volumes and snapshots. Planning your storage configuration requires understanding how the capacity of the SAN is affected by the RAID level of the platforms and the features of the SAN iQ software. For example, if you are provisioning storage for MS Exchange, you will be planning the number and size of volumes you need for the databases and the log files. The capacity of the cluster that contains the volumes and snapshots is determined by the number of storage nodes and the RAID level on them.
Understanding how the capacity of the SAN is used
The capacity of the SAN is a combination of factors:
• The first factor is the clustered capacity of the storage nodes, which is determined by the disk capacity and the RAID level configured on the storage nodes. See Planning the RAID configuration on page 67.
• The second factor is the effect of the replication level of the volumes and snapshots. See Planning data replication on page 223.
• The third factor is the snapshot configuration, including schedules and retention policies. See Managing capacity using volume size and snapshots on page 226.
• The fourth capacity factor is the impact of using Remote Copy as part of your backup and recovery strategy. Copying data to a remote cluster using remote snapshots, and then deleting that data from the application cluster, allows you to free up space on the application cluster more rapidly. See the chapter Understand
Figure 131 on page 280 and Figure 132 on page 280. In Figure 117 you can see a regular volume with 3 snapshots on the left and, on the right, a regular volume with 1 SmartClone volume, 1 clone point, and 2 shared snapshots.
(Screenshot: two Volumes and Snapshots trees side by side; the left tree shows regular volumes with snapshots HqtrsLogs_SS_1 through _SS_3; the right tree shows the clone point C#_SCsnap and the shared snapshots C#_snap1 and C#_snap2 under the volume C#class_1.)
Figure 117 How SmartClone volumes, clone points, and shared snapshots appear in the CMC
Example scenarios for using SmartClone volumes
The following examples are just a few of the most typical scenarios for using SmartClone volumes.
Deploy multiple virtual or boot-from-SAN servers
You can save significant space in environments with multiple virtual or boot-from-SAN servers that use the same base operating system. A server's operating system takes up considerable storage but does not change frequently. You can create a master image of the operating system on a volume and prepare it for duplication. Then you can create large quantities of SmartClone volumes from that master image without using additional storage capacity. Each SmartClone volume you create from the master image is a full read-write version of the operating system and has all the same management features as a regular HP Left
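Bulk deployments like this are commonly scripted. The sketch below is illustrative only: it assumes the CLiQ client is available, and the verb createSmartClone and its parameters are assumptions modeled on the general CLiQ syntax, to be verified against the Cliq User Manual. All names, addresses, and credentials are examples.

    # Hypothetical loop creating ten SmartClone volumes from the snapshot
    # "GoldenOS_snap" of a prepared master-image volume; verify the command
    # name and parameters in the Cliq User Manual.
    for i in $(seq 1 10); do
      cliq createSmartClone volumeName=GoldenOS snapshotName=GoldenOS_snap \
           cloneName=bootvol_$i login=10.0.61.16 userName=admin passWord=secret
    done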
configured for the SAN networks.
Open the VMware Server Console.
Select the Failover Manager virtual machine in the Inventory list.
From the menu, select Host > Virtual Network Settings.
Click the Automatic Bridging tab.
Ensure that the checkbox under Automatic Bridging is checked.
Click Add in the Excluded Adapters section. A list of network adapters opens.
Add all adapters in the list except the adapter identified in step 1.
Click OK when you are finished.
Using the Failover Manager on VMware ESX Server
CAUTION: Do not install the Failover Manager on the HP LeftHand Storage Solution, since this would defeat the purpose of the Failover Manager.
Installing the Failover Manager on VMware ESX Server
Install the Failover Manager from the HP LeftHand Management DVD, or obtain the Failover Manager as a zip package downloaded from the HP LeftHand Networks web site. When you install the Failover Manager for the first time, you will need to:
• Start the VMware Infrastructure Client (VI Client)
• Transfer or upload the virtual machine to the ESX Server
• Add the Failover Manager to inventory
• Power on the Failover Manager
• Set the IP address and host name of the Failover Manager
NOTE: By default, ssh and scp commands are disabled for the root user. To enable access, see the VMware documentation for ESX Server Basic Administration.
Using the HP LeftHand Management DVD
1. Using the HP LeftHand Management DVD
user and/or create new ones.
Default administrative user
The user who is created when you create a management group becomes a member of the Full Administrator group by default.
Adding a new administrative user
Add administrative users as necessary to provide access to the management functions of the SAN iQ software.
1. Log in to the management group and select the Administration node.
2. Click Administration Tasks in the tab window and select New User.
3. Type a User Name and Description.
4. Type a password and confirm that password.
5. Click Add in the Member Groups section.
6. Select one or more groups to which you want the new user to belong.
7. Click OK.
8. Click OK to finish adding the administrative user.
Editing administrative users
Each management group has an Administration node in the tree below it. You can add, edit, and remove administrative users there. Editing administrative users includes changing passwords and group memberships of administrative users.
(Screenshot, continued: the Configuration Summary categories Volumes & Snapshots, iSCSI Sessions, and Storage Nodes in the management group and in clusters c1, c2, and c3; volumes and snapshots have exceeded recommended maximums, and one cluster remains near its optimum limit.)
Figure 85 Error when some item in the management group has reached its limit
Creating a management group
Creating a management group is the first step in the process of creating clusters and volumes for storage. Tasks included in creating a management group are:
• Planning the management group configuration
• Creating the management group by using the Management Groups, Clusters and Volumes wizard
• Ensuring you have the proper configuration of managers
Guide to creating management groups
When using the Management Groups, Clusters and Volumes wizard, you must configure the characteristics described in Table 40.

Table 40 Management group requirements

Management group requirement | What it means
(Requirements listed: Configure storage nodes; Plan administrative users; Plan date and time configuration; Plan Virtual IP Addresses (VIPs); Starting a manager; Assigning manager IP addresses)
Configure storage nodes | Before you create a management group, make sure the storage nodes for the cluster are configured for monitoring, alerts, and network bonding as best fits your network envir
server to be designated preferred or not preferred.
NOTE: A preferred NTP server is one that is more reliable, such as a server that is on a local network. An NTP server on a local network would have a reliable and fast connection to the storage node. Not preferred designates an NTP server to be used as a backup if a preferred NTP server is not available. An NTP server that is not preferred might be one that is located elsewhere or has a less reliable connection.
4. Click OK.
The NTP server is added to the list on the NTP tab. The NTP servers are accessed in the order you add them, and preferred servers are accessed before non-preferred servers. The first server you add, if it is marked preferred, has the highest order of precedence. The second server you add takes over as a time server if the preferred server fails.
Editing NTP servers
Change whether an NTP server is preferred or not.
1. Select an NTP server in the list.
2. Click Time Tasks and select Edit NTP Server.
3. Change the preference of the NTP server.
4. Click OK.
The list of NTP servers displays the changed NTP server.
NOTE: To change the IP address of an NTP server, you must remove the server no longer in use and add a new NTP server.
Deleting an NTP server
You may need to delete an NTP server:
• If the IP address of that server becomes invalid
• If you no longer want to use that server
• If you wa
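Before designating a server as preferred, it can help to confirm from a host on the same network that the server answers NTP queries with reasonable offset and delay. A minimal illustration, assuming a Linux host with the classic ntpdate utility installed (the server name is a placeholder):

    # Query, but do not set the clock from, a candidate NTP server;
    # low delay and a stable offset suggest a good "preferred" choice.
    ntpdate -q ntp1.example.com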
• Removing a storage node from a management group on page 187
• Deleting a management group on page 187
Starting and stopping managers
After adding the storage nodes to the management group, start managers on the additional storage nodes in the management group. The number of managers you start depends upon the overall design of your storage system. See Managers overview on page 171 for more information about how many managers to add.
Starting additional managers
1. In the navigation window, select a storage node in the management group on which to start a manager.
2. Click Storage Node Tasks on the Details tab and select Start Manager.
3. Repeat these steps to start managers on additional storage nodes.
Stopping managers
Under normal circumstances, you stop a manager when you are removing a storage node from a management group. You cannot stop the last manager in a management group. If you stop a manager
ware and LeftHand SAN 347
glossary
  SmartClone volumes 264
Graphical Legend
  Hardware tab 33
group name
  editing 126
groups, administrative 125
  default groups 125
  deleting 127
H
hardware diagnostics 144
  list of diagnostic tests 145
  tab window 144
hardware information
  log files 167
hardware information report 144, 150
  details 152
  expanded details 150
  generating 150
  saving to a file 151
  updating 150
Hardware tab, in graphical legend 33
help, obtaining 27
highlighting lines 315
host names
  access SNMP by 130
  changing 44
  resolution 44
host storage node, for virtual IP address 335
hot spares
  in RAID5 58
hot swap 59
  RAID degraded, safe to remove status 81
HP technical support 27
HP LeftHand P4300
  capacity RAID5 58
  capacity RAID6 59
  disk setup 80
  parity in RAID6 59
  RAID levels and default configuration 55
HP LeftHand P4500
  capacity RAID5 58
  capacity in RAID6 59
  disk setup 79
  disk status 79
  drive failure 59
  hot swapping 59
  in RAID10 61
  parity in RAID6 59
  RAID levels and default configuration 55
  RAID rebuild rate 69
  RAID5 set 63
  RAID6 66
HP System Insight Manager (HP SIM)
  logging into 130
I
I/O performance 214
IBM x3650
  capacity RAID5 58
  disk arrangement in
  RAID levels and default configuration 55
icons
  licensing 318
  used in Centralized Management Console 33
identifying network interfaces 90
increasing volume
(Table, continued) Diagnostic test | Description | Pass criteria | Fail criteria | IBM x3650
Power Test | Checks the status of all power supplies | Supply is normal | Supply is faulty or missing | X
Temperature Test | Checks the status of all temperature sensors | Temperature is within normal operating range | Temperature is outside normal operating range | X
Cache Status | Checks the status of the disk controller caches | Cache is normal | Cache is corrupt | X
Cache BBU Status | Checks the status of the battery-backed-up cache | The BBU is normal and not charging, testing, or faulty | The BBU is charging, testing, or faulty | X
Disk Status Test | Checks for the presence of all disk drives | All disk drives are present | One or more drives are missing | X
Disk Temperature Test | Checks the temperature of all disk drives | Temperature is within normal operating range | Temperature is outside normal operating range | X
Disk S.M.A.R.T. Health Test | S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) is implemented in all modern disks. A program inside the disk constantly tracks a range of vital characteristics, including driver, disk heads, surface state, and electronics. This information may help predict hard drive failures | All drives pass the health test | Warning or Failed if one or more drives fails the health test | X
Generate SMART logs for analysis (contact Customer Support) | Generates a drive health report | The report was successfully generated | The report was not generated | X
Ge
maximum number of volumes and snapshots that can be created in a management group, see Configuration summary overview on page 174.
Reviewing SAN capacity and usage
You can review detailed information about the capacity of your cluster, the volumes it contains, and the provisioning of the storage nodes in the cluster. This information is presented in a series of tab windows at the cluster level.
(Screenshot: cluster ExchLogs Details tab with the Use Summary, Volume Use, Node Use, iSCSI Sessions, and Remote Snapshots tabs; Status: Normal, Type: Standard; Disk Usage: Total Space 6852.60 GB, Provisioned for Volumes 25.00 GB, Not Provisioned 6812.34 GB, Provisioned for Snapshots 15.25 GB; no storage nodes from this cluster are in sites.)
Figure 106 Cluster tab view
Cluster use summary
The Use Summary window presents information about the total space available in the cluster, the amount of space provisioned for volumes and snapshots, and how much of that space is currently used by volumes and snapshots.
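The same capacity figures can be pulled programmatically for reporting. This is an illustrative sketch only: it assumes the CLiQ client is installed, and a command along the lines of getClusterInfo returns cluster details. Verify the verb, parameters, and output format in the Cliq User Manual; the cluster name, address, and credentials are examples.

    # Hypothetical capacity query against the cluster; check the command
    # name and parameters in the Cliq User Manual.
    cliq getClusterInfo clusterName=ExchLogs login=10.0.61.16 \
         userName=admin passWord=secret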
explanation, see Table 53 on page 240. For example, if you have 2-way replication with 3 storage nodes in the cluster, you can change from Availability to Redundancy if all the storage nodes in the cluster are available. For a detailed explanation, see Replication priority on page 225.
Type | Individual | Whether the volume is primary or remote
Provisioning | Individual | Whether the volume is fully provisioned or thinly provisioned
To edit the SmartClone volumes
1. In the navigation window, select the SmartClone volume for which you want to make changes.
2. Click Volume Tasks and select Edit Volume. The Edit Volume window opens. See Table 64 on page 283 for detailed information about making changes to the SmartClone volume characteristics.
3. Make the desired changes to the volume and click OK.
If you change a SmartClone volume characteristic that will change other associated volumes and snapshots, a warning opens that lists the volumes that will be affected by the change. If there are too many volumes to list, a subset is listed, with a note indicating how many additional volumes will be affected.
Deleting SmartClone volumes
Any volumes or snapshots that are part of a SmartClone network can be deleted just like any other volumes or snapshots. The only exception is the clone point, which cannot be deleted until it is no longer a clone point.
CAUTION: Before deleting any vol
replication-level adjacent storage nodes remains active. When the unavailable storage node returns to active status in the cluster, the volume resynchronizes across the replicas.
Redundancy: This setting ensures that the volume becomes unavailable if it cannot maintain 2 replicas. For example, if 2-way replication is selected and a storage node in the cluster becomes unavailable, thereby preventing 2-way replication, the volume becomes unavailable until the storage node is again available.
Type: Default value: Primary. Primary volumes are used for data storage. Remote volumes are used for configuring Remote Copy for business continuance, backup and recovery, or data mining and migration.
Provisioning: Default value: Full. Fully provisioned volumes are the same size on the SAN as the size presented to the application server. Thinly provisioned volumes have less space reserved on the SAN than the size presented to the application server. As data is stored on the volume, the SAN iQ software automatically increases the amount of space allocated on the SAN. The SAN iQ software allocates space as needed; however, thin provisioning carries the risk that an application server will fail a write because the SAN has run out of disk space.
Creating a volume
A volume resides on the storage nodes contained in a cluster. You can easily create a basic volume or customize the advanced settings. Both options are d
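Volumes can also be created from the command line. An illustrative sketch, assuming the CLiQ client is installed; the parameter names, particularly thinProvision, are from memory and should be verified against the Cliq User Manual, and the names, size, address, and credentials are examples only.

    # Create a 500 GB, 2-way replicated, thinly provisioned volume in the
    # cluster ExchLogs.
    cliq createVolume volumeName=Logs2 clusterName=ExchLogs size=500GB \
         replication=2 thinProvision=1 \
         login=10.0.61.16 userName=admin passWord=secret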
the values for total, free, shared, and cached memory, and the number of buffers
CPU | This section of the report contains many details about the CPU, including model name, clock speed, and cache size

(Table, continued) This term | Means this (NSM 160, NSM 260, VSA)
Stat | Information about the CPU. CPU seconds shows the number of CPU seconds spent on user tasks, kernel tasks, and in idle state. Machine uptime is the total time the storage node has been running from initial boot-up
Backplane information | This part of the report delivers selected information about the backplane: firmware version, serial number, and LED information
Motherboard information | This part of the report delivers selected information about the motherboard, including but not limited to IPMI and firmware
Drive status | For each drive, reports the status, health, and temperature. (VSA only: temperature is not reported for the VSA; health is normal if a drive is present and powered on)
Drive info | For each drive, reports the model, serial number, and capacity
RAID | Information about RAID
Rebuild Rate | RAID Rebuild Rate is a percentage of RAID card throughput. (VSA: RAID Rebuild Rate is a priority measured against other OS tasks)
Unused Devices | Any device which is not partic
you can see all of the details for a specific statistic in the table, including the statistic definition.
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Right-click a row in the table and select View Statistic Details. The Statistic Details window opens with all of the information for the selected statistic that is in the table, plus the statistic definition.
4. Click Close.
Removing and clearing statistics
You can remove or clear statistics in any of the following ways:
• Remove one or more statistics from the table and graph
• Clear the sample data, but retain the statistics in the table
• Clear the graph display, but retain the statistics in the table
• Reset to the default statistics
Removing a statistic
You can remove one or more statistics from the table and graph.
1. In the navigation window, log in to the management group.
2. Select the Performance Monitor node for the cluster you want. The Performance Monitor window opens.
3. Right-click a row in the table and select Remove Statistics. Use the CTRL key to select multiple statistics from the table. A message displays, confirming that you want to remove the selected statistics.
4. Click OK.
Clearing the sample data
You can clear all the sample data, which sets all table values to zero and removes all lines from the graph. This leaves all of the statistics in the ta
destroys any data stored on that storage node. Requirements for reconfiguring RAID are listed below. For VSAs, there is no alternate RAID choice, so the only outcome of reconfiguring RAID is to wipe out all data.
Requirements for reconfiguring RAID
Changing preconfigured RAID on a new storage node
RAID must be reconfigured on individual storage nodes before they are added to a management group. If you want to change the preconfigured RAID level of a storage node, you must make the change before you add the storage node to a management group.
Changing RAID on storage nodes in management groups
You cannot reconfigure RAID on a storage node that is already in a management group. If you want to change the RAID configuration for a storage node that is in a management group, you must first remove it from the management group.
CAUTION: Changing the RAID configuration will erase all the data on the disks.
To reconfigure RAID
1. In the navigation window, expand the configuration categories for the storage node and select the Storage category.
2. On the RAID Setup tab, click RAID Setup Tasks and select Reconfigure RAID.
3. Select the RAID configuration from the list.
4. Click OK.
5. Click OK on the message that opens. RAID starts configuring.
NOTE: A storage node may take several hours for the disks to synchronize in a RAID10, RAID5/50, or RAID6 configuration. During this time, performance will be degraded. When the
(Screenshot: the Centralized Management Console navigation tree showing the FailoverManager under Available Nodes, one cluster with 3 storage nodes and no volumes, and a second cluster with 3 storage nodes and 11 volumes: the source volume, the SmartClone volumes C#class_1 through C#class_10, and the clone point C#_SCsnap under each of them.)
Figure 123 SysAdm cluster now has the 10 SmartClone volumes, 1 clone point, and the source volume
Table 60 shows the shared and individual characteristics of SmartClone volumes. Note that if you change the cluster or the replication level of one SmartClone volume, the cluster and replication level of all the related volumes and snapshots will change.

Table 60 Characteristics of SmartClone volumes

Shared characteristics | Individual characteristics
Cluster | Name
Replication level | Description
Replication priority | Size
 | Type (Primary or Remote)
 | Provisioning (Thin or Full)
 | Server

NOTE: Snapshot schedules and remote copy schedules are also individual to a single SmartClone volume.
Clone point
The clone point icon represents the clone point in the navigation window. The clone point is the snapshot from which the SmartClone volumes are created. The clone point contains the snapshot data that is shared among the multiple volumes. Because the