HP P4000 LeftHand SAN Solutions with VMware vSphere Best Practices
Figure 18: Assigning multiple servers to a single volume

Choosing datastores and volumes for virtual machines

More than one virtual machine can, and generally should, be stored on each datastore or volume. The choice of where to put virtual machines should be driven by space considerations, current datastore load, the relationships of the VMs to each other, and performance. In general, virtual machines that do not have a relationship to one another should not be mixed on the same P4000 volume; however, there are always tradeoffs. Consolidating VMs becomes mandatory as their numbers increase and larger configurations are demanded. P4000 features such as Snapshots, Remote Copy, and SmartClone volumes are very useful with virtual machines, but they will always affect all virtual machines on the same volume simultaneously. If virtual machines that have no relationship are mixed on a single volume, those virtual machines will have to be snapshot, rolled back, remotely copied, and cloned together. Performance of virtual machines could also be affected if too many virtual machines are located on a single volume. The more virtual machines on a volume, the more I/O and contention there is for that volume, particularly when multiple ESXi hosts run VMs sourced on the same datastore. Refer to the section on Storage Distributed Resource Scheduler (DRS) for the new vSphere 5 feature.

VMware's vSphere Storage APIs for Array Integration

The VM
16 virtual machines will function on a single volume, but they might experience degraded performance, depending on the hardware configuration and whether they are all booted at the same time. Volumes with four to eight virtual machines are less likely to affect performance. With VMware's vSphere Storage APIs for Array Integration (VAAI) locking (Atomic Test and Set, ATS), new best practice recommendations suggest that up to ninety-six (96) virtual machines will function on a single volume; volumes with twenty-four (24) to forty-eight (48) virtual machines are less likely to impact VM performance. These numbers are highly dependent upon I/O workload patterns as well as other variables. Datastore details will show whether hardware acceleration is enabled. A decision tree for VM placement is shown in Figure 21. With or without VAAI, virtual machine placement will impact storage I/O.

Figure 21: Manual decision tree for determination of VM placement on a LUN
• Should another VM be put on this volume?
• Are SAN snapshots or Remote Copy not being used, or is it acceptable for all the other virtual machines in the datastore volume to be snapshot and restored at the same time?
• Is the disk I/O of the virtual machine that is already on the proposed datastore volume performing well?
• Is there enough space on the proposed datastore volume, or can it be extended?
• If so, put another VM on the proposed volume/datastore; otherwise, create a new datastore volume or check another on
HP P4000 LeftHand SAN Solutions with VMware vSphere Best Practices
Technical whitepaper

Table of contents
Executive summary 2
New Feature Challenges 3
Initial iSCSI setup of vSphere 5 4
Networking for the iSCSI Software Adapter 4
Two network ports 5
Four network ports 5
Six network ports 6
More than six network ports 7
10GbE options 7
Enabling the iSCSI software adapter 7
HBA connectivity and networking 7
Multi-pathing 8
Connecting and using iSCSI volumes 8
Creating the first iSCSI volume on the SAN 8
Enabling Load Balancing for performance 9
Discovery of the first iSCSI volume 9
Discovering additional volumes 9
in vSphere 5 and is still 2TB. Note that VSAs based upon SAN/iQ 9.5 will still support a maximum of 10TB, with 5 virtual disks of 2TB each.

What version of P4000 Software supports VMware vSphere 5? All HP P4000 releases of HP SAN/iQ Software version 9.0 and above support vSphere 5, but they do not fully support all of the new VAAI primitives. For SRM 5, SAN/iQ 9.5 is required with the new SRA 5.0 plugin.

For more information
For more information on HP P4000 SANs, visit www.hp.com/go/p4000
For more information on the HP P4000 Virtual SAN Appliance, visit www.hp.com/go/vsa
HP single point of connectivity: www.hp.com/storage/spock
HP Virtualization with VMware: www.hp.com/go/vmware
HP Insight Control management software: www.hp.com/go/insightcontrol
HP BladeSystem: www.hp.com/go/bladesystem
HP BladeSystem interconnects: www.hp.com/go/bladesystem/interconnects
HP Systems Insight Manager (HP SIM): www.hp.com/go/hpsim
HP Virtual Connect Technology: www.hp.com/go/virtualconnect
VMware fault tolerance demo using P4000 SAN: http://h30431.www3.hp.com/?fr_story=21cb39a5a037d7a3050746dc4838b02d5700d31&rf=bm
SAN/iQ 9 P4000 VMware VAAI Whiteboard: http://www.youtube.com/watch?v=pkhJRwW_jXc&playnext=1&list=PL4B2F1EE84DAFE90B
To help us improve our documents, please provide feedback at http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html
LUNs is recommended.

Figure 26: Registering the HP P4000 SAN VASA Provider

For additional information, best practices, and software, please go to the HP Insight Control site: http://h18000.www1.hp.com/products/servers/management/integration.html

Storage I/O Control

Storage I/O Control was introduced in vSphere 4.1 to provide I/O prioritization of VMs and dynamic allocation of I/O queue slots across vSphere hosts. Storage I/O Control automatically throttles a VM that is consuming a disproportionate amount of I/O bandwidth when configured latency thresholds are reached. Storage DRS and Storage I/O Control both prevent degradation of service level agreements (SLAs) while providing fair storage I/O distribution across available resources.

Best practices
• Use at least two Gigabit network adapters teamed together for better performance and for failover of the iSCSI connection. 10GbE network adapters will also increase performance.
• Teaming network adapters provides redundancy for networking components such as adapters, cables, and switches. An added benefit of teaming is an increase in available I/O bandwidth. Network teams on HP storage nodes are easily configured in the CMC by selecting two active links and enabling a bond. Adaptive load balancing (ALB) is the most common teaming method used on P4000 storage nodes. From the storage node, ALB bonding supports receiving data into the storage from o
Get connected: www.hp.com/go/getconnected. Current HP driver, support, and security alerts delivered directly to your desktop.

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

4AA3-6918ENW, created September 2011
P4000 SAN volumes and vSphere 5 datastores can be expanded dynamically. If space is running low on a datastore, you must first expand the volume on the P4000 SAN. To increase the size of the P4000 SAN volume, simply edit the volume in the CMC or CLI and increase its size. The P4000 SAN immediately changes the LUN size and lays out data across the cluster accordingly, without affecting the volume's availability. Once the volume has been expanded, you can increase the size of the VMFS datastore from within the vSphere client. Increasing the size of the datastore is preferred over adding extents.

Expanding an iSCSI LUN in the CMC
1. Go to the CMC.
2. Highlight the volume to be expanded.
3. Right-click on the volume and click on Edit volume.
4. A new window pops up, showing the LUN size as 1TB.
5. Change the reported size to the desired size and click on OK (Figure 9). In this example it is changed to 1.2TB.
6. The reported size shows the expansion size immediately (Figure 10).

Figure 9: Edit the Volume Size in the P4000 CMC

Figure 10: The Reported Size has now changed and is immediately available
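The same expansion can also be scripted. Below is a minimal sketch, assuming the SAN/iQ CLIQ command-line utility is installed on a management host; the volume name, addresses, credentials, and device identifier are placeholders, and CLIQ parameter names should be verified against the CLI reference for your SAN/iQ release:

    # Grow the volume on the P4000 SAN (CLIQ uses parameter=value syntax)
    cliq modifyVolume volumeName=ESX-Vol1 size=1.2TB login=10.0.0.20 userName=admin passWord=secret

    # On the ESXi host, rescan so the new capacity is visible, then grow the VMFS extent
    esxcli storage core adapter rescan --all
    vmkfstools --growfs /vmfs/devices/disks/naa.6000eb3xxxx:1 /vmfs/devices/disks/naa.6000eb3xxxx:1

The vmkfstools --growfs form names the same partition twice (source and target) and expands the VMFS extent into the free space on the underlying device, matching the vSphere Client's Increase operation on the datastore.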
base (SPOCK): http://www.hp.com/storage/spock
• For optimal performance and failover, the guest network used by the initiator should have at least dual Gigabit links and be separate from other virtual networks, including VMkernel vMotion, FT, host management, and virtual machine public networks.
• Make sure that the guest operating system is using the fastest paravirtualized network driver (the vmxnet3 NIC driver) from VMware tools.
• Make sure that the virtual machine will not be used in conjunction with VMware Site Recovery Manager (SRM). SRM does not work with volumes connected by guest initiators.

How many virtual machines should be on a volume or datastore? Refer to Choosing datastores and volumes for virtual machines.

Are jumbo frames supported? Jumbo frames (IP packets configured larger than the typical 1,500 bytes, i.e., up to 9,000 bytes) are supported by all HP P4000 SANs. In order for jumbo frames to be effective, they must be enabled end to end, including in network adapters and switches. For most Gigabit iSCSI implementations, jumbo frames are not required for good performance. 10 Gigabit iSCSI implementations do see significant benefits from jumbo frames.

What is the maximum size iSCSI volume supported in vSphere 5, and what size VMDK? The maximum volume size supported by vSphere 5 is 64TB. Note that VMFS-5 is required; if upgrading, ensure that datastores are running the newer VMFS-5 format. The maximum VMDK size has not changed
configuration of the Application Aware Snapshot Manager is performed within the HP P4000 CMC. Users configure the IP address of the Virtual Center server, and the Centralized Management Console communicates with the vSphere Virtual Center Server(s) during the snapshot process. Virtual Center quiesces VMs and takes a VMware snapshot; VMware tools within each guest VM quiesce applications. Once quiesced, the HP P4000 SAN performs a hardware-based snapshot, which contains the vSphere VM snapshots when examined through Snapshot Manager and presented to vSphere hosts.

HP P4000 snapshots of Raw Devices

P4000 SAN snapshots of vSphere 5 RDMs are supported in exactly the same way as for physical servers, whether booting from the SAN or accessing LUNs on the SAN. Detailed information about P4000 snapshots and how they work for specific applications is available on the P4000 website: http://www.hp.com/go/p4000

In vSphere 5, two compatibility modes for RDMs are available. Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of VMware-based snapshots. Physical compatibility mode allows direct access to the LUN device for applications that need lower-level, direct raw device control.

RDMs offer some benefits, including:
• Dynamic Name Resolution: the identification associates each RDM with the device, regardless of physical server changes, adapter hardware path changes, or device relocation.
• Distributed File Locking: RDM
hosts and HP P4000 SANs support 10GbE interfaces and benefit from increased bandwidth. 10GbE network ports offer additional bandwidth over 1GbE interfaces. With P4000 SANs supporting 10GbE interface upgrades, throughput has been seen to increase 2-3x, depending upon access patterns, SAN cluster size, data path, and bandwidth availability to the hosts. Note that in 10GbE networks, full mesh backplanes and latency options vary and impact performance. Flexible network infrastructure tailors variable bandwidth options to need. Generally, a SAN's performance is limited by the number of disk spindles dictating IOP characteristics and the capabilities of the physical storage. In scalable storage architectures, this is distributed across the P4000 SAN cluster. Note that in all previous examples, 1GbE could just as easily be replaced by faster network bandwidth options, which will impact best practice choices. It is best practice to balance networking technologies end to end for overall solution performance. For example, HP P4000 10GbE solutions are limited by a 1GbE switching infrastructure, which then becomes a bottleneck.

Enabling the iSCSI software adapter

If a hardware iSCSI host adapter is not used, enable the vSphere 5 iSCSI Software Adapter on each ESXi host. The iSCSI Software Adapter is managed from each host's Storage Adapters list. Here are some guidelines:
• Enable the iSCSI Software Adapter on each ESXi host.
• Copy or write down the iSCSI qualif
in the P4000 Multi-Site User Guide. This allows vSphere 5 clusters to act exactly the same way they do when physically located in a single location. HP P4000 Multi-Site SANs require virtual IP addresses (VIPs) in unique subnets. When connecting ESXi hosts to a Multi-Site SAN, each of the virtual IP addresses (VIPs) of the SAN from each site should be listed in the Dynamic Discovery Send Targets list of the iSCSI Software Adapter. Path selection policy for Multi-Site SAN volumes should be set to Fixed (the default).

HP P4000 SmartClone volumes

HP P4000 SmartClone volumes may also be used in a vSphere 5 environment (Figure 17). Using HP P4000 SmartClone Technology, all the virtual machines stored on a single volume can be cloned instantly without replicating data. SmartClone volumes consume space for data changes only, from the time the SmartClone volume was created. SmartClone volumes are the best way to deploy small quantities of cloned golden image VMs or virtual desktops. For manageability and deployment of VDI environments, it is best to leverage VMware View implementations with Linked Clones; please refer to the HP VDI Reference Architectures.

Figure 17: Map View of HP P4000 SmartClone volumes in the CMC

SmartClone volumes can be used seamlessly with any other P4000 software feature, including snapshots or Remote Copy. SmartC
to map the VMware objects (VMs, datastores, RDMs) to the virtual disks on the P4000 SAN. Additionally, storage provisioning operations (create datastore, expand datastore, create VM from template, and clone VM) are also supported.

Figure 25: HP Insight Control for VMware vCenter

The VASA interface provides information about the SDRS management capability for LUNs or filesystems. A VASA Provider can indicate that the storage device supports SDRS migration for a particular LUN or filesystem. A Provider can also indicate whether VMDK file migration between
00 Centralized Management Console, iSCSI Sessions

Discovering additional volumes

Additional volumes may be dynamically discovered, if so configured, by simply choosing the Rescan All LUNs option from the vSphere Client. Static Discovery may also manually add each target for the iSCSI Software Adapter or HBAs.

Disconnecting iSCSI volumes from ESXi hosts

The vSphere 5 iSCSI Software Adapter does not have a session disconnect feature. Before an iSCSI volume can be disconnected, it must first be unassigned from the ESXi host within the CMC; otherwise, the next Rescan All LUNs will automatically connect to it again if Dynamic Discovery is configured. Unassigned volumes are not forcibly disconnected by the CMC; instead, the host simply will not be allowed to log in again. Rebooting an ESXi host will clear all unassigned volumes' iSCSI sessions. Individual iSCSI sessions can be reset forcefully from the P4000 CLI with the reset session command. Before an iSCSI session is forcibly reset from the CLI, all VMs accessing the volume through that session should be powered off or, using vMotion, reassigned to another host or moved to a different datastore that will continue to be accessible.

Troubleshooting volume connectivity

If new volumes are not showing up as expected, try the following troubleshooting steps:
• Ping the virtual IP address from the iSCSI initiator to ensure basic network
MDK virtual disk on VMFS-5 is 2TB minus 512 bytes.
• Conversion to VMFS-5 cannot be undone.

VMFS-5 New versus Upgraded Datastore Differences
• Upgraded volumes will continue to use the previous file block size, which may be larger than the unified block size of 1MB. Data movement (copy) operations between datastores with different block sizes do not leverage VAAI storage offload acceleration.
• Upgraded volumes will continue to use 64KB instead of 8KB subblock sizes.
• Upgraded volumes will continue to have a 30,720-file limit.
• Volumes upgraded to VMFS-5 will continue to use Master Boot Record (MBR) partition types until the volume is grown above 2TB minus 512 bytes, when it will automatically convert to GUID partition types (GPT) without impact to online VMs.
• Upgraded volumes will continue to have a partition starting on sector 128, versus newly created volumes starting on sector 2,048. This may impact the best practice disk alignment benefiting P4000 SAN performance.

Storage Distributed Resource Scheduler (Storage DRS)

Storage DRS is a new feature providing smart VM placement across storage, making load balancing decisions based upon current I/O performance and latency characteristics and space capacity assessments. Essentially, day-to-day operational effort is decreased, either by automatically implementing best practice placement or by recommending it, to configurable tolerance levels. A new datastore cluster object (Figure 22) aggregates a collection of da
Ms created with the eager zeroed thick disks option, which is not the default.
• Hardware assisted locking: provides a new, efficient mechanism to protect metadata for VMFS cluster file systems, improves scalability of large vSphere host farms sharing a datastore, and is more efficient than the previous SCSI reserve and release mechanism. iSCSI volumes are shared across multiple vSphere hosts in a cluster. Improved efficiency in sharing volumes across the cluster directly impacts the number of VMs a volume can hold, increases the size of the volume to support the increased VM count, and increases the number of hosts within a cluster sharing access to the same LUN.

Figure 20: VAAI performance improvements with HP P4000 SAN (time to complete, load on server, and load on SAN, with and without offload)
• Full copy: cloning a Microsoft Windows 2008 R2 40GB VM is 7x faster with VAAI offload, with 94% less load
• Block zero: zeroing a 256GB VMDK is 21x faster, with 92% less load
• Hardware Assisted Locking: 6x more VMs per volume/LUN

Figure 20 shows the level of improvements attributed to VMware's vSphere Storage APIs for Array Integration (VAAI) support with the P4000 SAN. Best practices previously suggested that up to sixteen
SANs have unique benefits, with internal no-latency iSCSI characteristics, versus this same SAN connecting to external devices. VM network traffic requirements differ in every application. These basic guidelines help you follow best practice for connecting P4000 SANs; however, the approach to bandwidth tuning depends on understanding the complete solution objectives and needs. HP BladeSystems with Flex-10 present tunable network interfaces as physical adapters within vSphere, thereby offering bandwidth allocation and enforcement opportunities beyond those available with vSphere alone. Management and deployment options will vary. In larger, multiple HP BladeSystem environments, HP CloudSystem Matrix offers additional orchestration opportunities to leverage network bandwidth automation. Note also that 10GbE interfaces provide increased bandwidth options over 1GbE.

Two network ports

VMware vSphere 5 hosts with only two Gigabit (1GbE) network ports are not ideal for iSCSI SANs connected by the iSCSI Software Adapter, because iSCSI must compete with other network traffic, making good iSCSI SAN performance harder to ensure. As shown in Figure 1, vSphere 5 hosts with only two Gigabit network ports are configured with a single standard virtual switch that comprises both Gigabit ports teamed together and contains:
• A Virtual Machine Port Group network for virtual machines
• A VMkernel Port network with vMotion, Management Traffic, and iSCSI Port Binding enabled

Four network po
Enabling Load Balancing for performance

Virtual IP load balancing (VIPLB), or load balancing, is a setting on each defined server in the CMC that allows for the distribution of iSCSI sessions throughout the P4000 cluster of storage nodes, in order to maximize performance and bandwidth utilization. Most iSCSI initiators, including the vSphere 5 iSCSI Software Adapter and HBAs, support this feature. Virtual IP load balancing is enabled by default in the CMC. All vSphere 5 iSCSI initiators connected to the P4000 SAN should have this feature enabled.

Discovery of the first iSCSI volume

To discover volumes, the iSCSI Software Adapter of each ESXi host must have the Virtual IP (VIP) address of the P4000 cluster containing the volume added to its Dynamic Discovery Send Targets list. New targets can then be discovered by simply choosing the Rescan All LUNs option from the vSphere Client. Alternatively, Static Discovery may also be manually entered for each target; however, this is generally not considered best management practice. The iSCSI session status of an ESXi host can be viewed in the CMC by selecting the volume and selecting the iSCSI Sessions tab. Volumes that are connected show an IP address in the Gateway Connection field on the server's Volumes & Snapshots tab (Figure 7). If an IP address is not listed, look for an incorrect configuration on the ESXi host(s) or on the server in the CMC.

Figure 7: HP P40
To enable this on a P4000 volume, assign multiple servers, one for each vSphere 5 server iSCSI Software Adapter, to the same volume. To use multiple servers, simply create one server for each vSphere 5 host that connects to the SAN, using the steps outlined earlier in the section Creating the first iSCSI volume on the SAN. When multiple servers are assigned to the same volume in the CMC, a warning indicates that this configuration is intended for clustered servers or clustered file systems such as ESXi (Figure 18).

Figure 18: Centralized Management Console warning when associating more than one server with a volume or snapshot; this configuration is intended for clustered servers or clustered file systems

P4000 SAN server iSCSI IQN authentication maps one vSphere 5 iSCSI Software Adapter to one or more volumes. P4000 SAN server CHAP authentication maps one or more vSphere 5 iSCSI Software Adapters to one or more volumes. CHAP authentication can also be used for increased security, either 1-way or 2-way. Each vSphere 5 host must be configured to use the correct CHAP credentials. New volumes on each vSphere 5 host must be discovered (Dynamic or Static Discovery) as described in the section Discovery of the first iSCSI volume.
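Where CHAP is used, each host's iSCSI Software Adapter must carry matching credentials. A minimal sketch from the ESXi shell follows; the adapter name, CHAP name, and secret are placeholders, and option spellings should be checked against esxcli iscsi adapter auth chap set --help on your build:

    # Require one-way (unidirectional) CHAP on the iSCSI software adapter
    esxcli iscsi adapter auth chap set --adapter=vmhba37 --direction=uni --level=required --authname=esx01-chap --secret=ExampleSecret123

    # Rediscover volumes after changing authentication
    esxcli storage core adapter rescan --all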
aracteristics defined as a tier can be requested in a defined VM storage profile. These profiles are then used during provisioning, cloning, and vMotion to ensure that storage stays in compliance. VM compliance status enables the checking of all VMs and disks in a single view. VM administrators who may not have access to all layers can now validate virtual machine compliance easily. Profile-driven storage also removes the need for maintaining complex mappings and validating compliance during every migration or creation of VMs or virtual disks.

HP Insight Control Storage Module for vCenter

The HP Insight Control for VMware vCenter server 6.3 Storage Module is a plug-in module to VMware's vCenter Server which enables VMware administrators to monitor and manage their HP servers, networks, and storage. Administrators may clearly view the relationships between virtual machines, datastores, and the storage arrays, and manage these arrays directly from the vCenter console. Instead of provisioning storage through the CMC, Insight Control supports creation of datastores and VMs from within the vCenter Server Console. Also note that version 6.3, now supporting VASA, installs the P4000 VASA provider (HPICSM) through the 6.3 installation process when registering the VASA Provider (Figure 26). HP Insight Control registers with vSphere and has the core framework and functionality to integrate the server, network, and storage environment. The HPICSM VASA provider allows users
atastore. This is usually best practice and ensures that datastore snapshots fully contain all VM data.
• VM Anti-Affinity: two specified VMs, including associated disks, are placed on different datastores. This is recommended when known high storage resource VMs should be separated to ensure the best VM SLAs.

vSphere Storage APIs for Storage Awareness (VASA)

vSphere Storage APIs for Storage Awareness (VASA) detect the capabilities of the storage array volumes and expose them to the vCenter environment, providing a foundation for quality-of-service-based profile management. VASA support is enabled for the P4000 SAN through the HP Insight Control for VMware vCenter Server version 6.3 software. Key capabilities integrated into the vCenter console with HP Insight Control integration with VASA:
• Combined physical and virtual view. From the vCenter console, monitor status and performance of VMs and the P4000 SAN.
• Integrated troubleshooting. Receive pre-failure and failure alerts from the P4000 SAN.
• Provision P4000 SAN storage. Add new datastores, delete or expand an existing datastore, create new VMs, or clone existing VMs.

Storage capability, RAID level, thin or thick LUN provisioning, and replication states are now all visible within vCenter. This enables profile-driven storage, allowing rapid and intelligent placement of VMs based upon a service level agreement (SLA), availability, or performance. With profile-driven storage, ch
Disconnecting iSCSI volumes from ESXi hosts 9
Troubleshooting volume connectivity 9
Creating a new datastore on the iSCSI volume 10
Expanding an iSCSI LUN in P4000 10
Expanding a volume and extending the datastore 10
Expanding an iSCSI LUN in the CMC 10
Expanding an iSCSI LUN in vSphere 10
Snapshots, Remote Copy, and HP SmartClone volumes 13
Resignaturing snapshots 13
HP P4000 Application Managed Snapshots in ESX 14
HP P4000 snapshots of raw devices 15
HP P4000 snapshots of VMFS datastores 15
HP P4000 Remote Copy volumes and SRM 15
HP P4000 Multi-Site SANs 16
HP P4000 SmartClone volumes 16
vMotion, Clustering, HA, DRS, and FT 16
Choosing datastores and volumes for virtual machines 17
VMware's vStorage API for Array Integration 17
vSphere 5 Storage Enhancements for P4000 19
VMFS-5 20
Storage Distributed Resource Scheduler (Storage DRS) 20
vSphere Storage APIs for Storage Awareness (VASA) 22
HP Insight Control Storage Module for vCenter 23
Storage I/O Control 24
Best practices 25
Frequently asked questions 26
For more information 28

Executive summary

This white paper provides detailed information on how to integrate VMware vSphere 5.0 with HP P4000 LeftHand SAN Solutions. VMware vSphere is an industry-leading virtualization platform and software cloud infrastructure, enabling critical business applications to run with assurance and agility by virtualizing server resources. Complementing this technology, the HP P4000 LeftHand SANs address the storage demands and cost pressures associated with server virtualization, data growth, and business continuit
connectivity. HBAs typically have their own ping utilities inside the BIOS of the HBA, so use the appropriate utility for your HBA. ESXi has a network troubleshooting utility that can be used from the ESXi Management Console, or remotely through SSH, to attempt a ping to the SAN.
• Double-check all IQNs or Challenge Handshake Authentication Protocol (CHAP) entries. For iSCSI authentication to work correctly, these must be exact. Simplifying the IQN to something shorter than the default can help with troubleshooting.
• Make sure that all hosts on the SAN have load balancing enabled in the CMC. If there is a mix, for example if some have it enabled and some do not, then hosts that do not have load balancing enabled may not connect to their volumes.

Creating a new datastore on the iSCSI volume

After the vSphere 5 host has an iSCSI P4000 SAN volume connected, it can be formatted as a new VMware vStorage Virtual Machine File System (VMFS) datastore or mounted as a raw device mapping (RDM) directly to virtual machines. New datastores are formatted from within the VMware vSphere client in the new VMFS-5 format. An example of the process for creating a new datastore is shown in Figure 8.

Figure 8: Creating a new datastore. Select the iSCSI Disk; note that the HP P4000 SAN volume will be identified as a LeftHand iSCSI Disk.

Expanding an iSCSI LUN in P4000

Expanding a volume and extending the datastore

Both
vSphere 5 Storage Enhancements for P4000

VMware vSphere 5 offers many new capabilities, improving and extending the storage-specific benefits of HP P4000 SAN integration beyond vSphere 4.1. These new features and enhancements provide increased performance optimization and easier provisioning, monitoring, and troubleshooting, thereby improving utilization and operational efficiency. The major vSphere 5 storage enhancements impacting the P4000 SAN with SAN/iQ 9.5 are presented here.

VMFS-5

vSphere VMFS-5 introduces core architectural changes enabling greater scalability and performance. Major features introduced include 64TB device support, unified block size, and improved subblocking. The unified block size is now 1MB. Note that upgrading VMFS-3 to VMFS-5 is non-disruptive and immediate; however, volumes will still retain their original block size. Therefore, modifying the block size to fully realize the new VMFS-5 benefits will require a reformat of the volume. This is particularly problematic if the data needs to be retained or if VMs depending upon the volume need to stay online. It is therefore recommended to create new volumes from the P4000 SAN as purposed upgrade replacements; Storage vMotion can then be leveraged to move current VMs from VMFS-3 to VMFS-5 volumes. Additional increased efficiency is achieved with subblocking, reducing the storage overhead associated with smaller files. Small files need not occupy an entire 1MB unified b
e 13: vSphere Client now reflects the change made to the iSCSI volume

Snapshots, Remote Copy, and HP SmartClone volumes

Resignaturing snapshots

In order for vSphere 5 hosts to utilize SAN-based snapshots or clones, a snapshot or a clone is taken of a volume and presented to the vSphere hosts. The process of presenting the snapshot or cloned volume is similar to adding a new iSCSI volume datastore. vSphere 5 supports three options to mount a new SAN-based snapshot or cloned volume that has VMFS data: keep the existing signature, assign a new signature, or format the disk (Figure 14).
• Keep the existing signature allows you to mount the VMFS volume without changing the signature.
• Assign a new signature allows you to retain the existing data and mount the VMFS volume present on the disk. The signature will change and uniquely identify the volume without impacting data within the VMFS file system.
• Format this disk will destroy all data on the volume.

Keeping the existing signature during a mount may generate an error: HostStorageSystem.ResolveMultipleUnresolvedVmfsVolu
e 5 environments can be protected by Remote Copy volumes on a scheduled basis, and automated by VMware Site Recovery Manager for a simple and complete disaster recovery solution. HP provides a Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM) to integrate Remote Copy volumes seamlessly with a vSphere 5 environment. For more information on Remote Copy volumes, review the Remote Copy User Manual installed with the P4000 CMC. vSphere 5 and VMware Site Recovery Manager 5 require SAN/iQ 9.5's Application Integration Solution Pack. The option SRA for VMware SRM 5 must be selected for vSphere 5 and Site Recovery Manager 5 support; the previous version supported SRM 4.1 in a single installer option. SAN/iQ 9.5 supports SRM 1.x, 4.x, or SRM 5 (Figure 16). A best practice is to ensure that SAN/iQ hardware and software are always up to date. Note that automated failback is now supported as part of the SRM 5 features.

HP P4000 Multi-Site SANs

HP P4000 Multi-Site SANs enable vSphere 5 clusters to be stretched across locations to provide multi-site VM vMotion, VM High Availability (HA), Distributed Resource Scheduler (DRS) including new extensions to storage, and Fault Tolerance (FT). Multi-Site SAN configurations use synchronous replication in the underlying SAN to create a single SAN that spans both locations. Ensure that the Multi-Site SAN has adequate bandwidth and latency for storage data and the required FT logging, as demonstrated
e host uses an automatic path selection algorithm, rotating through all available paths. This implements load balancing across all of the available physical paths. Load balancing spreads host I/O requests across all available host paths, with the goal of optimizing performance and storage throughput. It is important to note that native vSphere 5 multi-pathing should not be used with HP P4000 Multi-Site SAN configurations that utilize more than one subnet and iSCSI Virtual IP.

Connecting and using iSCSI volumes

Creating the first iSCSI volume on the SAN

Before a vSphere 5 host can access a new iSCSI datastore, that volume must be created on the P4000 SAN, and the volume must have authentication configured to allow the host to access it. All hosts within a datacenter cluster must also have access to that same volume in order to support vMotion, High Availability, and Fault Tolerance. Use the P4000 Centralized Management Console (CMC), as shown in Figure 6, or use the command line interface (CLI), to create a new volume. Then, using the CMC, create a new server to represent the ESXi host's iSCSI Software Adapter, using the IQN(s) copied from the vSphere host's initiator(s), or use the Challenge Handshake Authentication Protocol (CHAP) as required. SAN volumes are created and then assigned to vSphere 5 hosts within the CMC or CLI.

Figure 6: New Volume dialog in the CMC (Volume Name: ESX-Volume; Cluster Available Space: 3.74755 TB)
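For scripted provisioning, the CMC steps above have CLIQ command-line equivalents. Below is a minimal sketch with placeholder names, addresses, and credentials; command and parameter names should be verified against the SAN/iQ CLI documentation for your release:

    # Create a 1TB volume on the cluster
    cliq createVolume volumeName=ESX-Vol1 clusterName=Cluster1 size=1TB login=10.0.0.20 userName=admin passWord=secret

    # Assign the volume to the server entry representing the ESXi host's IQN
    cliq assignVolumeToServer volumeName=ESX-Vol1 serverName=esx01 login=10.0.0.20 userName=admin passWord=secret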
e iSCSI Software Adapter is a good choice. The impact of iSCSI processing on modern processors is minimal, so with either adapter, performance will be related primarily to the quality of the physical network, disk quantity, and disk rotation speed of the attached SAN. In addition, most enterprise network interfaces offer offloading of CRC checksums, further improving the comparison between network interfaces and dedicated iSCSI HBAs. Most vSphere deployments on P4000 SANs are done with the iSCSI Software Adapter included with vSphere.

What initiator should I use for additional raw device mappings or virtual disks? Aside from the boot LUN volume, additional volumes should be used for storing application data. Specifically, as a best practice, the database and log volumes of many applications require separate volumes. These should be presented as either raw devices (RDMs) through your chosen vSphere 5 host adapter, or connected as iSCSI disks directly through the software initiator of the virtual machine's guest operating system. Using RDMs or direct iSCSI allows these application volumes to be transported seamlessly between physical and virtual servers, as they are formatted in the native file system (NTFS, EXT3, etc.) of the guest VM OS. To use the guest operating system initiator successfully, follow these guidelines:
• Make sure the guest operating system initiator is supported by P4000 SAN technology. Please consult the HP Single Point of Connectivity Knowledge
e volume at the time it was taken. As an extension to this feature, the P4000 SAN allows read-only snapshots to be mounted and written to in a temporary area. This data area is called temporary space and contains delta writes from the original snapshot data. By assigning a new signature in vSphere 5, we are writing data to this temporary space. From the P4000 SAN's perspective, temporary space will be lost if the SAN is rebooted, but it retains the same high availability characteristics as the source volume. If data written to the temporary space along with the snapshot needs to be preserved in a new volume, the CMC may be used to create a new volume copy with this temporary delta data merged with the source snapshot. In order to preserve data on the volume prior to conversion, VMs or virtual disks depending on this volume must be stopped, suspended, or disconnected.

Alternatively, selecting the Delete Temp Space option will forcibly remove this data from the P4000 SAN. Once temp space is deleted from the SAN, it may not be recovered. It is a best practice to unmount the snapshot prior to deleting temp space. If you wish to remount the snapshot again, you can issue the Rescan All command in the vSphere Client prior to mounting the snapshot again. The CMC supports the Convert Temp Space option to add this data to a new volume on the P4000 SAN. Once the new volume is created with a new name, it may be presented as a new datastore with the same Re
ectly. Also, you should confirm all network configurations. Verify on the iSCSI SAN that IP addresses can be pinged. Please refer to Discovery of the first iSCSI volume.

Is Virtual IP load balancing supported in vSphere 5? The vSphere 5 Software Adapter and hardware adapters are supported by P4000 Virtual IP Load Balancing. As a best practice, load balancing should be enabled in the P4000 CMC on the servers defined for assigning authentication of all vSphere 5 hosts.

What initiator should I use for a virtual machine boot volume (LUN)? There are many ways to connect and present an iSCSI volume to a virtual machine on a vSphere 5 host. These include using the vSphere 5 iSCSI Software Adapter, a hardware adapter (HBA), or the software iSCSI initiator provided by some guest operating systems. For VMFS datastores containing a virtual machine's definitions and virtual disk files, you need to use the vSphere host's hardware or software adapter. You may use either adapter; both give you full vSphere 5 functionality, and both are supported equally. The HBA adapter has the advantages of supporting boot from SAN (prior to ESXi) and offloading iSCSI processing from the vSphere 5 host. Note that although ESXi supports iBFT, it is not currently supported or listed in HP's Single Point of Connectivity Knowledge base or the P4000 Compatibility Matrix. VMware's Auto Deploy feature may also be another stateless booting mechanism. If boot from SAN is not necessary, then th
he iSCSI Software Adapter. The improvement over four ports is achieved by separating vMotion and Fault Tolerance traffic from iSCSI traffic so that they do not have to share bandwidth; each traffic type performs better in this environment. To configure vSphere 5 hosts with six Gigabit network ports, use three standard virtual switches, each comprising two Gigabit ports teamed together, as shown in Figure 5. If possible, one port from each of the separate Gigabit adapters should be used in each team, to prevent some bus or network interface card failures from affecting an entire standard virtual switch.
• The first standard virtual switch should have: a VMkernel Port network with iSCSI Port Binding enabled.
• The second standard virtual switch should have: a VMkernel Port network with vMotion and Management Traffic enabled.
• The third standard virtual switch should have: a Virtual Machine Port Group network for virtual machines.

Figure 5: Three Standard Switches, Six Network Ports

More than six network ports

If more than six network ports are available, add more ports to the iSCSI virtual switch to increase available bandwidth, or add more ports for any other network services desired.

10GbE options

Both vSphere 5
ied name (IQN) that identifies each vSphere 5 host; it will be needed later for authentication, when defining volume access while configuring the P4000 SAN.

HBA connectivity and networking

An iSCSI host bus adapter uses dedicated hardware to connect to a 1GbE or 10GbE network, mitigating the overhead of iSCSI and TCP processing and of Ethernet interrupts, and thereby improving the performance of servers. iSCSI HBAs can include PCI option ROM to allow booting from an iSCSI target. SAN connectivity via iSCSI HBAs therefore enables both offloading of the iSCSI processing from the vSphere 5 server and another method of booting ESXi from the iSCSI SAN. HBAs do not require licensing or special networking within vSphere 5 hosts, as they provide a dedicated network connection for iSCSI traffic only and are seen within the host similarly to FC host adapters. The physical network for HBAs may be a 1GbE or 10GbE network dedicated to the SAN, just as it is for iSCSI Software Adapters. As a best practice, use two HBA initiators (a dual port or two single ports), each configured with a path to all iSCSI targets for failover. Configuring multiple HBA initiators to connect to the same target requires authentication for each initiator's IQN to be configured on the SAN. This is configured in the P4000 Centralized Management Console (CMC) software as two servers, one for each HBA initiator, each with authentication permissions to the same volumes on the SAN. Note that a group of hosts in the sa
Expanding an iSCSI LUN in vSphere
1. Highlight the ESXi host where the iSCSI LUN is presented.
2. Click on the Configuration tab.
3. Highlight the iSCSI adapter. The LUN is still showing 1TB (Figure 11).
4. Click on Rescan All and pick both Scan for New Storage Devices and Scan for New VMFS Volumes (Figure 12).
5. The new size is reported immediately in the vSphere Client (Figure 13).

Figure 11: vSphere Client shows the iSCSI volume still as 1TB prior to Rescan All

Figure 12: Rescan for the change in the iSCSI LUN

Figur
lock, as subblocks of 8KB and 1KB are now possible. Pass-through RDMs larger than 2TB are now also supported. Note that VMFS-5 has finer-grained SCSI locking controls, supporting the VAAI atomic test and set (ATS) primitive. In order to take full advantage of this accelerated P4000 feature, found in SAN/iQ 9.0 and above, ensure that the VMFS volumes are using the VMFS 5.0 format. VAAI ATS improves scalability and performance. The lock manager for VMFS-5 scales linearly with the number of vSphere hosts in the cluster resource pool, which is particularly important with the P4000. This impacts scalability of storage resources in the same manner, fully realizing the benefits of the scalable clustered storage architecture at the core of the P4000 SAN. Please refer to vSphere Storage APIs for Array Integration. Improvements to the lock manager have also shown up to a 40% improvement during concurrent updates of shared resources in a cluster. This impacts the VMFS-5 clustered file system not only when sharing volumes, but also when performing metadata state updates to the volume. This efficiency is seen during volume creation, vMotion, VM creation and deletion, VM snapshot creation and deletion, and VM power on and off transition states.

VMFS-5 New versus Upgraded Datastore Similarities
• The maximum size of a datastore device, for either VMFS-5 or physical compatibility mode RDMs, is 64TB.
• The maximum size of virtual compatibility mode RDMs on VMFS-5 is 2TB minus 512 bytes.
• The maximum size of a V
lone volumes are also very useful for performing tests on virtual machines, by quickly reproducing them without actually using space on the SAN to copy them. Unlike snapshots, delta data is persisted between each source snapshot; please see the section on Resignaturing snapshots. Note that every write, including a delete, is a delta block persisted in the SmartClone delta space. If long-term space efficiency is required with SmartClone volumes, minimize writes to the SmartClone datastores; this includes avoiding defragmentation within the guest VM. Successful approaches have also included separation of User and Application data, including file redirection. Space reclamation can only be performed with a SmartClone volume by creating a new one, retaining the original small source reference with minimal delta data. By removing OS dependence, in separation from the User and Application data, periodic SmartClone deletion and re-creation ensures that delta data growth is minimized. Without these approaches, SmartClone volumes may eventually occupy an entire SmartClone volume's space as delta change data. SmartClone's initial value is in immediate cloning of golden image volumes; efficient space utilization requires understanding these usage and mitigation approaches to maximize success.

vMotion, Clustering, HA, DRS, and FT

The advanced vSphere 5 features of vMotion, Clustering, HA, DRS, and FT all require multiple vSphere 5 hosts to have simultaneous access to volumes.
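Because these features depend on every host seeing the same storage, it is worth confirming volume visibility from each host's shell. A short sketch follows; the adapter name is a placeholder:

    # Run on each ESXi host: all hosts should report the same VMFS UUID for the shared datastore
    esxcli storage filesystem list

    # Confirm an active iSCSI session from this host to the P4000 SAN
    esxcli iscsi session list --adapter=vmhba37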
me cluster will also require authentication for each volume. Please go to HP's Single Point of Connectivity Knowledge base for tested and supported boot-from-SAN iSCSI HBAs with P4000 SANs: http://www.hp.com/storage/spock. Note that most 1GbE enterprise network adapters support CRC checksum offload efficiencies, making server performance improvements from HBAs negligible. However, 10GbE has shown the importance of TCP processing offloads in adapters to lessen server impact. Some 10GbE adapters have also introduced protocol convergence of 10GbE Ethernet, offloading TCP and iSCSI processing, BIOS boot support, and iSCSI protocol support, some even with multiple protocols, providing new, compelling reasons to leverage these advantages.

Multi-pathing

Native round robin iSCSI multi-pathing in vSphere 5 provides superior performance by leveraging multiple paths to the SAN. Each path to the P4000 SAN will establish a unique iSCSI session, and sessions may be distributed over different storage nodes in the P4000 cluster. Since P4000 SANs scale by leveraging multiple storage nodes' compute, disk, and network resources, iSCSI multi-pathing's advantage is seen in its multi-threaded data approach, up to the maximum throughput sizing capability of a P4000 SAN. Configuring iSCSI multi-pathing requires at least two network ports on the virtual switch. The following steps are performed on each ESXi host individually (a command-line sketch of the equivalent configuration follows below):
• From the host's Configuration > Networking
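A command-line sketch of the same port binding, assuming two VMkernel ports (vmk1 and vmk2) on port groups iSCSI-1 and iSCSI-2 with uplinks vmnic2 and vmnic3; all names are placeholders:

    # Pin each iSCSI port group to exactly one active uplink
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

    # Bind both VMkernel ports to the iSCSI software adapter
    esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk2

With both portals bound, each volume presents one path per VMkernel port, which the Round Robin path selection policy can then spread across the SAN.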
mes. In this example, a VMFS datastore can be mounted only if it does not conflict with an already mounted VMFS datastore that has the same UUID (VMFS signature). If the original LUN in a vSphere datacenter that contains the original VMFS datastore is currently mounted at the same time as its snapshot LUN, force-mounting the VMFS datastore from the snapshot LUN on this same ESX host is not allowed. In this case, the only way to mount that VMFS datastore from the snapshot LUN is to assign a new signature and choose a new label for that datastore. In assigning a new signature, references to the existing signature from virtual machine configuration files will need to be updated. Note that in the Ready to Complete summary, the original UUID signature is provided along with Assign a new UUID. The event task is: resignature unresolved VMFS volume. With a successful mount, the datastore mounts with a generated identification of snap-XXXXXXXX-<Original label>. On volume properties, select Rename to provide a more suitable volume identification name.

Figure 14: Add Storage wizard when mounting a volume clone or snapshot

Note that P4000 snapshots are read-only, persistent points in time representing data on the sourc
multiple HP P4000 SAN clusters (SATA, MDL SAS, or SAS based) and exposing different LUNs with different characteristics, such as volumes with different Network RAID levels, datastore clusters can simplify management for new VM placement decisions, on new or current VMs, including during HA movements.

Figure 24: SDRS Runtime rules for Datastore Clusters

Ongoing balancing recommendations are made when a datastore in a Datastore Cluster exceeds user-configurable space utilization or I/O latency thresholds. These thresholds are defined during the Datastore Cluster configuration (Figure 24). I/O load is evaluated every 8 hours by default, which is also modifiable. When the configured maximum space utilization or I/O latency threshold is exceeded, Storage DRS calculates all possible moves to balance VMs accordingly, with a cost-benefit analysis. If the Datastore Cluster is configured for full automation, VMs will move and rebalance accordingly, if possible. Storage DRS affinity rules enable control over which virtual disks should or should not be placed on the same datastore within a datastore cluster:
• VMDK Anti-Affinity: virtual disks of a VM with multiple virtual disks are placed on different datastores. This maximizes multiple-datastore throughput over unique virtual disks, but it will generally complicate P4000 SAN-based snapshots and restoration.
• VMDK Affinity: virtual disks are kept together on the same d
nabled by default in P4000 SANs.
• Virtual machines that can be backed up and restored together should share the same datastore volume.
• HP recommends using HP network cards that are capable of offloading TCP/IP in iSCSI networks. Network cards capable of offloading processes from the CPU to the network card ASIC enhance the performance of ESX servers and balance the processing in the solution. For a list of vSphere 5 supported network interfaces, refer to the VMware Compatibility Guide for network I/O devices: http://www.vmware.com/resources/compatibility
• Because P4000 snapshots, Remote Copy volumes, and HP SmartClone volumes work on a per-volume basis, it is best to group virtual machines on volumes based on their backup and restore relationships. For example, a test environment made up of a domain controller and a few application servers would be a good candidate for being put on the same volume. These elements could be snapshot, cloned, and restored as one unit.
• HP recommends synchronizing P4000 nodes and vSphere hosts with the same time server. Time-synchronizing your environment enhances your ability to troubleshoot problems.
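As a sketch of the time synchronization recommendation, the vSphere CLI's vicfg-ntp utility can point each host at the same NTP server configured on the P4000 nodes; the host and NTP server names are placeholders, and flags should be confirmed with vicfg-ntp --help:

    # From a vSphere CLI workstation
    vicfg-ntp --server esx01.example.com --username root --add ntp1.example.com
    vicfg-ntp --server esx01.example.com --username root --start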
39. ncnelaseeacuecih aceon 16 Choosing datastores and volumes for virtual machines cccccccce eee esseseeeeeeeeeeees 17 VMware s vStorage API for Array Integration cccccsccccccccceceeeeesasseeeeeeeeeeeeeeees 17 vSphere 5 Storage Enhancements for PAQOO cseeeseeccecceeeeeeeeseeeeeeeeeeeeeeees 19 alee PETEERE EEE AET ee ee ee eee 20 Storage Distributed Resource Scheduler Storage DRS ccccccceessseeeeeeeeeeeaaeees 20 vSphere Storage APIs for Storage Awareness VASA cccccccccceeeeeesseeeeeeeeeeeeeees 22 HP Insight Control Storage Module for vCenter cccccccccceesssssseeeeeeeeeeeeeeeeaaaaas 23 Storage I O Control Storage I O Control 0 0 ccccecccccceeessseeceeeceeeeeeeeeeeeessaaeeees 24 beprei E 25 Frequently asked QUESTIONS crire airi iaeiaiai aiaiai 26 For more information c cccccccccecececccccccceucecucecuceceucucucucceueueeeucucucecucenensucuscanens 28 vmware Executive summary This white paper provides detailed information on how to integrate VMware vSphere 5 0 with HP P4000 LeftHand SAN Solutions VMware vSphere is an industry leading virtualization platform and software cloud infrastructure enabling critical business applications to run with assurance and agility by virtualizing server resources Complementing this technology the HP P4000 LeftHand SANs address the storage demands and cost pressures associated with server virtualization data growth and business continuit
ne adapter while sending out through both. The best bonding option is to leverage Link Aggregation Control Protocol (LACP, 802.3ad) if the switching infrastructure supports it. From the storage node, LACP bonding supports both sending and receiving data from both adapters. Network teams on vSphere 5 servers are configured at the virtual switch level. As an added improvement over virtual switch bonding, enable native MPIO round robin for iSCSI. Network teams should show Active I/O across both interfaces: Properties on the vSwitch will show both adapters as active, and Manage Paths on the datastore will also verify Active I/O on all paths.
• The VMkernel network for iSCSI should be separate from the management and virtual networks used by virtual machines. If enough networks are available, vMotion and FT should also use a separate network. Separating networks by functionality (iSCSI, vMotion, FT, and virtual machines) provides higher reliability and performance of those functions.
• For improved performance, enable VIP load balancing for iSCSI in the CMC.
• Hardware-based flow control is recommended for P4000 SANs on all network interfaces and switch interfaces.
• The load balancing feature of the P4000 software allows iSCSI connections during volume login to be redirected to the cluster's least busy storage node. This keeps the load on storage nodes throughout the cluster as even as possible and improves overall performance of the SAN. This setting is e
As a best performance practice, the VMkernel network for iSCSI might be separate from the host management and virtual networks used by virtual machines; however, high availability is usually preferred with a minimal number of host networks. If enough networks are available to provide network redundancy, it is preferred to separate vSphere vMotion and Fault Tolerance (FT) networks on unique virtual switches. These networks are placed on separate vSphere standard switches within each host, or on distributed switches, which handle networking traffic for all associated hosts in the datacenter. Port groups, defined with unique network labels, also limit network broadcast domains across multiple hosts. In a vSphere distributed switch with Network I/O Control enabled, distributed switch traffic is divided across predefined network resource pools: VMware Fault Tolerance traffic, iSCSI traffic, vMotion traffic, management traffic, NFS traffic, and virtual machine traffic. Note the similarities with standard switches and manual segregation; however, Network I/O Control enforces these rules across a datacenter, thereby easing policy enforcement and management. The resources that you can deploy vary based upon the type and quantity of resources available. Larger, performance-scalable P4000 SANs will have different requirements than smaller P4000 clusters. HP BladeSystem c7000 solutions with local P4800
Initial iSCSI setup of vSphere 5

Networking for the iSCSI Software Adapter
Before SAN connectivity via the vSphere 5 iSCSI software adapter is enabled, specific network configuration is recommended. To correctly enable connectivity, a VMkernel network is created with sufficient network access to the iSCSI SAN. As a best practice, use at least two 1GbE or 10GbE network adapters teamed together for performance and failover (Figure 1). Network teams are created by assigning multiple network adapters to a standard switch, under Configuration > Hardware > Networking (Figure 2). For the iSCSI Software Adapter, Round Robin path selection on active-active I/O paths is optimal (Figure 3).

Figure 1: Standard virtual switch with two physical adapters
Figure 2: iSCSI VMkernel properties, NIC teaming
Figure 3: Manage Paths, Active (I/O) on all paths
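Equivalently, the software adapter can be enabled and its VMkernel port binding configured from the ESXi 5.x shell. This is a minimal sketch; the adapter name vmhba33, the interfaces vmk1/vmk2, and the discovery address 10.0.0.10 are illustrative and must match the host's actual inventory and the P4000 cluster virtual IP:

# Enable the software iSCSI adapter
esxcli iscsi software set --enabled true

# Identify the software iSCSI adapter name (typically vmhba33 or higher; varies per host)
esxcli iscsi adapter list

# Bind the iSCSI VMkernel interfaces to the adapter (iSCSI port binding)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Add the P4000 cluster virtual IP as a Send Targets discovery address
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.0.10:3260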
Four network ports
VMware vSphere 5 hosts with four 1GbE network ports perform better if you separate virtual machine traffic from VMkernel (vMotion, Management, and iSCSI) traffic. As illustrated in Figure 4, vSphere 5 hosts with four Gigabit network ports are configured with two virtual switches, each comprising two Gigabit ports teamed together. If hosts contain multiple dual-port network adapters, use one port from each of the two dual-port adapters, balancing high availability across both adapters. For example, if using two onboard Gigabit adapters and a dual-port Ethernet card, team together port 0 from the onboard adapter and port 0 from the Ethernet card, and then team together port 1 from the onboard adapter and port 1 from the Ethernet card. This configuration provides protection from some bus or network interface card failures. A command-line sketch of this layout appears below.
• The first standard virtual switch should have a VMkernel port network with iSCSI port binding, vMotion, and Management traffic enabled.
• The second standard virtual switch should have a Virtual Machine port group network for virtual machines.

Figure 4: Two standard switches, four network ports

Six network ports
VMware vSphere 5 hosts with six Gigabit network ports are ideal for delivering performance with t…
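Returning to the four-port layout above, the second standard switch can be created from the ESXi 5.x shell as a minimal sketch. The vSwitch name, the uplinks vmnic1/vmnic3 (one port from each dual-port adapter), and the port group name are illustrative:

# Create the second standard switch for virtual machine traffic
esxcli network vswitch standard add --vswitch-name vSwitch1

# Team one port from each dual-port adapter (names are illustrative; check the host's NIC inventory)
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic3

# Add the virtual machine port group to the new switch
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name "VM Network"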
…volumes are available as shared raw LUNs, without losing data, across multiple vSphere hosts accessing the same LUN.
• Snapshots: VM snapshots are possible on RDM LUNs in virtual compatibility mode. Combined with SAN iQ 9.5 Application Managed snapshots, raw devices may be quiesced in the context of their VMs. Note that this is NOT supported with physical compatibility mode RDMs. A command-line sketch for creating RDM mapping files appears below.

HP P4000 snapshots on VMFS datastores
P4000 SAN snapshots are very useful in a vSphere 5 environment. All virtual machines stored on a single volume can be snapshot and rolled back at any time. Moreover, P4000 snapshots can be mounted to any vSphere 5 host without interrupting access to their source volume. This may be used for data mining or testing real-world data without affecting live corporate data (Figure 16).

Figure 16: HP P4000 Application Integration Solution Pack, SAN iQ 9.5 required for vSphere 5 (includes the Application Aware Snapshot Manager and the HP P4000 Storage Replication Adapter (SRA) for VMware)

HP P4000 Remote Copy volumes and SRM
Remote Copy replicates P4000 snapshots over WAN links to remote sites for disaster recovery or backup. vSpher…
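As referenced in the RDM bullet above, mapping files for raw LUNs are created with vmkfstools. This is a minimal sketch; the naa.* device identifier and the datastore paths are illustrative:

# Virtual compatibility mode RDM (-r): VM snapshots are possible on this mapping
vmkfstools -r /vmfs/devices/disks/naa.6000eb3xxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk

# Physical compatibility mode RDM (-z): passes SCSI commands through to the LUN,
# but VM snapshots are NOT supported in this mode
vmkfstools -z /vmfs/devices/disks/naa.6000eb3xxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1-rdmp.vmdk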
• From the host's Configuration > Networking, select Properties on the standard virtual switch. Select the Network Adapters tab and add the appropriate number of adapters.
• Select the Ports tab and Edit or Add the VMkernel. On the Properties NIC Teaming tab, select Load Balancing and any other appropriate policy exceptions.
• From the host's Configuration > Storage, highlight the iSCSI datastore. Right-click, select Properties, and then select the Manage Paths option. Ensure the Path Selection policy is Round Robin (VMware), using the Change option if necessary. Note the multiple valid paths listed and the Status of Active (I/O) for each path.
• From the host's Configuration > Storage, highlight the iSCSI datastore. The Datastore Details Total Paths value should accurately reflect the number of valid paths to the storage volume.
Once configured correctly, perform a Rescan All, scanning for new storage devices and new VMFS volumes; a sketch of the equivalent shell commands follows. An iSCSI session is connected for each path bound to the iSCSI Software Adapter. The example referred to in Figure 3 gives each iSCSI LUN (datastore) two iSCSI paths using two separate physical network adapters. To achieve load balancing across the multiple paths, datastores are configured with a path selection policy of Round Robin. This can be done manually for each datastore in the vSphere client. A reboot of the host is required for storage devices currently managed by the host; however, new storage devices do not require a host reboot. The round robin policy ensures that the host distributes I/O across all active paths.
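The rescan and path check can also be performed from the ESXi 5.x shell, as a minimal sketch:

# Rescan all adapters for new storage devices
esxcli storage core adapter rescan --all

# Rescan for new VMFS volumes
vmkfstools -V

# List devices with their path counts and current path selection policy
esxcli storage nmp device list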
…resignaturing; the snapshot rules above apply. Snapshots offer excellent ways of promoting data recovery (Figure 15).

Figure 15: P4000 Centralized Management Console, Convert Temp Space

HP P4000 Application Managed Snapshots in ESX
New to HP P4000 SANs, SAN iQ 9.5 expands Application Integration functionality to include VMware integrated snapshots. The new Application Aware Snapshot Manager enables VMware-integrated, SAN-based volume snapshots, creating application-consistent, point-in-time copies for reliable recovery of VM states. Without this integration, snapshots previously captured the volume as-is from the P4000 SAN's point of view: in-flight, cached data may not have been fully quiesced, leaving only a crash-consistent recovery point. Now full data recovery does not depend upon a paused or stopped VM state with data flushed to the SAN.
…a datastore cluster aggregates datastores into a single object (Figure 22).

Figure 22: New Datastore Cluster object

Storage DRS is managed in the same manner as a compute resource cluster. This consistently enables the smartest and best placement for new VMs and virtual disks, as well as ongoing evaluation, thereby enforcing or suggesting best-practice load balancing based upon existing workloads. Note that the configuration may be automatic or manual (Figure 23).

Figure 23: SDRS Automation Level

Grouping datastores allows for more consistent, higher-level abstraction and less dependency on intimate knowledge of the individual HP P4000 SAN datastore volumes. Storage DRS thresholds govern and trigger the rules for enabling expert action (Figure 18). Storage tiering is the process of moving data to different types of storage based upon the value of the data. This tiering can be based upon performance, capacity, high availability, or data segregation requirements. By leveraging…
…new VASA storage awareness and discovery enhances performance and brings array information into DRS and Profile-Driven Storage features.
• High Availability (HA): architecture updated to simplify deployment and increase scalability.

The HP P4000 SAN Solution builds upon the features found in SAN iQ 9.0 with version 9.5:
• VAAI feature extension support, introduced with vSphere 4.1, improves storage efficiency and performance. Note that the newer VAAI features and extensions introduced in vSphere 5.0 are not supported in SAN iQ 9.5.
• SAN iQ 9.5 expands application integration with VMware integrated snapshots, supporting application-consistent, point-in-time copies for speedy and reliable VM recovery.
• SAN iQ 9.5 features zero-to-VSA automated installation for easy deployment.
• SAN iQ 9.5 supports the new Storage Replication Adapter (SRA) for VMware vCenter Site Recovery Manager 5 (SRM).
• Network RAID 5 and 6 dynamic stripe size improves efficiency and is applied to the entire volume stack; scheduled snapshots for space efficiency are no longer required.

Successfully addressing and deploying these new features across the solution is imperative if you wish to maximize the return on investment (ROI) of your HP P4000 SAN and VMware vSphere infrastructure while continuing to meet the dynamic needs of the business. To help you successfully deploy these new features and maximize the performance and scalability of your vSphere environment, this paper presents best practices for configuring, tuning, and deploying your environment.
VMware vSphere Storage APIs for Array Integration (VAAI) enable the P4000 SAN running SAN iQ 9.5 to offload specific storage operations from the vSphere 5 hosts. These functions increase performance and efficiency for each vSphere host by moving storage processing to the P4000 SAN, where it is handled most efficiently. Originally introduced in vSphere 4.1, these features were further extended in vSphere 5.0; P4000 SANs support only the original functional primitives at this time, in SAN iQ 9.0 and 9.5 (Figure 19).

Figure 19: vStorage API for Array Integration (VAAI) framework (Storage vMotion and provisioning VMs from template, against a shared VMFS storage pool on the disk array)

Currently supported VAAI offloading capabilities:
• Full copy: enables the P4000 SAN to make full copies of data within the array. Previously, data movement would traverse from the SAN to the vSphere host (occupying bandwidth and utilizing CPU resources to process the data movement) and back again to the SAN.
• Block zeroing: enables the P4000 SAN to zero out large numbers of blocks with a single command. Previously, repetitive zeroes would traverse from the vSphere host to the SAN, occupying bandwidth and utilizing CPU resources to process the data movement. This feature only speeds up the provisioning of VMs created with eager-zeroed thick disks.
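Whether hardware acceleration is active for a given device can be checked from the ESXi 5.x shell, and the block zeroing offload is exercised when provisioning an eager-zeroed thick disk. This is a minimal sketch; the naa.* device identifier, the 40G size, and the datastore path are illustrative:

# Show VAAI primitive status (ATS, Clone, Zero) for a device
esxcli storage core device vaai status get -d naa.6000eb3xxxxxxxxx

# Create a 40GB eager-zeroed thick disk; with VAAI, the zeroing work is offloaded to the P4000 SAN
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1-data.vmdk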
P4000 SANs scale capacity and performance linearly without incurring downtime, enabling them to meet the requirements of small pay-as-you-grow customers as well as the mission-critical applications of an enterprise. This document presents configuration guidelines, best practices, and answers to frequently asked questions that will help you accelerate a successful deployment of VMware vSphere 5.0 on HP P4000 SAN Solutions.

Target audience
VMware and SAN administrators who are implementing VMware vSphere 5.0 on an HP P4000 LeftHand SAN Solution.

New Feature Challenges
With vSphere 5.0, VMware continues to increase both its infrastructure and application services:
• VMware ESXi converges on a thinner hypervisor architecture, supporting a hardened 144MB disk footprint and a streamlined patching and configuration model.
• The new virtual machine (VM) virtual hardware version 8 format adds graphics and USB 3.0 support; VMs can now have 32 virtual CPUs and 1TB of RAM.
• Storage Distributed Resource Scheduler (SDRS) aggregates storage into pools, simplifying scale management, ensuring best VM load balancing, and avoiding storage resource bottlenecks.
• Profile-Driven Storage matches appropriate storage to given VMs, ensuring correct placement.
• VMFS-5 enhances scalability and increases performance, supporting larger 64TB datastores, a unified 1MB block size, and efficient small-file space utilization with 8KB sub-block sizes.
• Storage APIs: enhanced VAAI and new VASA storage awareness…