(The Folder summary view now lists the four folders — zpool/NFS1a, zpool/NFS1b, zpool/NFS2a, and zpool/NFS2b — each showing 256.00 KB referred and used, 7.60 TB available, and the NFS checkbox enabled.)

a. Navigate to Data Management and select Shares.
b. Locate the NFS Server entry and verify it is listed as online.
c. Select Configure (Basic NFS configuration). (The View Log link displays the service logs.)
d. Set the Client Version to 3.
e. Set the Concurrent NFSD Servers to 2048, if not already set.
f. Leave the remaining values at their defaults and select the Save button.

(The Manage NFS Server Settings page contains the following fields: Service State — the service is currently disabled; check the box to enable it. Server Version — the maximum version of the NFS protocol that will be registered and offered. Client Version — the maximum version of the NFS protocol that will be used by the NFS client and attempted first; the default is 4. Concurrent NFSD Servers — the maximum number of concurrent NFS requests. NFSD queue length — the connection queue length for the NFSD over a connection-oriented transport. Concurrent LOCKD Servers — the maximum number of concurrent LOCKD requests. LOCKD queue length — the connection queue length for the LOCKD over a connection-oriented transport.)
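These same NFS server settings can also be inspected or changed from a shell on the appliance through the standard illumos sharectl utility. The short sketch below is an assumption based on that generic tooling rather than a step from this guide (property names can vary between NexentaStor releases); it is shown only as a scripted alternative to the NMV dialog:

  sharectl get nfs                      # list the current NFS server properties
  sharectl set -p client_versmax=3 nfs  # cap the NFS protocol version used as a client at 3
  sharectl set -p servers=2048 nfs      # allow up to 2048 concurrent NFSD requests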
1 Introduction

1.1 Audience and Purpose

The purpose of this document is to give IT professionals instruction on how to construct a purpose-built, high-IOPS, low-cost Storage Area Network (SAN) using off-the-shelf Intel components, the Nexenta software stack, and VMware ESXi 5.1. This software-based storage solution is specifically targeted at highly dense virtual environments that interleave VM I/O, resulting in a mostly random I/O demand (also known as the "I/O Blender").

This document steps through basic installation and setup of the hardware (HW), the Nexenta software (SW), and the connection of NFS datastores at the VMware host system to NFS shares on a Nexenta server. In addition, this document explains a best-known method for hardware and NFS setup in Nexenta software for optimal performance as a VM datastore using Intel 10Gb Ethernet controllers and Intel Solid-State Drive Data Center S3500 Series (Intel SSD DC S3500 Series) products.

Additional details on Nexenta may be found in the Nexenta installation manual and administration guide located at http://info.nexenta.com/rs/nexenta/images/NexentaStor_InstallationGuide_3.1.x.pdf. Additional details concerning VMware setup may be found in the vSphere 5.1 administrator's guide and related documents located at http://www.vmware.com/support/product-support/vsphere-hypervisor.html. In addition to installation guides and manuals, many helpful tips may be found in both Nexenta's
While still more expensive per drive than traditional spinning disks (HDDs) in most cases, the SSD has no moving parts and no seek time, which makes random I/O a perfect fit for these drives. SSDs cost more per gigabyte (GB); however, since they generate phenomenal volumes of IOPS per drive, the cost per IOPS is dramatically less than for traditional disks. For instance, an 800GB Intel SSD DC S3500 Series drive can produce 75K random read IOPS at an approximate list price of $940, which yields roughly 79 IOPS per dollar, whereas a 15K RPM 600GB HDD can produce 200 random read IOPS at $300, or 0.67 IOPS per dollar.

The bottom line in virtualized and cloud infrastructure is that consolidating VMs produces an uptick in random IOPS, random IOPS devalue caching algorithms, and hard disk drives cannot deliver these random IOPS efficiently. SSDs, on the other hand, continue to decline in cost per GB, excel at providing random I/O, run on approximately one third the power, and have now evolved with enterprise-class high-endurance and consistency features. It seems a logical conclusion that for high-density VM environments, such as the ones found in enterprise IT data centers, an SSD-based SAN is an optimal solution. The solution examined in this blueprint shows an SSD-based solution using Intel components and Intel's enterprise-class Intel SSD DC S3500 Series Multi-
and VMware's community blog forums and support sites.

1.2 Prerequisites

This document assumes the reader has prior knowledge of hardware and systems, basic Linux configuration and installation, and advanced knowledge of the VMware ESXi and vSphere 5.1 interface, specifically storage configuration.

1.3 Solution Summary

The solution described, as configured within this document, is capable of supplying virtual machines (VMs) with 100K random 4K-block mixed read/write I/O operations per second (IOPS) for an approximate cost of $30K, while using only 2U of rack space and up to 650 watts of power.

Industry background and the adoption of 10Gb Ethernet (10GbE) networking have allowed flexibility and efficiency in the data center. Businesses employing 10GbE can replace the many 1Gb twisted-pair cables connecting an individual server with only two 10GbE cables, saving costs on both structured cabling and switch ports without sacrificing performance. Using NFS shares, iSCSI, or Fibre Channel over Ethernet (FCoE) protocols with 10GbE allows for the removal of parallel storage networks and reduces cabling and switching complexity.

This document describes how to construct an NFS storage solution using off-the-shelf Intel Xeon processor-based servers, Intel 10GbE Network Adapters, the Intel SSD DC S3500 Series, and the Nexenta software stack. Building an equivalent IOPS-capable
(Figure 4-1, System Architecture Diagram: the two LSI 9207-8i HBAs — each with six SSDs attached and used for the HBA function only — and the Intel X520-DA2 network controller, connected across the two processors.)

Figure 4-1 shows the PCIe-to-controller relationships between the two 8-port SAS HBAs (controllers), the Intel X520-DA2 network controller, and the local SATA controller for the VMware ESXi hypervisor. This configuration was chosen for the ability of the system, with two controllers, to handle the I/O produced by 12 SSDs. The configuration spreads I/O across both processors in this dual-socket system. Further performance gains are possible by using more SAS controllers and additional 10GbE network cards to spread the I/O load even further across multiple CPUs and I/O paths; however, these expanded configurations were not tested in the scenario presented in this paper.

4.2 Software Setup and Configuration

Software used in the high-density VM datastore SAN and host configuration and testing:
• SAN/NAS: NexentaStor version 3.1.4
• Virtualization/Cloud: VMware ESXi and vCenter Server version 5.1 Update 2
• VMs: Windows Server 2008 R2 with current updates and McAfee VSE 8.8 patch 2
• I/O load generation: Iometer v2006-07-27, available from iometer.org

4.3 Software Architecture and Network Configuration Diagram
(The volume-creation screen lists the selected disks, each reported as 745.21 GB, with a Remove selected option.)

5. Enter a name and description.
   a. Enable deduplication and select OK to the warning.
   b. Leave the other values at their defaults and select the Create Volume button.

(The Volume Properties dialog describes its fields as follows. Name — the volume name must begin with a letter and can only contain alphanumeric characters (a-z, A-Z, 0-9) plus the three special characters underscore (_), hyphen (-), and period (.). The beginning sequence c[0-9] is not allowed, the name "log" is reserved, names that begin with "mirror", "raidz", or "spare" are not allowed because these names are reserved, and the volume name must not contain a percent sign. Description — for example, "Intel SSD zPool"; an optional volume description of up to 255 characters. Deduplication — controls the deduplication option for the volume; if enabled, it will optimize the use of duplicate copies of data; the default is off. Compression — controls the compression algorithm used for this dataset; the default is on. Setting compression to on uses the lzjb compression algorithm, which is optimized for performance while providing decent data compression. Currently,
Note: At the time of this writing, VMware vSphere does not support NFS version 4.

5.6 VMware NFS Configuration

After the NFS export configuration is complete on the Nexenta SW SAN:

1. Open the VMware vSphere client console and add an NFS mount point in the VMware host. For our example we used the private 172.16.x.x addresses and the mount points created in the NexentaStor console in the previous steps.
2. On the Configuration tab of a selected host, click Add Storage.

(The host's Configuration > Storage view lists the existing datastores — local VMFS volumes and any NFS datastores — along with their device, drive type, capacity, and free space.)
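Before stepping through the Add Storage wizard, it can be useful to confirm that the Nexenta exports are actually visible on the network. A minimal check, assuming a host with standard NFS client utilities and using the appliance address and share paths from this example (adjust both to your environment):

  showmount -e 172.16.4.17
  # expected to list the exported folders, for example:
  #   /volumes/zpool/NFS1a
  #   /volumes/zpool/NFS2a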
intel

Delivering Low-Cost, High-IOPS VM Datastores Using Nexenta and the Intel SSD Data Center S3500 Series

Solutions Blueprint
November 2013
Revision 1.0
Document Number: 329591-001

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT. A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS
(The Datastores list also shows the NFS datastores nfs01-tv400x8r5 and nfs02-tv400x8r5, mounted from 172.16.4.20 and 172.16.5.20 respectively, each 976.05 GB.)

3. Click the Network File System button if it is not selected and click Next.

(In the Add Storage wizard, the Select Storage Type page asks whether you want to format a new volume or use a shared folder over the network: Disk/LUN creates a datastore on a Fibre Channel, iSCSI, or local SCSI disk, or mounts an existing VMFS volume; Network File System is the option to choose when a shared folder will be used as the datastore.)

4. Follow the series of instructions for adding the network file system mount address (IP) and path.

(On the Locate Network File System page, enter the server — for example 172.16.5.17, nas.it.com, 192.168.0.1, or FE80:0:0:0:2AA:FF:FE9A:4CA2 — and the shared folder that will be used as the vSphere datastore, for example /volumes/zpool/NFS1a or /vols/vol0/datastore-001. The wizard also offers a Mount NFS read only option and warns that if
(The remaining disks — c0t50015178F35E2421d0 through c0t50015178F35E910Bd0 — show similar I/O statistics, confirming all 12 devices in the pool are active.)

5.5 Configure NFS

Now that the storage has been configured, it is time to share it using NFS. A total of four NFS mount points will be created, resulting in two mount points utilizing each 10GbE network port. In this design the four NFS mount points are named NFS1a, NFS1b, NFS2a, and NFS2b. This static load balancing will not be configured in the Nexenta appliance, but rather in the systems utilizing the storage: when adding NFS storage to a VMware environment, the "a" mount points will utilize one 10GbE network port while the "b" mount points will use the other. This is accomplished by using the different IP addresses on the "a" and "b" network ports.

6. Create Folder
   a. Navigate to the Folder field and select Create.

(The Data Sets screen shows the existing volume — zpool, raidz1, 12 devices — along with Create New Folder and Import options.)
5. Determine the correct VID and PIDs of the installed drives by issuing the following command:

  echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb -k

The VID/PID entry in the sd.conf file uses a concatenation of the VID and PID to identify the drive. Notice the extra spaces following the VID entries; this spacing is very important when crafting the entries for the sd.conf file, as the configuration file is delimited by spacing. The output of the command referenced in step 5 should read as follows:

  inq_vid = "ATA     "
  inq_pid = "INTEL SSDSA2BZ20"
  inq_vid = "ATA     "
  inq_pid = "INTEL SSDSC2BB80"

Each disk entry in the sd-config-list contains five variables: VID+PID, Block Size, Queue Depth, DiskSort, and Cache Flushing.

  ATA     INTEL SSDSC2BB80

The first entry is the concatenation of the VID and PID; the entry for the Intel SSD DC S3500 Series drive is listed above.

  physical-block-size:4096

Specify a physical block size of 4 KiB to align with the Intel SSD DC S3500 Series. In addition to identifying which drives are 4 KiB, we must also tell the OS which optimizations are present and what should or should not be used.

  throttle-max:32

The Intel SSD DC S3700 Series and Intel SSD DC S3500 Series
5.7 Calculating SSD Endurance in RAID

Although this endurance calculation method is similar to a ZFS RAID-Z1 set, ZFS uses stripes more efficiently and will likely result in a longer endurance than standard RAID5. When using RAID5 and SSDs, the endurance of the array is N − 1, where N is the number of disks in the RAID5 array; a RAID-Z1 set will have more endurance than a RAID5 set due to its variable stripe size and efficiency. Table 5-1 below shows an example calculation for a 400GB Intel SSD in a 5-disk RAID5 set. In Table 5-1 we use the formatted capacity of the drives in bytes using base 2 (1024) instead of base 10 (1000); this calculation provides a more accurate representation of what an end user will see in a formatted volume.

Table 5-1. Sample Endurance Calculation, 5-Disk RAID5 Set, DC S3500 Series
  Per-drive usable (formatted) capacity: 780 GB
  Per-drive overwrites per day and per-drive lifetime in years: inputs to the lifetime formula
  Per-drive lifetime: 416 terabytes (usable capacity × overwrites per day × 365 days × years; 1,024 TB = 1 PB)
  Number of disks in RAID set: 5
  Equation: lifetime of the 5-disk RAID5 set = (5 − 1) × per-drive en-
SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Configurations:
a datastore already exists in the datacenter for this NFS share and you intend to configure the same datastore on new hosts, make sure that you enter the same input data (Server and Folder) that you used for the original datastore. Different input data would mean different datastores, even if the underlying NFS storage is the same.

Datastore Name: Nexenta_NFS1a

Once this process is finished, you should end up with something similar to the screenshot below, which shows an NFS datastore in the VMware storage configuration tab. Repeat these steps for all the ESXi hosts which require access to the Nexenta SW SAN.

(The Configuration > Storage view now lists the local VMFS datastores — for example 03b-tv800x8r5 and 03c-tv800x8r5 at 5.09 TB each, plus the 144.00 GB local boot volume — alongside the Nexenta NFS datastores mounted from the 172.16.4.x/172.16.5.x addresses, such as /volumes/zpool/NFS1a, and the existing 976.05 GB nfs01/nfs02 mounts.)
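Because the same mounts have to be added on every ESXi host, it can be quicker to script them than to repeat the Add Storage wizard. The commands below are a minimal sketch using the standard esxcli NFS namespace available in ESXi 5.x; the addresses, share paths, and datastore names follow the example in this guide and should be adjusted to your environment (mapping the "a" shares to one appliance address and the "b" shares to the other preserves the load balancing described earlier):

  esxcli storage nfs add --host=172.16.4.17 --share=/volumes/zpool/NFS1a --volume-name=Nexenta_NFS1a
  esxcli storage nfs add --host=172.16.5.17 --share=/volumes/zpool/NFS1b --volume-name=Nexenta_NFS1b
  esxcli storage nfs add --host=172.16.4.17 --share=/volumes/zpool/NFS2a --volume-name=Nexenta_NFS2a
  esxcli storage nfs add --host=172.16.5.17 --share=/volumes/zpool/NFS2b --volume-name=Nexenta_NFS2b
  esxcli storage nfs list    # verify the four datastores are mounted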
logical unit number (LUN). One byproduct of this consolidation to VMs is a phenomenal ramp in the randomness of disk I/O. For example, say you have a single virtual machine (VM) connected to a SAN with 14x disks in a RAID array as your VM datastore. This configuration behaves exactly like a dedicated server, and the disk I/O profile of the OS/app stack is identical. Now you introduce a second virtual machine and have combined the two OS/app disk I/O profiles as they read and write to the same VM datastore, and your disk I/O becomes 50% random when viewed from the perspective of the SAN. Continue to add VMs to this same datastore and, with 10 or more active VMs, the I/O pattern is almost completely random as the individual VMs' I/O streams interleave.

Random I/O does a few unsightly things to the typical SAN. It renders caching algorithms useless for predicting I/O, and with caching ineffective you revert to the least common denominator of about 200 IOPS per physical disk running at 15K RPM. Should IOPS demand from any particular VM reach 2.8K IOPS, we have exhausted the capability of our theoretical 14x disk array, and every other VM in this datastore degrades because of a single noisy neighbor.

Note: The typical VM datastore, at 2-4TB in size, can easily host 50-100 VMs at a 40GB storage footprint per VM, which makes I/O contention commonplace.

The solution examined in this document is to construct a SAN which is composed entirely of Solid State Drives (SSDs).
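To put rough numbers on that noisy-neighbor scenario — a back-of-the-envelope illustration using only the figures above, not a measurement from this paper: 14 disks × ~200 random IOPS per 15K RPM disk ≈ 2,800 IOPS for the whole array, so a single VM demanding 2.8K IOPS consumes it entirely; spread instead across 50-100 resident VMs, the same array leaves only about 2,800 ÷ 100 ≈ 28 to 2,800 ÷ 50 ≈ 56 random IOPS per VM.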
These entries tell NexentaStor that both the Intel SSD DC S3700 Series and Intel SSD DC S3500 Series have a 4KiB physical block size and a maximum queue depth of 32, and that sorting of seek-distance requests and cache flushing should be disabled. Also pay attention to the comma after the first entry (for the Intel SSD DC S3700 Series) and the semicolon after the last entry: when presenting multiple disk types, a comma-separated list ending with a semicolon is necessary.

Sample sd.conf sd-config-list entry:

  sd-config-list = "<Disk>", "<Drive Options>",
                   "<Disk>", "<Drive Options>",
                   "<Disk>", "<Drive Options>";

7. When finished modifying the file, update the OS with the following command:

  update_drv -vf sd

Pay attention only to the last two lines of the output. The OS has been correctly updated if the following lines are present:

  Forcing update of sd.conf.
  Updated in the kernel.

8. Verify the SSDs have been updated to use a 4KiB block size:

  echo ::sd_state | mdb -k | grep phy_blocksize

Note: Block sizes of 0x200 and 0x1000 represent 512B and 4KiB, respectively.

5.4 Configure Storage and Features

Now that the storage devices have been optimized, it is time to concentrate on pooling them together.

1. Open a web browser and navigate to the NexentaStor appliance.
2. Navigate to the Data Management tab and select Data Sets.
Create button.

(The folder-creation dialog describes its remaining fields as follows. Deduplication — controls the deduplication option for this dataset; if enabled, it will optimize the use of duplicate copies of data. Compression — controls the compression algorithm used for this dataset; currently gzip is equivalent to gzip-6. Number of Copies — controls the number of copies of data stored for this dataset. Case Sensitivity — indicates whether the file-name matching algorithm used by the file system is case sensitive, case insensitive, or both (CIFS and NFS at the same time); the default is mixed.)

The Folder Summary View is displayed, listing the newly created folder: zpool/NFS1a — Refer 256.00 KB, Used 256.00 KB, Avail 7.00 TB, with CIFS, NFS, FTP, RSYNC, WebDAV, and Index checkboxes and a Delete option.

7. Create 3 additional folders.
   a. Repeat steps a through f, specifying unique folder names.
   b. When finished, there will be a total of 4 folders listed.

(The Folder Summary view now lists zpool/NFS1a, zpool/NFS1b, zpool/NFS2a, and zpool/NFS2b, each with 256.00 KB referred and used and 7.60 TB available.)

8. Enable NFS: place a checkmark under NFS for each folder and select OK to the prompt.

9. Configure NFS.
The diagram below shows the relationship between the 48 VMs loaded with Windows Server 2008 R2 and Iometer, the four hosts loaded with ESXi 5.1 U2 used to house those 48 VMs, and the 10GbE lab network across which storage traffic to the NFS-based VM datastores occurs on the NexentaStor server.

Figure 4-2. Network Configuration Diagram
(The diagram shows the virtual machines on the ESXi 5.1 hosts reaching the Nexenta SW SAN across the 10GbE lab network.)

5 Walk-Through Setup and Configuration

5.1 Basic Installation (Nexenta)

The basic installation and configuration of the NexentaStor 3.1.4 software stack is not covered in this document. Refer to the NexentaStor QuickStart Guide and Installation Guide to install and perform the basic setup of the software: http://info.nexenta.com/rs/nexenta/images/doc_3.1_nexentastor_quickstart_3.1.pdf

5.2 Network Configuration

In order for the storage requests to be load balanced to the NexentaStor appliance, it is important to pay special attention to the network configuration. Requests must traverse both network ports to alleviate potential network bottlenecks. Placing each 10GbE port on a different subnet will enable TCP/IP routing decisions to be made; if these ports were placed on the same subnet, there would be no way to enable each 10GbE port to be used efficiently. It is true that LACP port channels
durance of 416 TB each — that is, 4 × 416 TB, or roughly 1.6 petabytes of write activity.

5.8 Configuration and Setup Results

5.8.1 Results

• 4 hosts running against 4 NFS targets, with 48 VMs (12 VMs per NFS datastore)
• Each VM configured with a 100% random 4K workload of 90% read / 10% write
• Sustained 100K fully random IOPS across a 10GbE network at 2.5ms latency
• Power footprint of 650 watts, rack space of 2U, and estimated retail pricing under $30K

5.8.2 Conclusion

The combination of Intel Xeon processors, the Intel X520-DA2 10GbE adapter, and the Intel SSD DC S3500 Series with the Nexenta software-based SAN, using the methods described in this paper, provides a high-IOPS solution capable of hosting hundreds of virtual machines. Given the endurance of the Intel SSD DC S3500 Series, the solution presented here will last through 24-hour operation with a 10% write workload for 3-4 years, using the sample endurance calculation outlined in the previous section. Finally, the small power and data center footprint, plus the low cost per IOPS when compared to a traditional SAN, supplies the business value required to pursue this non-traditional, software-based storage solution.

With the price of SSDs continuing to decrease as storage capacity increases, the ongoing move toward the cloud and VMs, and increasing demand for cost-efficient IT services, the enterprise cloud community must look at alternative methods for supplying the data center with IOPS. Intel SSDs and platforms will play an integral
server chassis, board, controllers, and network.

General assumptions for calculations:

  Drive Type                   IOPS Target   Drives Required   Cost       Power Consumption   $/IOPS
  15K RPM SAS                  100K          ~500              $150,000   7.5 kW              $1.50
  Intel SSD DC S3500 Series    100K          12                $30,000    650 W               $0.30
  Summary                                                      < 20       11x less            < 80

Notes:
1. 15K RPM high-performance drive cost and performance numbers are based on an industry average of 200 random IOPS per 15K RPM drive at 15W of power.
2. Cost numbers are based on suggested retail pricing of the Intel SSD DC S3500 Series 800GB drive as of September 2013.

3 Solutions Implementation

Building a private cloud is a large and complicated task, and one of the main challenges revolves around shared storage and SAN. A decade ago, servers were built with 3-8 disks in a Redundant Array of Independent Disks (RAID) set, which serviced the operating system and application. In addition, high-performance storage needs were addressed with SAN, and the only items placed in high-cost SAN were databases. Fast forward to 2013: the typical server has only one or two local disks with a virtualization layer, or hypervisor, and all the virtual machines sitting in SAN. In essence we have inverted the disk ratio: where 10 years ago we had 3-8 disks per OS/app stack, now we have 3-8 OS/app stacks per disk in a SAN
Level Cell (MLC) NAND, which allows for up to 5 years of continuous writes to the drive at 1x drive overwrite every 3 days.

3.1 Nexenta Product Overview

NexentaStor is a fully featured NAS/SAN software platform with enterprise-class capabilities that address the challenges presented by ever-growing datasets. NexentaStor continues to build on its position as a superior open-source, multi-platform alternative to vertically integrated and proprietary storage offerings. Certified on a wide range of industry-standard x86-based hardware, it provides users the flexibility to select their storage operating system independently from the hardware it runs on. This enables users to select the best components for their need.

NexentaStor helps organizations enjoy significant benefits, including higher workload throughput and improved management capabilities. Using NexentaStor as a storage foundation enables organizations to implement high-performance, cost-effective data storage solutions with features that include inline compression, unlimited snapshots, cloning, and support for highly available storage. This strategy reduces IT complexity and delivers compelling storage economics.

NexentaStor 4.0 is a full-featured storage operating system built on community-supported illumos (currently OpenSolaris in 3.1.4), enabling organizations to deploy an incredibly fast file system without being locked into a single proprietary storage hardware supplier.
1.5 Reference Documents

  Nexenta Quick Start Installation Guide — http://info.nexenta.com/rs/nexenta/images/doc_3.1_nexentastor_quickstart_3.1.pdf
  Nexenta Installation Guide — http://info.nexenta.com/rs/nexenta/images/NexentaStor_InstallationGuide_3.1.x.pdf
  vSphere 5.1 Administration Guide / Support — http://www.vmware.com/support/product-support/vsphere-hypervisor.html
  Intel Ethernet Converged Network Adapter Home Page — http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters.html
  Intel SSD DC S3500 Series Home Page — http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3500-series.html
  Intel Xeon E5 Family Home Page — http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-5000-sequence.html

2 Solution Overview

Intel Solid-State Drives balance performance, endurance, and lower solution cost to meet virtualization and cloud demands.

2.1 Existing Challenges

Pressure to move toward cloud — Cloud computing offers a multitude of benefits. Balancing the enterprise private cloud with public offerings, IT organizations must increase internal density to compete with the public cloud's economy of scale.

Many VMs per datastore — As internal density increases, more and more VMs reside in single, large storage area network (SAN) based logical drives (LUNs). This interleaves I/O and produces, from the SAN's perspective, a random workload.

Doing more for less — All IT organizations are driven by budget
(The appliance Status > General view shows overall status information: data volumes, network services, storage services, runners, CPU and I/O utilization, and replication status and progress.)

3. Navigate to the Volumes field and select Create.

(The Data Sets view shows no available volumes yet, with options to Create a new volume or Import an existing volume.)

4. Create a ZPOOL:
   a. Select 6 disks under Available Disks.
   b. Select RAID-Z1 (single parity) from the Redundancy Type.
   c. Click the Add to Pool >> button.
   d. Select the remaining 6 disks under Available Disks.
   e. Select RAID-Z1 (single parity) from the Redundancy Type.
   f. Click the Add to Pool >> button.

(The Volume Configuration screen lists the Available Disks — each reported as 745.21 GB — on the left and the Final Volume Configuration on the right.)
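For readers more comfortable on the command line, the layout built by steps a-f — two six-disk RAID-Z1 groups in one pool — is the equivalent of a single zpool create with two raidz1 vdevs. The sketch below is for illustration only and is an assumption rather than a step from this guide: the guide itself uses the NMV, the pool name zpool matches the example, and the c0t...d0 device names are placeholders for the twelve SSD identifiers reported by the appliance.

  zpool create zpool \
    raidz1 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
    raidz1 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0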
1. NexentaStor appliance:
   • Intel dual-processor server: 2x Intel Xeon E5-2690, Intel S2600CP Xeon motherboard
   • 128 GB of RAM in 16x 8GB
   • 24x 2.5-inch-drive 2U server
   • 2x LSI 9207-8i HBA
   • 12x 2.5-inch 800GB Intel SSD DC S3500 Series SATA SSD
   • 1x 2.5-inch 400GB Intel SSD DC S3700 Series SATA SSD
   • 1x Intel X520-DA2 network adapter

2. ESXi hosts:
   • Intel dual-processor server: 2x Intel Xeon E5-2690, Supermicro Super X9DRFR Xeon motherboard
   • 128 GB of RAM in 16x 8GB
   • 1x 2.5-inch boot/swap drive
   • 1x Intel X520-DA2 network adapter

Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Intel does not control or audit the design or implementation of third-party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase. For more information go to http://www.intel.com/performance.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725 or by going to http://www.intel.com/design/literature.htm.

Intel and the Intel logo are trademarks or registered trademarks
drives are SATA drives and have a maximum queue depth (NCQ) of 32.

  disksort:false

By default the OS optimizes for HDD seek distance. Intel SSDs do not have any moving parts, so optimizing for seek distances hurts SSD performance.

  cache-nonvolatile:true

The Intel SSD DC S3700 Series and Intel SSD DC S3500 Series drives are protected with PLI (Power Loss Imminent) circuitry that commits all unwritten data in the transfer buffer to storage in the event of a power loss. Cache flushing with PLI-protected SSDs can lead to performance hits and is unnecessary.

6. Edit /kernel/drv/sd.conf with vim:

  vim /kernel/drv/sd.conf

   a. Navigate to the end of the file and add the following lines to the configuration. Adding a comment identifying the drives can make the file easier to read for future modifications.

(In the vim session, the added lines are a comment naming the Intel DC S3700 SSD and Intel DC S3500 SSD, followed by the sd-config-list with one "ATA <PID>" entry and its physical-block-size:4096, throttle-max:32, disksort:false, cache-nonvolatile:true options for each drive type.)
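Typed out, the additions to /kernel/drv/sd.conf look like the sketch below. Treat it as a reconstruction of the screenshot above rather than a copy-paste recipe: the VID field must be padded with spaces to exactly eight characters ("ATA" plus five spaces), and the two PID strings shown here are the ones reported by the inquiry output earlier in this section — substitute whatever inq_vid/inq_pid values your own drives report.

  # Intel DC S3700 SSD / Intel DC S3500 SSD
  sd-config-list =
      "ATA     INTEL SSDSA2BZ20", "physical-block-size:4096, throttle-max:32, disksort:false, cache-nonvolatile:true",
      "ATA     INTEL SSDSC2BB80", "physical-block-size:4096, throttle-max:32, disksort:false, cache-nonvolatile:true";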
Until recently, the majority of disk drive vendors used a block (or sector) size of 512 bytes, and subsequently most filing systems are optimized for 512B blocks. Many SSD vendors today are optimizing around sector sizes of 4096 bytes, or 4KiB. These drives still support 512B blocks, but do so differently: 512B sectors are now emulated, or listed as the logical block size, while 4KiB sectors are listed as the physical sector size. This is because each 4KiB sector actually contains eight smaller 512B sectors on the disk. The operating system can still access these 512B sectors, but will do so with a performance penalty for write transactions. Running disk benchmark utilities such as Iometer will reveal these performance barriers with most 4KiB drives at sub-4KiB transaction sizes.

ZFS can be tuned to operate at a 4KiB sector size and increase performance. To do so, NexentaStor must be told which drives are 4KiB-based, and the drives in this paper are indeed based on the larger sector size. Modifying the system file sd.conf and adding lines to identify the Intel drives as 4KiB will help to accomplish this.

4. Using PuTTY or a secure shell tool, SSH to the NexentaStor appliance. Log in as admin, then su to root.

Note: Logging in as root will bring you to the Nexenta Management Console (NMC). The changes being made need to be done through a command shell, not the NMC.
(The volume's Live Volumes / Disks / Statistics view lists the pool's 12 disks — c0t50015178F35E20BDd0, c0t50015178F35E219Ad0, c0t50015178F35E2237d0, c0t50015178F35E2331d0, c0t50015178F35E236Dd0, c0t50015178F35E23D0d0, and so on — with their read/write statistics.)

   b. Select the zpool Volume if it hasn't already been pre-populated.
   c. Specify the Folder Name and Description. (Each folder pathname component, delimited by a slash, must begin with an alphanumeric character; for this first folder the name and description are NFS1a.)
   d. Select the default 128K for the Record Size. (The record size specifies a suggested block size for files in the folder; the default is 128K.)

Note: The record size denotes the largest record that can be used to accommodate files placed on the ZFS-based storage. Smaller files will use record sizes closer to their size, and larger files will do the same. For example, 3KB files will require a 4K record, whereas a 1MB file will require 8x records at 128K. However, all of these files will still reside in 4KiB ZFS sectors.

   e. Enable deduplication and select OK to the warning.
   f. Leave the remaining values at their defaults and select the
of Intel Corporation or its subsidiaries in the United States and other countries.

Other names and brands may be claimed as the property of others.

Copyright © 2013 Intel Corporation. All rights reserved.

Contents

1 Introduction .................................................................... 5
  1.1 Audience and Purpose ........................................................ 5
  1.2 Prerequisites ............................................................... 5
  1.3 Solution Summary ............................................................ 5
  1.4 Terminology ................................................................. 6
  1.5 Reference Documents ......................................................... 7
2 Solution Overview ............................................................... 8
  2.1 Existing Challenges ......................................................... 8
  2.2 Intel Solution .............................................................. 8
  2.3 Solution Impact ............................................................. 8
3 Solutions Implementation ....................................................... 10
  3.1 Nexenta Product Overview ................................................... 11
  3.2 NexentaStor Additional Features ............................................ 12
4 Test Configuration and Plan .................................................... 13
  4.1 Hardware Setup and Configuration ........................................... 13
  4.2 Software Setup and Configuration ........................................... 14
  4.3 Software Architecture and Network
      Configuration Diagram ...................................................... 15
5 Walk-Through Setup and Configuration ........................................... 16
  5.1 Basic Installation (Nexenta) ............................................... 16
  5.2 Network Configuration ...................................................... 16
  5.3 Optimize ZFS for 4KiB Block Devices ........................................ 17
  5.4 Configure Storage and Features ............................................. 21
  5.5 Configure NFS .............................................................. 23
  5.6 VMware NFS Configuration ................................................... 26
  5.7 Calculating SSD Endurance in RAID .......................................... 28
  5.8 Configuration and Setup Results ............................................ 29
      5.8.1 Results .............................................................. 29
      5.8.2 Conclusion ........................................................... 29

Figures
  Figure 4-1 System Architecture Diagram ......................................... 14
  Figure 4-2 Network Configuration Diagram ....................................... 15

Tables
  Table 5-1 Sample Endurance Calculation, 5-Disk RAID5 Set, DC S3500 Series ..... 29

Revision History

  Document Number   Revision Number   Description       Revision Date
  329591-001        1.0               Initial release   November 2013
constraints. Any solution that requires a smaller data center footprint, uses less power, and leverages hardware resources more efficiently helps accommodate these constraints.

2.2 Intel Solution

Intel SSD DC S3500 Series — Deploying a software-based NAS solution to handle many VMs using the Intel SSD DC S3500 Series can provide a low-power, low-cost, high-IOPS solution relative to existing alternatives.

2.3 Solution Impact

100K random I/O operations per second (IOPS) — The Intel SSD DC S3500 Series / Nexenta-based software SAN delivers 100K random 4KB IOPS at a 90/10 read/write ratio — the equivalent of roughly 500 15K RPM high-performance hard disk drives (HDDs) — at a much smaller power budget of 650 watts. This configuration, as detailed in sections 4.1 and 4.2, is capable of driving hundreds of concurrent VMs.

Lower costs — When compared with the cost of powering the hundreds of 15K RPM high-performance HDDs needed to produce 100K IOPS, this Intel SSD-based solution consumes 650 watts versus roughly 7.5 kilowatts: 11x less power. When compared with the average SAN at $1 per random IOPS for 100K sustained workloads, this $30K solution comes in at only $0.30 per random IOPS, a 70% cost savings. In addition, the 2U footprint is smaller than most comparably performing SANs, allows for an additional 12 disks of growth, requires less cooling budget, and reduces overall complexity for simplified management.

Note: Cost includes a small percentage for
gzip is equivalent to gzip-6.

(The remaining Volume Properties fields: Autoexpand — controls automatic pool expansion when the underlying LUN is grown. Sync (standard) — controls synchronous requests: "standard" ensures all synchronous requests are written to stable storage, "always" causes every file system transaction to be written and flushed to stable storage before the system call returns, and "disabled" turns synchronous requests off; the default is standard. Force creation — forces the use of virtual or physical disks (LUNs) even if they appear to be in use.)

   c. Provide administrative credentials and select Login.

The zpool has been created and we are returned to the Data Sets view under Data Management.

(The All Volumes summary shows the volume zpool — raidz1, 12 devices — with a size of 8.69 TB, 1.41 MB allocated, 8.69 TB free, 0% capacity used, a 1.00x dedup ratio, and an ONLINE state. The Live Volumes / Disks / Statistics tab lists each of the 12 disks with its read/write I/O statistics.)
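From a root shell on the appliance, the same result can be confirmed with zpool status. The excerpt below is a sketch of the expected shape of the output — one healthy pool containing two six-disk raidz1 groups — not a capture from the test system; the device names will be the c0t...d0 identifiers shown in the disk list above.

  # zpool status zpool
    pool: zpool
   state: ONLINE
  config:
          NAME          STATE     READ WRITE CKSUM
          zpool         ONLINE       0     0     0
            raidz1-0    ONLINE       0     0     0
              c0t...d0  ONLINE       0     0     0   (6 disks)
            raidz1-1    ONLINE       0     0     0
              c0t...d0  ONLINE       0     0     0   (6 disks)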
part in changing the face of storage in the data center.
Based on the ZFS file system, and through partnerships with the open-source community, NexentaStor benefits from an exchange of product development and support activities that collectively allow for greater resources than any single storage vendor can provide. At its core is open-source code, built around Nexenta's proprietary IP, seamlessly integrated and packaged for commercial deployment and use.

NexentaStor can create shared pools of storage from any combination of storage hardware, including solid-state drives. It also delivers end-to-end data integrity, integrated search, and inline virus scanning. The power of NexentaStor is extended by the use of its licensed features, which include namespace clusters, high-availability clusters, OST for Symantec NetBackup, and Nexenta Target FC. These also provide integration enhancements for management and administration with other third-party products, such as virtualization suites. Incidentally, these optional features can be installed without restarting NexentaStor or upgrading the entire system.

3.2 NexentaStor Additional Features
NexentaStor:
   • Intel dual-processor server: 2x Intel Xeon E5-2690, Intel S2600CP Xeon motherboard
   • 128 GB of RAM in 16x 8GB
   • 24x 2.5-inch-drive 2U server
   • 2x LSI 9207-8i HBA
   • 12x 2.5-inch 800GB SATA Intel SSD DC S3500 Series
   • 1x 2.5-inch 400GB SATA Intel SSD DC S3700 Series
   • 1x Intel X520-DA2 network adapter

Hardware configuration used for NexentaStor:

The configuration consists of 12 Intel SSD DC S3500 Series drives connected to 2 HBAs, 6 drives each. This is commonly referred to as a channel-per-drive configuration, as there is no expander being used. Each drive has a direct SAS connection to the HBA, balancing the I/O to and from the PCIe bus for maximum performance. Four more drives could be added, two to each HBA, to increase capacity and performance if necessary, but this is not covered in this guide. Additionally, there are 12 more drive bays in the solution unused, so expanding with an additional SAS controller and 8 more drives attached to that controller is also a possibility.

The Solid-State Drives are configured in two software RAID-Z1 volumes, one per HBA. While RAID-Z1 is comparable to a traditional hardware RAID 5, RAID-Z1 uses copy-on-write rather than the slower read-modify-write (with its write hole) to update blocks.
solution from a simple array of 15K RPM high-performance 2.5-inch disks would require approximately 500 disks, 40U of rack space, and 7.5 kilowatts of power, and would cost roughly $125K for the disks alone. Similarly, a SAN solution capable of producing this type of sustained random I/O could cost over $1 per IOPS from a typical storage vendor. In both cases, as long as the target storage footprint does not exceed the proposed 8TB solution presented here, this solution offers superior performance to the competing alternatives in IOPS per dollar and in overall cost and upkeep.

Note: 15K RPM drive cost and performance numbers are based on an industry average of 200 random IOPS per 15K RPM drive at 15W of power.

General assumptions for calculations:

  Drive Type                           Cost    Power Consumption
  15K RPM SAS (600GB)                  $300    15 W
  Intel SSD DC S3500 Series (800GB)    $940    —

Note 1: Cost numbers are based on suggested retail pricing of the Intel SSD DC S3500 Series 800GB drive as of September 2013.

1.4 Terminology

  Term            Definition
  Copy-on-write   A computer programming optimization strategy (CoW).
  Hypervisor      A virtual machine monitor that manages virtual machines.
  JBOD            Non-RAID storage architecture where each drive is accessed directly as a standalone device. The acronym stands for Just a Bunch Of Drives.
  VM              Virtual Machine.
could be used on the network, but that would take the load-balancing decisions away from the systems and place them at the switching layer.

1. Open a web browser and navigate to the NexentaStor appliance: http://<IP or Hostname>:2000/status/general
2. Navigate to the Settings tab and select Network.

(The Settings menu includes Appliance, Network, Storage, Misc. Services, Disks, and Users; the Status view shows general information, data volumes, network services, storage services, and replication status for the appliance, here named FM21VSAN07.)

3. Verify that you have configured the two 10GbE network adapters on different subnets.

(The Network Settings summary lists the network interfaces: igb0, configured as 10.80.143.17/255.255.255.0 with an MTU of 1500 and marked primary; igb1 and igb2, unconfigured; ixgbe0, configured as 172.16.5.17/255.255.255.0 with an MTU of 1500; and ixgbe1, configured as 172.16.4.17/255.255.255.0 with an MTU of 1500.)

Note: 1GbE and 10GbE adapters are listed as igb and ixgbe, respectively.

5.3 Optimize ZFS for 4KiB Block Devices
corruption.
• Unlimited Storage Capacity — Designed for large-scale growth, with unlimited snapshots and copy-on-write clones.
• Largest File Size Commercially Available — While most legacy solutions limit the size of the file system, there are no such limitations with ZFS.
• Unified Appliance — Support for NAS (NFS, CIFS, WebDAV, FTP) and SAN (iSCSI and FC) means flexibility.
• Data De-duplication and Native Compression — Inline data de-dupe and compression reduce the use and expense of primary storage.
• Hybrid Storage Pools — Exploit multiple layers of cache to accelerate read and write performance.
• Heterogeneous Block and File Replication — Asynchronous replication for easy disaster recovery.
• Block-level Mirroring — Set up remote backups and disaster recovery at offsite locations.
• Simplified Disk Management — Manage disks and JBODs using either the command line or the web-based interface provided in the NexentaStor Management Viewer (NMV).
• Total Product Safety — Proactive, automated alert mechanisms send alerts when configurations deviate from the IT-defined, user-based policy.

4 Test Configuration and Plan

4.1 Hardware Setup and Configuration

It is important that all hardware utilized is listed on the Nexenta HSL (Hardware Supported List): http://www.nexenta.com/corp/support/support-overview/hardware-supported-list

Hardware setup used for