Out-of-band management is not considered critical to the user workload and does not have redundancy.
8. Power, Cooling, and Weight Considerations: The Active System 800v solution is configured with Power Distribution Units (PDUs) to meet the power requirements of the components as well as regional constraints. Power consumed, cooling required, and information regarding rack weight are provided to enable customers to plan for the solution.
9. Flexible configurations: Active System 800v is pre-configured to suit most customer needs for a virtualized infrastructure. The solution also supports additional options, such as configuring racks, server processors, server memory, and storage, based on customer needs.
5 Reference Architecture
This solution consists of a PowerEdge M1000e chassis populated with PowerEdge M620 blade servers running VMware ESXi. Figure 2 provides the high-level reference architecture for the solution.
Figure 2: Active System 800v Network Topology (Logical View). The figure shows the core network, the two Force10 S4810 switches, 16, 24, or 32 PowerEdge M620 blade servers in one or two PowerEdge M1000e chassis, the two PowerEdge R620 management servers, the converged LAN and iSCSI SAN, and the VLT peer LAG. PowerEdge M1000e CMC ports, PowerEdge R620 iDRAC ports, and EqualLogic management ports connect to the Force10 S55 out-of-band management switch.
2. Active System Manager provides immediate alerting in case of a hardware fault and enables rapid and easy migration of the workload to other infrastructure resources Multiple warnings and errors are aggregated into a single console e Guided user workflows and multi level views Active System Manager presents a wizard driven graphical user interface with feature guided step by step work flows It provides a graphical logical network topology view for better decision making through improved visibility For more information on Dell Active System Manager see Dell Active System Manager 3 3 Dell PowerEdge Blade Servers Blade Modular Enclosure The Dell PowerEdge M1000e is a high density energy efficient blade chassis that supports up to sixteen half height blade servers or eight full height blade servers and six I O modules A high speed passive mid plane connects the server modules to the I O modules management and power in the rear of the chassis The enclosure includes a flip out LCD screen for Page 5 local configuration six hot pluggable redundant power supplies and nine hot pluggable N 1 redundant fan modules Blade Servers The PowerEdge M620 blade server is the Dell 12 generation PowerEdge half height blade server offering e New high efficiency Intel Xeon E5 2600 family processors for more advanced processing performance memory and I O bandwidth e Greater memory density than any previous PowerEdge server E
a larger number of disks, the potential performance of iSCSI volumes within the pool is increased with each member added.
8.2 RAID Array Design
The storage array RAID configuration is highly dependent on the workload in your virtual environment. The EqualLogic PS Series storage arrays support four RAID types: RAID 5, RAID 6, RAID 10, and RAID 50. The RAID configuration will depend on workloads and customer requirements. In general, RAID 10 provides the best performance, at the expense of storage capacity, especially in random I/O situations. RAID 50 generally provides more usable storage, but has less performance than RAID 10. RAID 6 provides better data protection than RAID 50. For more information on configuring RAID in EqualLogic, refer to the white paper How to Select the Correct RAID for an EqualLogic SAN.
8.3 Volume Size Considerations
Volumes are created in the storage pools. Volume sizes depend on the customer environment and the type of workloads. Volumes must be sized to accommodate not only the VM virtual hard drive, but also the size of the virtual memory of the VM and additional capacity for any snapshots of the VM. It is important to include space for the guest operating system memory cache, snapshots, and VMware configuration files when sizing these volumes. Additionally, you can configure thin-provisioned volumes to grow on demand only when additional storage is needed for those volumes. Thin provisioning can increase the efficiency of storage utilization.
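To make the pool and volume-sizing guidance above concrete, the following Python sketch models the group/pool/member hierarchy and estimates a volume size from the VM virtual disk, the VM memory (for the vswp file), and a snapshot reserve. It is an illustrative calculation only, not EqualLogic or VMware tooling; the 20% snapshot reserve and the example capacities are assumptions chosen for the example, not Dell guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class Member:                      # one EqualLogic PS6110 array
    name: str
    raw_capacity_gb: float

@dataclass
class Pool:                        # volumes are created at the pool level
    name: str
    members: list = field(default_factory=list)

    def capacity_gb(self) -> float:
        # data is distributed across all members of the pool
        return sum(m.raw_capacity_gb for m in self.members)

@dataclass
class Group:                       # each array (member) belongs to exactly one group
    name: str
    pools: list = field(default_factory=list)

def volume_size_gb(vm_disk_gb: float, vm_memory_gb: float,
                   snapshot_reserve: float = 0.20) -> float:
    """Estimate capacity for one VM: virtual disk + vswp (equal to VM memory)
    plus a snapshot reserve (the 20% default is an assumption, not a Dell figure)."""
    base = vm_disk_gb + vm_memory_gb
    return base * (1.0 + snapshot_reserve)

if __name__ == "__main__":
    pool = Pool("pool1", [Member("PS6110X-1", 10800), Member("PS6110X-2", 10800)])
    group = Group("AS800v-group", [pool])
    # 30 VMs with 100 GB disks and 8 GB RAM each (example figures only)
    needed = 30 * volume_size_gb(vm_disk_gb=100, vm_memory_gb=8)
    print(f"Pool capacity: {pool.capacity_gb():.0f} GB, estimated need: {needed:.0f} GB")
```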
4. e Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage e Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 5 1 5 0 and 4 1 and PS Series SANs e How to Select the Correct RAID for an EqualLogic SAN e Using Tiered Storage in a PS Series SAN e Monitoring your PS Series SAN with SAN HQ Dell Management reference e Dell Management Plug In for VMware vCenter references Solution Brief Page 30
5. e Remote Services are standards based interfaces that enable consoles to integrate for example bare metal provisioning and one to many OS deployments for servers located remotely Dell s Lifecycle Controller takes advantage of the capabilities of both USC and Remote Services to deliver significant advancement and simplification of server deployment e Lifecycle Controller Serviceability aims at simplifying server re provisioning and or replacing failed parts and thus reduces maintenance downtime For more information on Dell Lifecycle Controllers and blade servers see http content dell com us en enterprise dcsm embedded management and Dell com blades 3 4 Dell PowerEdge M I O Aggregator The Dell PowerEdge M I O Aggregator IOA is a flexible 1 10GbE aggregation device that is automated and pre configured for easy deployment into converged iSCSI and FCoE networks The key feature of the PowerEdge M I O Aggregator is that all VLANs are allowed as a default setting This allows the top of rack ToR managed switch to perform all VLAN management related tasks The external ports of the PowerEdge M I O Aggregator are automatically all part of a single link aggregation group LAG and thus there is no need for Spanning tree The PowerEdge M I O Aggregator can use Data Center Bridging DCB and Data Center Bridging Exchange DCBX to support converged network architecture The PowerEdge M I O Aggregator provides connectivity to the CNA Network a
6. Dell PowerEdge M I O Aggregator see http www dell com us business p poweredge m io aggregator pd 3 5 OpenManage Essentials The Dell OpenManage Essentials OME Console provides a single easy to use one to many interface through which to manage resources in multivendor operating system and hypervisor environments It automates basic repetitive hardware management tasks like discovery inventory and monitoring for Dell servers storage and network systems OME employs the embedded management of Page 7 PowerEdge servers Integrated Dell Remote Access Controller 7 iDRAC7 with Lifecycle Controller to enable agent free remote management and monitoring of server hardware components like storage networking processors and memory OpenManage Essentials helps you maximize IT performance and uptime with capabilities like e Automated discovery inventory and monitoring of Dell PowerEdge servers Dell EqualLogic and Dell PowerVault storage and Dell PowerConnect switches e Server health monitoring as well as BIOS firmware and driver updates for Dell PowerEdge servers blade systems and internal storage e Control of PowerEdge servers within Microsoft Windows Linux VMware and Hyper V environments For more information on OpenManage Essentials see the Data Center Systems Management page 3 6 Dell Force10 4810 Switches The Force10 S Series 4810 is an ultra low latency 10 40 GbE Top o
7. and store validated configuration information including host compliance networking storage and security settings For more information on VMware vSphere see www vmware com products vsphere 3 2 Dell Active System Manager Dell Active System Manager is the Active Infrastructure management software that is part of the Active System 800v Active System Manager addresses key factors that impact service levels namely infrastructure configuration errors incorrect problem troubleshooting and slow recovery from failures Active System Manager dramatically improves the accuracy of infrastructure configuration by reducing manual touch points The key capabilities of Dell Active System Manager are e Template based provisioning Workload specific infrastructure requirements are encapsulated in the form of a template which can be repeatedly applied on demand as needed This brings efficiency accuracy and consistency in the infrastructure configuration process e Automated configuration Active System Manager enables simplified discovery inventory and configuration of modular infrastructure This results in better visibility and resource allocation through efficient pooling of available resources e Infrastructure lifecycle management Active System Manager provides the capability to manage the entire lifecycle of infrastructure from discovery and on boarding through provisioning on going management and decommissioning e Workload failover
blade servers and the Broadcom 57810 Dual-Port 10Gb Network Adapters in PowerEdge R620 rack servers is partitioned into four ports using NPAR, to obtain a total of eight I/O ports on each server. As detailed in the subsequent sections, one partition on each physical I/O port is assigned to management traffic, vMotion traffic, VM traffic, and iSCSI traffic. The Broadcom NDC and the Broadcom Network Adapter allow setting a maximum bandwidth limitation on each partition. Setting the maximum bandwidth at 100% prevents the artificial capping of any individual traffic type during periods of non-contention. For customers with specific requirements, NPAR maximum bandwidth settings may be modified to limit the maximum bandwidth available to a specific traffic type, regardless of contention. The Broadcom NDC and the Broadcom Network Adapter also allow setting relative bandwidth assignments for each partition. When NPAR is used in conjunction with Data Center Bridging (DCB) and Data Center Bridging Exchange (DCBX), the relative bandwidth settings of the partitions are not enforced. Due to this fact, the relative bandwidth capability of the Broadcom NDCs and the Broadcom Network Adapters is not utilized in Active System 800v.
iSCSI hardware offload
In Active System 800v, iSCSI hardware offload functionality is used in the Broadcom 57810-K Dual-Port 10GbE KR Blade NDCs in the PowerEdge M620 blade servers and also in the Broadcom 57810 Dual-Port 10Gb Network Adapters
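The NPAR layout described in this section, four partitions per 10GbE port with one traffic type per partition and the maximum bandwidth left at 100%, can be sketched as follows. This is an illustrative data model, not Broadcom or Dell configuration tooling; the partition-to-traffic mapping order is an assumption made for the example.

```python
from dataclasses import dataclass

TRAFFIC_TYPES = ("management", "vMotion", "VM", "iSCSI")

@dataclass
class Partition:
    traffic: str
    max_bandwidth_pct: int = 100   # 100% avoids artificial capping when there is no contention
    iscsi_offload: bool = False    # offload is enabled only on the iSCSI partition

def partition_port(port_name: str) -> list:
    """Split one physical 10GbE port into four NPAR partitions, one per traffic type."""
    return [Partition(traffic=t, iscsi_offload=(t == "iSCSI")) for t in TRAFFIC_TYPES]

# Two physical ports per NDC/adapter -> eight I/O ports presented to the server.
ports = {name: partition_port(name) for name in ("port0", "port1")}
print(sum(len(p) for p in ports.values()), "partitions presented to the host")  # 8
for name, parts in ports.items():
    for p in parts:
        offload = "iSCSI offload" if p.iscsi_offload else ""
        print(name, p.traffic, f"max={p.max_bandwidth_pct}%", offload)
```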
9. can be scaled seamlessly and independent of the compute and network architectures Additional EqualLogic PS6110 arrays of same or different Page 26 configuration can be added to the existing PS 6110 arrays New volumes can be created or existing volumes can be expanded to utilize the capacity in the added enclosures Active System 800v solution can scale up to maximum of 8 arrays To scale beyond this additional racks can be added which may require additional switches and networking 11 Delivery Model This Reference Architecture can be purchased as a complete solution the Dell Active System 800v This solution is available to be racked cabled and delivered to the customer site to speed deployment Dell Services will deploy and configure the solution tailored to the business needs of the customer and based on the architecture developed and validated by Dell Engineering For more details or questions about the delivery model please consult with your Dell Sales representative Figure 8 below shows the Active System 800v solution with a single chassis Figure 9 shows Active System 800v with two chassis and maximum storage enclosures Note that all EqualLogic arrays shown in the figures are PS6110X If a different PS6110 array type is ordered the actual rack configuration may be different from the one shown below Also note that the switches shown in figures are shown mounted forward for representation In actual use ports face the back of the r
In the case of Active System 800v, DCB settings are used for two traffic classes: (i) a traffic class for iSCSI traffic, and (ii) a traffic class for all non-iSCSI traffic, which in the case of Active System 800v comprises the different LAN traffic types. DCB ETS settings are configured to assign bandwidth limits to the two traffic classes. These bandwidth limitations are effective during periods of contention between the two traffic classes. The iSCSI traffic class is also configured with Priority Flow Control (PFC), which guarantees lossless iSCSI traffic. The Broadcom Network Adapters and the Broadcom NDCs support DCB and DCBX. This capability, along with iSCSI hardware offload, allows the Active System 800v solution to include an end-to-end converged network design without requiring support from the VMware vSphere hypervisor for DCB. Figure 5 below provides a conceptual view of converged traffic with Data Center Bridging in Active System 800v.
Figure 5: Conceptual View of Converged Traffic Using DCB. The figure shows converged traffic through a DCB-enabled switch port: the iSCSI VLAN, the VM traffic VLAN, and the OOB management VLAN share the port, with bandwidth restriction for the iSCSI and LAN traffic groups during contention applied using Enhanced Transmission Selection (ETS), and a lossless medium for iSCSI provided using Priority Flow Control (PFC).
Virtual Link Trunking (VLT) for S4810s
11. for discovery inventory and hardware level monitoring of blade tt Rack servers blade chassis PowerEdge M I O Aggregator modules EqualLogic storage and Force10 network switches Each of these components are configured to send SNMP traps to the centralized OME console to provide a single pane of glass monitoring interface for major hardware components OME provides a comprehensive inventory of solution component thought WS MAN and SNMP inventory calls For instance reporting is available to provide blade and rack server firmware versions or solution warranty status OME can be used as the single point of monitoring for all hardware components within an enterprise For more information on OpenManage Essentials see the Data Center Systems Management page 9 3 Dell Repository Manager DRM Within the Active System 800v solution Dell Repository Manager DRM is installed on the same Windows 2008 R2 VM as Dell OpenManage Essentials DRM is an application that allows IT Admins to more easily manage system updates DRM provides a searchable interface used to create custom collections known as bundles and repositories of Dell Update Packages DUPs These bundles and repositories allow for the deployment of multiple firmware BIOS driver and software updates at once Additionally Dell Repository Manager makes it easier to locate specific updates for a particular platform which saves you time For example in Repository Manager you can create a bun
12. iSCSI SAN connections for performance and reliability Multi Path 17O MPIO provides multiple paths from servers to storage delivering fault tolerance high availability and improved performance Active System 800v uses EqualLogic Multipath Extension Module MEM for VMware vSphere to enable MPIO for the iSCSI storage EqualLogic MEM offers e Ease of installation and iSCSI configuration in ESXi servers e Increased bandwidth e Reduced network latency e Automatic load balancing across multiple active paths e Automatic connection management e Automatic failure detection and failover e Multiple connections to a single iSCSI target Once installed the EqualLogic MEM will automatically create iSCSI sessions to each member that a volume spans As the storage environment changes the MEM will respond by automatically adding or removing iSCSI sessions as needed Page 21 As storage I O requests are generated on the ESXi hosts the MEM plug in will intelligently route these requests to the array member best suited to handle the request This results in efficient load balancing of the iSCSI storage traffic reduced network latency and increased bandwidth For more information on EqualLogic MEM refer to white paper Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 5 1 5 0 and 4 1 and PS Series SANs 9 Management Infrastructure Within the Active System 800v solution two Dell PowerEdge R620 serve
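The MEM behavior described above, multiple iSCSI sessions per volume with I/O routed to the member best suited to serve it, can be sketched as follows. This is a conceptual model only, not the EqualLogic MEM plug-in or its actual algorithm; the "least outstanding I/O" choice is an illustrative stand-in for the real routing logic.

```python
from dataclasses import dataclass

@dataclass
class ArrayMember:
    name: str
    outstanding_io: int = 0      # simplistic load metric used only for this sketch

@dataclass
class Volume:
    name: str
    members: list                # members the volume spans; MEM keeps a session to each

    def route(self, request: str) -> ArrayMember:
        """Send the request over the session to the least-loaded member (illustrative only)."""
        target = min(self.members, key=lambda m: m.outstanding_io)
        target.outstanding_io += 1
        return target

members = [ArrayMember("PS6110-1"), ArrayMember("PS6110-2")]
vol = Volume("datastore01", members)
for i in range(4):
    print(f"I/O {i} -> {vol.route(f'req-{i}').name}")
```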
management software, Dell Active System Manager. The following management components are included in the Active System 800v solution:
• Dell Active System Manager
• VMware vCenter Server
• Dell Management Plug-in for VMware vCenter
• Dell OpenManage Essentials (OME)
• Dell EqualLogic Virtual Storage Manager (VSM) for VMware
• Dell EqualLogic SAN HeadQuarters (HQ)
• Dell Repository Manager
• VMware vCloud Connector
These components are installed as virtual machines in the management infrastructure, as illustrated in Figure 7.
Figure 7: Management Components. The figure shows the VMware management cluster (vMotion enabled, VMware HA enabled, VMware DRS enabled, VMware DPM disabled) hosting individual VMs for the VMware vCloud Connector Server and Connector Node, Dell Active System Manager, OpenManage Essentials with EqualLogic SAN HQ, Dell Repository Manager, and vCenter Server with the OpenManage plug-in for vCenter and the EqualLogic Virtual Storage Manager; boxes represent individual VMs in the management cluster, server images represent individual services, and the cluster connects to the core network and to the SQL, DNS, and AD servers.
The remainder of this section provides an introduction to each component and how it is integrated into the Active System 800v solution.
9.1 Dell Active System Manager
As described in section 3.2 Dell Active System Manager, the Dell Active System Manager is the Active Infrastructure management software that i
14. option is that it is easy to configure and provides load balancing across VMs especially in the case of a large number of VMs Uplinks There are several options to uplink the Force10 switches to the core network Selecting the uplink option depends on the customer core network and customer requirements One simple option is to create multiple uplinks on each switch and connect them to the core network switches Uplink LAGs can then be created from the Force10 4810 switches to the core network 8 Storage Architecture EqualLogic PS6110 provides capabilities essential to the Active System 800v design like 10Gb connectivity flexibility in configuring RAID arrays and creating volumes thin provisioning and storage tiering while providing tight integration with VMware vSphere for better performance and manageability through the use of EqualLogic MEM and EqualLogic VSM for VMware 8 1 EqualLogic Group and Pool Configuration Each EqualLogic array or member is assigned to a particular group Groups help in simplifying management by enabling management of all members in a group from a single interface Each group contains one or more storage pools Each pool must contain one or more members and each member is associated with only one storage pool The iSCSI volumes are created at the pool level In the case where multiple members are placed in a single pool the data is distributed amongst the members of the pool With data being distributed over
15. the ESXi Enterprise Plus license level include e VMware vMotion VMware vMotion technology provides real time migration of running virtual machines VM from one host to another with no disruption or downtime e VMware High Availability HA VMware HA provides high availability at the virtual machine VM level Upon host failure VMware HA automatically re starts VMs on other physical hosts running ESXi VMware vSphere 5 1 uses Fault Domain Manager FDM for High Availability e VMware Distributed Resource Scheduler DRS and VMware Distributed Power Management DPM VMware DRS technology enables vMotion to automatically achieve load balancing according to resource requirements When VMs in a DRS cluster need fewer resources such as during nights and weekends DPM consolidates workloads onto fewer hosts and powers off the rest to reduce power consumption Page 4 e VMware vCenter Update Manager VMware vCenter Update Manager automates patch management enforcing compliance to patch standards for VMware ESXi hosts e VMware Storage vMotion VMware Storage vMotion enables real time migration of running VM disks from one storage array to another with no disruption or downtime It minimizes service disruptions due to planned storage downtime previously incurred for rebalancing or retiring storage arrays e Host Profiles Host Profiles standardize and simplify the deployment and management of VMware ESXi host configurations They capture
16. Inside each Active System 800v a Virtual Link Trunking interconnect VLTi is configured between the two Force10 54810 switches using the Virtual Link Trunking VLT technology VLT peer LAGs are configured between the PowerEdge M I O Aggregator modules and Force10 4810 switches and also between the Force10 4810 switch and the Force10 4810 switches Virtual Link Trunking technology allows a server or bridge to uplink a single trunk into more than one Force10 4810 switch and to remain unaware of the fact that the single trunk is connected to two different switches The switches a VLT pair make themselves appear as a single switch for a Page 16 connecting bridge or server Both links from the bridge network can actively forward and receive traffic VLT provides a replacement for Spanning Tree based networks by providing both redundancy and active active full bandwidth utilization Major benefits of VLT technology are 1 Dual control plane on the access side that lends resiliency 2 Full utilization of the active LAG interfaces 3 Rack level maintenance is hitless and one switch can be kept active at all times Note that the two switches can also be stacked together However this is not recommended as this configuration will incur downtime during firmware updates of the switch or failure of stack links NPAR configuration In Active System 800v each port of the Broadcom 57810 k Dual port 10GbE KR Blade NDCs in the PowerEdge M620
17. Reference Architecture for an Active System 800 with VMware vSphere Release 1 0 for Dell PowerEdge 12 Generation Blade Servers Dell Force10 Switches and Dell EqualLogic iSCSI SAN with Dell Active System Manager Dell Virtualization Solutions Engineering Revision A00 Active System 800v with VMware vSphere Reference Architecture This document is for informational purposes only and may contain typographical errors and technical inaccuracies The content is provided as is without express or implied warranties of any kind 2012 Dell Inc All rights reserved Dell and its affiliates cannot be responsible for errors or omissions in typography or photography Dell the Dell logo OpenManage Force10 Kace EqualLogic PowerVault PowerConnect and PowerEdge are trademarks of Dell Inc Intel and Xeon are registered trademarks of Intel Corporation in the U S and other countries Microsoft Windows Hyper V and Windows Server are either trademarks or registered trademarks of Microsoft Corporation in the United States and or other countries VMware vSphere ESXi vMotion vCloud and vCenter are registered trademarks or trademarks of VMware Inc in the United States and or other jurisdictions Linux is the registered trademark of Linus Torvalds in the U S and other countries Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products Dell disclaims propr
18. ach PowerEdge M620 can deploy up to 24x 32GB DIMMs or 768GB of RAM per blade 12TB of RAM in a single M1000e chassis e Agent Free management with the new iDRAC7 with Lifecycle Controller allows customers to deploy update maintain and monitor their systems throughout the system lifecycle without a software management agent regardless of the operating system e The PowerEdge Select Network Adapter formerly NDC on the PowerEdge M620 offers three modular choices for embedded fabric capability With 10Gb CNA offerings from Broadcom QLogic amp Intel our customers can choose the networking vendor and technology that s right for them and their applications and even change in the future as those needs evolve over time The Broadcom and QLogic offerings offer Switch Independent Partitioning technology developed in partnership with Dell which allows for virtual partitioning of the 10Gb ports I O Modules The Dell blade chassis has three separate fabrics referred to as A B and C Each fabric can have two I O modules for a total of six O module slots in the chassis The 1 0 modules are A1 A2 B1 B2 C1 and C2 Each I O module can be an Ethernet physical switch an Ethernet pass through module FC switch or FC pass through module InfiniBand switch modules are also supported Each half height blade server has a dual port network daughter card NDC and two optional dual port mezzanine I O cards The NDC connects to Fabric A One me
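The memory figures quoted above can be checked with simple arithmetic; the following lines only verify the multiplication and assume the 16-blade, half-height chassis configuration.

```python
dimms_per_blade, dimm_size_gb, blades_per_chassis = 24, 32, 16

ram_per_blade_gb = dimms_per_blade * dimm_size_gb               # 24 x 32 GB = 768 GB per blade
ram_per_chassis_tb = ram_per_blade_gb * blades_per_chassis / 1024
print(ram_per_blade_gb, "GB per blade,", ram_per_chassis_tb, "TB per M1000e chassis")  # 768 GB, 12.0 TB
```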
ack. The PDUs are not shown in the illustration because they will vary by region or customer power requirements.
Figure 8: Active System 800v Single Chassis Rack Overview. The rack contains a Force10 S55, 2x Force10 S4810, 2x PowerEdge R620, a KMM, 4x EqualLogic PS6110 arrays, and a PowerEdge M1000e with 16x PowerEdge M620 and 2x 10GbE PowerEdge M I/O Aggregators.
Figure 9: Active System 800v Two Chassis and Maximum Storage Rack Overview. The racks contain a Force10 S55, 2x Force10 S4810, 2x PowerEdge R620, two PowerEdge M1000e chassis (each with 16x PowerEdge M620 and 2x 10GbE PowerEdge M I/O Aggregators), and 8x EqualLogic PS6110 arrays.
12 Reference
Dell Active Infrastructure references:
• Dell Active System Manager
• Dell Active Infrastructure Wiki
VMware references:
• VMware vSphere Edition Comparisons
• VMware vSphere Compatibility Matrixes
• VMware High Availability (HA) Deployment Best Practices
• VMware Virtual Networking Concepts
Dell PowerEdge references:
• Dell PowerEdge M1000e Technical Guide
• Dell PowerEdge M I/O Aggregator Configuration Quick Reference
• NIC Partitioning (NPAR)
Dell EqualLogic references:
• EqualLogic Technical Content
• Dell EqualLogic PS Series Architecture Whitepaper
20. bE access ports with up to four modular 10GbE uplinks in 1 RU to conserve valuable rack space The Force10 S55 switch incorporates multiple architectural features that optimize data center network efficiency and reliability including reversible front to back or back to front airflow for hot cold aisle environments and redundant hot swappable power supplies and fans For more information on Force10 switches see Dell com force10 3 8 Dell EqualLogic PS6110 Series iSCSI SAN Arrays The Dell EqualLogic PS6110 series arrays are 10GbE iSCSI SAN arrays The EqualLogic PS6110 arrays provide 10GbE connectivity using SPF or lower cost 10GBASE T A dedicated management port allows better utilization of the 10GbE ports for the storage network I O traffic by segmenting the Page 8 management traffic The PS6110 Series 10GbE arrays can use Data Center Bridging DCB to improve Ethernet quality of service and greatly reduce dropped packets for an end to end iSCSI over DCB solution from host adapters to iSCSI target The key features of the EqualLogic PS6110 series arrays are e Dedicated 10GbE ports that enable you to use SFP or 10GBASE T cabling options e Simplified network storage management with a dedicated management port e 2 5 drives in 2U or 3 5 drives in 4U form factors e SAS NL SAS and solid state drive and hybrid options available e Supports DCB and DCBX technologies for use in a converged LAN amp iSCSI SAN network e Efficient data prot
21. controlled independently for each priority e Enhanced Transmission Selection ETS This capability provides a framework and mechanism for bandwidth management for different traffic types by assigning bandwidth to different frame priorities e Data Center Bridging Exchange DCBX This functionality is used for conveying the capabilities and configuration of the above features between neighbors to ensure consistent configuration across the network Dell Force10 4810 switches Dell PowerEdge M I O Aggregator modules Broadcom 57810 k Dual port 10GbE KR Blade NDCs and EqualLogic PS6110 iSCSI SAN arrays enable Active System 800v to utilize these technologies features and capabilities to support converged network architecture 7 1 Converged Network Connectivity The Active System 800v design is based upon a converged network All LAN and iSCSI traffic within the solution share the same physical connections The following section describes the converged network architecture of Active System 800v Connectivity between hypervisor hosts and converged network switches The compute cluster hypervisor hosts PowerEdge M620 blade servers connect to the Force10 54810 switches through the PowerEdge M I O Aggregator I O Modules in the PowerEdge M1000e blade chassis The management cluster hypervisor hosts PowerEdge R620 rack servers directly connect to the Force10 4810 switches e Connectivity between the Dell PowerEdge M620 blade servers and Dell Po
The iSCSI offload protocol is enabled on one of the partitions on each port of the NDC or the Network Adapter. With iSCSI hardware offload, all iSCSI sessions are terminated on the Broadcom NDC or on the Broadcom Network Adapter.
Traffic isolation using VLANs
Within the converged network, the LAN traffic is separated into four unique VLANs: one VLAN each for management, vMotion, VM traffic, and out-of-band management. The iSCSI traffic also uses a unique VLAN. Network traffic is tagged with the respective VLAN ID for each traffic type in the virtual switch. Routing between the management and out-of-band management VLANs is required to be configured in the core or in the Force10 S4810 switches. Additionally, the Force10 S4810 switch ports that connect to the blade servers are configured in VLAN trunk mode to pass traffic with different VLANs on a given physical port. Table 2 below provides an overview of the different traffic types segregated by VLANs in the Active System 800v and the edge devices with which they are associated.
Table 2: VLAN Overview
Traffic Type | Description | Associated Network Device
Management | vSphere management traffic and Active System 800v management services | Broadcom NDC and Broadcom Network Adapter
vMotion | VMware vMotion traffic | Broadcom NDC and Broadcom Network Adapter
VM | LAN traffic generated by compute cluster VMs | Broadcom NDC and Broadcom Network Adapter
iSCSI | iSCSI SAN traffic | Broadcom NDC and Broadcom Network Adapter
Out-of-Band Management | Out-of-band management traffic | iDRAC, CMC, and EqualLogic management ports
23. dapters internally and externally to upstream network devices Internally the PowerEdge M I O Aggregator provides thirty two 32 connections The connections are 10 Gigabit Ethernet connections for basic Ethernet traffic iSCSI storage traffic or FCoE storage traffic In a typical PowerEdge M1000e configuration with 16 half height blade server ports 1 16 are used and 17 32 are disabled If quad port CAN Network adapters or quarter height blade servers are used then ports 17 32 will be enabled The PowerEdge M I O Aggregator includes two integrated 40Gb Ethernet ports on the base module These ports can be used in a default configuration with a 4 X 10Gb breakout cable to provide four 10Gb links for network traffic Alternatively these ports can be used as 40Gb links for stacking The Dell PowerEdge M I O Aggregator also supports three different types of add in expansion modules which are called FlexlO Expansion modules The modules available are 4 port 10Gbase T FlexlO module 4 port 10G SFP FlexlO module and the 2 port 40G QSFP FlexlO module The PowerEdge M I O Aggregator modules can be managed through the PowerEdge M1000e Chassis Management Controller CMC GUI Also the out of band management port on the PowerEdge M I O Aggregator is reached by connection through the CMC s management port This one management port on the CMC allows for management connections to all 1 0 modules within the PowerEdge M1000e chassis For more information on
24. dle with the latest updates for a Dell PowerEdge M620 DRM can be used in conjunction with other OpenManage tools helps to ensure that your PowerEdge server is kept up to date For more information on Dell Repository Manager see http content dell com us en enterprise d solutions repository manager 9 4 Dell Management Plug in for VMware vCenter DMPVV Dell Management Plug in for VMware vCenter is deployed as a virtual appliance within the management cluster and is attached to the VMware vCenter Server within the Active System 800v stack DMPVV communicates with the VMware vCenter Server the hypervisor management interfaces and server out of band management interfaces iDRAC For ease of appliance firmware updates and warranty information it is recommend that the DMPVV appliance has access to an internet connect either directly or thought a proxy Dell Management Plug in for VMware vCenter enables customers to e Get deep level detail from Dell servers for inventory monitoring and alerting all from within vCenter e Apply BIOS and Firmware updates to Dell servers from within vCenter e Automatically perform Dell recommended vCenter actions based on Dell hardware alerts e Access Dell hardware warranty information online e Rapidly deploy new bare metal hosts using Profile features For more information see the web page for Dell Management Plug in for VMware vCenter Page 24 9 5 Dell EqualLogic Virtual Storage Manager VSM
25. e Dell vCloud website 4 Design Principles The following principles are central to the design and architecture of Active System 800v Solution 1 Converged Network The infrastructure is designed to achieve end to end LAN and SAN convergence Redundancy with no single point of failure Redundancy is incorporated in every critical aspect of the solution including server high availability features networking and storage Management Provide integrated management using VMware vCenter Dell Management Plug in for VMware vCenter Dell OpenManage Essentials and Equallogic Virtual Storage Manager VSM for VMware plug in Cloud Enabled The solution also includes connectivity to Dell vCloud using VMware vCloud Connector Integration into an existing data center This architecture assumes that there is an existing 10 Gb Ethernet infrastructure with which to integrate Hardware configuration for virtualization This solution is designed for virtualization for most general cases Each blade server is configured with appropriate processor memory and network adapters as required for virtualization Racked Cabled and Ready to be deployed Active System 800v is available racked cabled and delivered to the customer site ready for deployment Components are configured and racked to optimize airflow and thermals Based on customer needs different rack sizes and configurations are available to support various datacenter requirements
26. ection and simplified management and operation of the EqualLogic SAN through tight integration with Microsoft VMware and Linux host operating platforms e Includes a full featured array monitoring and analysis tool to help strengthen your ability to analyze and optimize storage performance and resource allocation For more information on EqualLogic storage see Dell com equallogic 3 9 PowerEdge R620 Management Server The Dell PowerEdge R620 uses Intel Xeon E5 2600 series processors and Intel chipset architecture in a 1U rack mount form factor These servers support up to ten 2 5 drives and provide the option for an LCD located in the front of the server for system health monitoring alerting and basic management configuration An AC power meter and ambient temperature thermometer are built into the server both of which can be monitored on this display without any software tools The server features two CPU sockets and 24 memory DIMM slots For more information see the PowerEdge R620 guides at Dell com PowerEdge 3 10 Dell Management Plug in for VMware vCenter Dell Management Plug in for VMware vCenter is included in the solution This enables customers to e Get deep level detail from Dell servers for inventory monitoring and alerting all from within vCenter e Apply BIOS and Firmware updates to Dell servers from within vCenter e Automatically perform Dell recommended vCenter actions based on Dell hardware alerts e Acc
em Manager. Active System Manager simplifies complex and error-prone infrastructure lifecycle management activities, like discovery, inventory, deployment, configuration, and on-going monitoring and management, through automation and by collapsing the management interfaces into a highly optimized, guided workflow. By simplifying and automating these activities through a wizard-driven graphical user interface, Dell Active System Manager enables IT to respond rapidly to business needs, maximize data center efficiency, and strengthen the quality of IT service delivery.
2 Audience
IT administrators and IT managers who have purchased or are planning to purchase an Active System configuration can use this document to understand the design elements, hardware and software components, and the overall architecture of the solution.
3 Overview
This section provides a high-level product overview of VMware vSphere, Dell PowerEdge blade servers, the Dell PowerEdge M I/O Aggregator, the Dell Force10 S4810 switch, the Dell Force10 S55 switch, and Dell EqualLogic storage, as illustrated in Figure 1. Readers can skip the sections of products with which they are familiar.
Figure 1: Active System 800v Overview. The figure summarizes the solution components, beginning with VMware vSphere 5.1 (hypervisor, vMotion, Storage vMotion, VMware HA and DRS) and Dell PowerEdge blade servers for the compute cluster (energy-efficient PowerEdge M1000e enclosure, 12th-generation M620 blade servers, FlexAddress, and CMC and iKVM for enclosure management).
enabling NIC failover and load balancing for each vSwitch. On the management cluster hosts, the PowerEdge R620 rack servers, one vSwitch each is created for management traffic, vMotion traffic, and iSCSI traffic. In this case all VMs are management VMs, so the VM traffic and the vSphere management traffic are on the same management VLAN. Due to this fact, the VM traffic port group and the vSphere management traffic port group are on the same vSwitch. The resultant compute cluster and management cluster hypervisor host configuration is illustrated in Figure 6.
Figure 6: vSwitch and NPAR Configuration for the Hypervisor Hosts. The figure shows a PowerEdge M620 blade server (compute cluster host) connecting through the two Dell PowerEdge M I/O Aggregator modules, and a PowerEdge R620 rack server (management cluster host) with vSwitch0, vSwitch1, and vSwitch2, both reaching the two Force10 S4810 switches (joined by a VLTi) over the converged LAN and SAN links.
Load Balancing and Failover
This solution uses the "Route based on the originating virtual switch port ID" configuration at the vSwitch for load balancing the LAN traffic. Any given virtual network adapter will use only one physical adapter port at any given time. In other words, if a VM has only one virtual NIC, it will use only one physical adapter port at any given time. The reason for choosing this option is that it is easy to configure and provides load balancing across VMs, especially in the case of a large number of VMs.
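The "Route based on originating virtual port ID" policy described above can be illustrated with a short sketch. This is a simplified model for intuition only, not VMware code; the uplink names and port IDs are made up for the example.

```python
def uplink_for_port(virtual_port_id: int, uplinks: list) -> str:
    """Pick one physical uplink per virtual switch port.
    A given vNIC keeps using the same uplink until the team changes (simplified model)."""
    return uplinks[virtual_port_id % len(uplinks)]

team = ["vmnic-partition-0", "vmnic-partition-1"]   # two NPAR partitions teamed per vSwitch

# Each VM vNIC gets its own virtual port ID, so VMs spread across both uplinks,
# but any single vNIC only ever uses one uplink at a time.
for port_id in range(4):
    print(f"virtual port {port_id} -> {uplink_for_port(port_id, team)}")
```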
Table 1 below describes the key solution components and the roles served.
Table 1: Solution Components
Component | Details
Hypervisor Server | Up to 2x Dell PowerEdge M1000e chassis with up to 32x Dell PowerEdge M620 blade servers and embedded VMware vSphere 5.1
Converged Fabric Switch | 2x Dell Force10 S4810; 2x Dell PowerEdge M I/O Aggregator in each Dell PowerEdge M1000e chassis
Storage | Up to 8x Dell EqualLogic PS6110 series arrays
Management Infrastructure | 2x Dell PowerEdge R620 servers with embedded VMware vSphere 5.1 hosting management VMs; 1x Dell Force10 S55 used as a 1Gb out-of-band management switch
Management components | Dell Active System Manager (hosted in the management infrastructure); VMware vCenter Server; Dell Management Plug-in for VMware vCenter; Dell OpenManage Essentials; Dell EqualLogic Virtual Storage Manager (VSM) for VMware; Dell EqualLogic SAN HeadQuarters (HQ); VMware vCloud Connector; Dell Repository Manager
3.1 VMware vSphere 5.1
VMware vSphere 5.1 includes the ESXi hypervisor as well as vCenter Server, which is used to configure and manage VMware hosts. Key capabilities for
7.2 Converged Network Configuration
This section provides details of the different configurations in the Active System 800v that enable the converged network in the solution.
DCB Configuration
Data Center Bridging (DCB) and Data Center Bridging Exchange (DCBX) technologies are used in Active System 800v to enable converged networking. The Force10 S4810 switches, PowerEdge M I/O Aggregator modules, Broadcom 57810-K Dual-Port 10GbE KR Blade NDCs, Broadcom 57810 Dual-Port 10Gb Network Adapters, and EqualLogic PS6110 iSCSI SAN arrays support DCB and DCBX. Within the Active System 800v environment, DCB settings are configured within the Force10 S4810 switches. Utilizing the DCBX protocol, these settings are then automatically propagated to the PowerEdge M I/O Aggregator modules. Additionally, the DCB settings are also propagated to the network end nodes, including the Broadcom Network Adapters in PowerEdge R620 rack servers, the Broadcom NDCs in the PowerEdge M620 blade servers, and the EqualLogic PS6110 storage controllers. The DCB settings are not propagated to the Force10 S55 out-of-band management switch and the associated out-of-band management ports, but the out-of-band management traffic going to the core from the Force10 S55 switch traverses the Force10 S4810 switches. When the out-of-band management traffic traverses the Force10 S4810 switches, it obeys the DCB settings. DCB technologies enable each switch port and each network device port in the converged network to simultaneously carry multiple traffic classes while guaranteeing performance and QoS.
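The DCB behavior used here, two ETS traffic classes whose bandwidth shares only matter under contention, plus PFC on the iSCSI class, can be sketched as follows. This is a simplified model for intuition, not switch configuration syntax; the 50/50 bandwidth split is an illustrative assumption, not the validated Active System 800v setting.

```python
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    ets_share: float      # ETS bandwidth share, enforced only under contention
    pfc_enabled: bool     # PFC makes the class lossless by pausing the sender

# Illustrative assumption: equal shares ("LAN" covers management, vMotion, and VM traffic).
classes = [
    TrafficClass("iSCSI", ets_share=0.5, pfc_enabled=True),
    TrafficClass("LAN",   ets_share=0.5, pfc_enabled=False),
]

def allocate(link_gbps: float, demand_gbps: dict) -> dict:
    """Per-class bandwidth on one converged port: without contention a class may use
    whatever it asks for; under contention the ETS shares cap each class (simplified)."""
    if sum(demand_gbps.values()) <= link_gbps:
        return dict(demand_gbps)
    return {c.name: min(demand_gbps[c.name], link_gbps * c.ets_share) for c in classes}

print(allocate(10, {"iSCSI": 3, "LAN": 4}))   # no contention: demands are met as-is
print(allocate(10, {"iSCSI": 8, "LAN": 7}))   # contention: ETS shares cap each class
```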
31. ess Dell hardware warranty information online e Rapidly deploy new bare metal hosts using Profile features For more information see the web page for Dell Management Plug in for VMware vCenter Page 9 3 11 Dell Cloud Connectivity using VMware vCloud Connector VMware vCloud Connector lets you view operate on and transfer your computing resources across vSphere and vCloud Director in your private cloud environment as well as Dell vCloud public cloud Expand your view across hybrid clouds Use a single pane of glass management interface that seamlessly spans your private vSphere and public Dell vCloud environment Extend your datacenter Move VMs vApps and templates from private vSphere to a Dell vCloud to free up your on premise datacenter resources as needed Consume cloud resources with confidence Run Development QA and production workloads using Dell vCloud a VMware technology based public cloud The Dell Cloud with VMware vCloud Datacenter is an enterprise class multi tenant infrastructure as a service laaS public cloud solution that is hosted in secured Dell data centers Utilizing VMware vCloud Connector Dell Cloud provides you with unique hybrid cloud capabilities to extend your internal data center with Dell and VMware by transitioning your VMware virtualized workloads into our vCloud data center vCloud hosting provides you with a secure manageable and flexible public cloud application For more information se
32. f Rack ToR switch purpose built for applications in high performance data center and computing environments Leveraging a non blocking cut through switching architecture the 4810 delivers line rate L2 and L3 forwarding capacity with ultra low latency to maximize network performance The compact Force10 4810 design provides industry leading density of 48 dual speed 1 10 GbE SFP ports as well as four 40GbE QSFP uplinks to conserve valuable rack space and simplify the migration to 40Gbps in the data center core Each 40GbE QSFP uplink can support four 10GbE ports with a breakout cable Powerful Quality of Service QoS features coupled with Data Center Bridging DCB support to make the Force10 54810 ideally suited for iSCSI storage environments In addition the 4810 incorporates multiple architectural features that optimize data center network flexibility efficiency and availability including Force10 s stacking technology reversible front to back or back to front airflow for hot cold aisle environments and redundant hot swappable power supplies and fans For more information on Force10 switches see Dell com force10 3 7 Dell Force10 55 The Dell Force10 S Series 555 1 10 GbE ToR switch is designed for high performance data center applications The 555 leverages a non blocking architecture that delivers line rate low latency L2 and L3 switching to eliminate network bottlenecks The high density Force10 55 design provides 48G
33. for VMware Within Active System 800v the Dell EqualLogic Virtual Storage Manager VSM for VMware is deployed as a virtual appliance within the management cluster and is attached to the VMware vCenter Server within the Active System 800v stack VSM communicates with the dedicated management interfaces of the EqualLogic storage enclosures over the out of band network VSM enables customers to preform many storage administrative tasks from vSphere client including e Create Smart Copy snapshots replicas and clones of various types of VMware Infrastructure VI objects e Restore the state of virtual machines using saved Smart Copy snapshots and replicas e Setup replication of data stores and sets of data stores stored on one PS Series group to a secondary PS Series group potentially at a remote location for disaster tolerance e Recover from replicas on the secondary site including failover and failback of virtual machines and their data e Create Virtual Desktop Infrastructure VDI manual desktop pools e Provision of data stores on EqualLogic iSCSI volumes 9 6 Dell EqualLogic SAN HQ Within the Active System 800 Dell EqualLogic SAN HQ is installed on the same Windows 2008 R2 VM as OpenManage Essentials SAN HQ communicates with the dedicated management interface of the EqualLogic storage enclosure to gather performance and event logs Dell EqualLogic SAN HQ provides consolidated performance and robust event monitoring across mul
34. formance capacity utilization and trending group configuration with alerts replication status host connections and more 9 7 VMware vCloud Connector VMware vCloud Connector is an optional component of the Active System 800v solution When included it is deployed upon the management stack alongside other management VMs For the base functionality three VMs are necessary a single server VM and two node VMs The node VMs are responsibility for the physical transfer of VM workloads Within the Active System 800v two of these components the server and the local node are installed The third component remote node VM should be installed outside of the Active System 800v solution near the infrastructure to which it provides connectivity After deploying the VMware vCloud Connector node VMs the size of the virtual disk may have to be increased based on the size of expected VM to be transferred and the number of concurrent transfers anticipated As described in the section 3 11 of this document Dell Cloud Connectivity using VMware vCloud Connector VMware vCloud Connector lets you view operate on and transfer your computing resources across vSphere and vCloud Director in your private cloud environment as well as Dell vCloud public cloud The key capabilities provided by VMware vCloud Connector are e Expand your view across hybrid clouds Use a single pane of glass management interface that seamlessly span
35. iciency of storage utilization With each volume created and presented to the servers additional iSCSI sessions are initiated When planning the solution it is important to understand that group and pool limits exist for the number of simultaneous iSCSI sessions that can created For more information refer to the current EqualLogic Firmware FW Release Notes available at the EqualLogic Support site 8 4 Drive Types and Automated Tiered Storage Dell EqualLogic PS6110 arrays with the 10Gb dual controller configuration provide high bandwidth for data flows This bandwidth is complemented with a large variety of drives in multiple speeds and sizes including 10K RPM and 15K RPM SAS drives 7 2K RPM NL SAS drives and solid state disks The reference architecture presented in this document shows EqualLogic PS6110X arrays with 24 x 10K RPM SAS drives in each array The disk and array type should be selected by carefully considering the workload requirements Active System 800v supports a maximum of 8 x PS6110 arrays EqualLogic PS arrays provide IT organizations numerous techniques for storage tiering as a standard part of their all inclusive feature set These techniques extend the automation at the core of the PS Series design philosophy while allowing broad customization of storage tiers to suit a wide range of business and organizational requirements 8 5 Multipath Configuration The Dell EqualLogic PS Series storage array supports multiple
ietary interest in the marks and names of others.
Active System 800v with VMware vSphere Reference Architecture
Revision History
Revision | Description
A00 | Initial Version
Contents
1 Introduction ............................................. 2
2 Audience ................................................. 2
3 Overview ................................................. 2
4 Design Principles ........................................ 10
5 Reference Architecture ................................... 11
6 Dell Blade Network Architecture .......................... 12
7 Converged Network Architecture ........................... 13
8 Storage Architecture ..................................... 20
9 Management Infrastructure ................................ 22
10 Scalability ............................................. 26
11 Delivery Model .......................................... 27
12 Reference ............................................... 30
Figures
Figure 1 Active System 800v Overview ....................... 3
Figure 2 Active System 800v Network Topology (Logical View)  11
Figure 3 I/O Connectivity for PowerEdge M620 Blade Server .. 12
Figure 4 Converged Network Logical Connectivity ............ 15
37. istrators can split each 10GbE port of an NDC into four separate partitions or physical functions and allocate the desired bandwidth and resources as needed Each of these partitions is enumerated as a PCI Express function that appears as a separate physical NIC in the server operating systems BIOS and hypervisor Active System 800v solution takes advantage of NPAR Partitions are created for various traffic types and bandwidth is allocated as described in the following section Page 12 7 Converged Network Architecture One of the key attributes of the Active System 800v is the convergence of SAN and LAN over the same network infrastructure LAN and iSCSI SAN traffic share the same physical connections from servers to storage The converged network is designed using Data Center Bridging IEEE 802 1 and Data Center Bridging Exchange IEEE 802 1AB technologies and features The converged network design drastically reduces cost and complexity by reducing the components and physical connections and the associated efforts in deploying configuring and managing the infrastructure Data Center Bridging is a set of related standards to achieve enhance Ethernet capabilities especially in datacenter environments through converge network connectivity The functionalities provided by DCB and DCBX are e Priority Flow Control PFC This capability provides zero packet loss under congestion by providing a link level flow control mechanism that can be
Figure 1 callouts (continued) summarize the remaining solution components: the Dell PowerEdge M I/O Aggregator (high-performance, high-density blade switch, stackable for simplified management, modular to fit the business, with support for converged networking with Data Center Bridging); Dell PowerEdge rack servers for the management cluster (12th-generation R620 servers with concentrated computing power in a 1U form factor, large memory and I/O capacity, and powerful systems management with Dell iDRAC and Lifecycle Controller); Force10 S4810 switches for the converged network (high-density 48-port 10GbE switches with four 40GbE uplinks, ultra-low-latency non-blocking cut-through switching for line-rate L2 and L3 performance, integrated network automation and virtualization tools via the Open Automation Framework, and DCB support); the Dell Force10 S55 switch for management (high-density 48-port 1/10GbE, low-latency, non-blocking switch for line-rate L2 and L3 performance with Open Automation Framework tools); Dell EqualLogic storage (10GbE iSCSI SAN arrays with SFP+ and 10GBASE-T support, thin provisioning, storage tiering, DCB support, and integration with VMware); integrated management (Dell Active System Manager, VMware vCenter Server, Dell Management Plug-in for VMware vCenter, Dell OpenManage Essentials, Dell EqualLogic Virtual Storage Manager (VSM) for VMware, Dell EqualLogic SAN HeadQuarters (HQ), and Dell Repository Manager); and cloud enablement (VMware vCloud Connector for Dell vCloud connectivity).
o Force10 S4810 switches. This design ensures load balancing while maintaining redundancy.
• Connectivity between the Dell PowerEdge R620 rack servers and Force10 S4810 switches: Both of the PowerEdge R620 servers have two 10Gb connections to the Force10 S4810 switches through one Broadcom 57810 Dual-Port 10Gb Network Adapter in each of the PowerEdge R620 servers.
• Connectivity between the two converged network switches: The two Force10 S4810 switches are connected using Inter-Switch Links (ISLs) over two 40 Gbps QSFP links. Virtual Link Trunking (VLT) is configured between the two Force10 S4810 switches. This design eliminates the need for Spanning Tree based networks, and also provides redundancy as well as active-active full bandwidth utilization on all links.
• Connectivity between the converged network switches and iSCSI storage arrays: Each EqualLogic PS6110 array in Active System 800v uses two controllers. The 10Gb SFP+ port on each EqualLogic controller is connected to the Force10 S4810 switches. This dual-controller configuration provides high availability and load balancing.
Figure 4 below illustrates the resultant logical converged network connectivity within the Active System 800v solution.
Figure 4: Converged Network Logical Connectivity. The figure shows the PowerEdge M1000e chassis (through their PowerEdge M I/O Aggregators), the PowerEdge R620 servers, and the EqualLogic storage all connected through the converged LAN and iSCSI SAN network.
Hypervisor network configuration for LAN and iSCSI SAN traffic
The VMware ESXi hypervisor is configured for the LAN and iSCSI SAN traffic associated with the blade servers. LAN traffic in the Active System 800v solution is categorized into four traffic types: VM traffic, management traffic, vMotion traffic, and Out-of-Band (OOB) management traffic. OOB management traffic is associated with CMC, iDRAC, and EqualLogic SAN management traffic. VM traffic, management traffic, and vMotion traffic are associated with the blade servers in the compute cluster and the rack servers in the management cluster. Similarly, iSCSI SAN traffic is also associated with the blade servers and the rack servers. On each hypervisor host within the compute cluster and the management cluster, a virtual switch is created for each of the three LAN traffic types associated with the blade and the rack servers, and also for the iSCSI traffic. On the compute cluster hosts, the PowerEdge M620 blade servers, one vSwitch each is created for VM traffic, vSphere management traffic, vMotion traffic, and iSCSI traffic. Two partitions, one from each physical network port, are connected as uplinks to each of the virtual switches. This creates a team of two network ports.
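The traffic-type-to-VLAN and vSwitch layout described above can be captured in a small sketch. This is a simplified illustration, not an ESXi configuration script; the VLAN IDs are placeholders, since the actual IDs are chosen per deployment.

```python
# Placeholder VLAN IDs; real deployments choose their own values.
VLANS = {
    "management": 100,
    "vMotion":    101,
    "VM":         102,
    "iSCSI":      103,
    "OOB":        104,   # CMC, iDRAC, and EqualLogic management ports (no vSwitch on the hosts)
}

def compute_host_vswitches() -> dict:
    """One vSwitch per traffic type on a PowerEdge M620 compute host,
    each uplinked by a team of two NPAR partitions (one per physical port)."""
    return {
        f"vSwitch_{traffic}": {
            "vlan": VLANS[traffic],
            "uplinks": [f"port0_partition_{traffic}", f"port1_partition_{traffic}"],
        }
        for traffic in ("management", "vMotion", "VM", "iSCSI")
    }

for name, cfg in compute_host_vswitches().items():
    print(name, cfg)
```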
Figure 5 Conceptual View of Converged Traffic Using DCB .................... 16
Figure 6 vSwitch and NPAR Configuration for the Hypervisor Hosts ........... 19
Figure 7 Management Components ............................................. 23
Figure 8 Active System 800v Single Chassis Rack Overview ................... 28
Figure 9 Active System 800v Two Chassis and Maximum Storage Rack Overview .. 29
1 Introduction
Dell Active Infrastructure is a family of converged infrastructure solutions that combine servers, storage, networking, and infrastructure management into an integrated and optimized system that provides general-purpose virtualized resource pools. Active Infrastructure leverages Dell innovations, including unified management (Active System Manager), converged LAN/SAN fabrics, and a modular server architecture, for the ultimate converged infrastructure solution. Active Infrastructure helps IT rapidly respond to dynamic business demands, maximize data center efficiency, and strengthen IT service quality. The Active System 800 solution, a member of the Dell Active Infrastructure family, is a converged infrastructure solution that has been designed and validated by Dell Engineering. It is available to be racked, cabled, and delivered to yo
42. The PowerEdge R620 servers and one Dell Force10 S55 1/10GbE Ethernet switch are used for the management infrastructure. The Force10 S55 switch is used for out-of-band management connectivity for the Dell CMC, Dell iDRAC, and the management ports on the Dell EqualLogic arrays. The management cluster infrastructure mirrors the compute cluster in its use of converged network infrastructure and configuration: the PowerEdge R620 servers are connected to the Force10 S4810 switches using Broadcom 57810 Dual-Port 10Gb network adapters, and the management servers are connected to the EqualLogic storage through the two Force10 S4810 switches. Note that the EqualLogic storage is shared between the management cluster and the compute cluster, so it must be sized to provide sufficient capacity and bandwidth for both the management VMs and the compute VMs. The PowerEdge R620 servers run the VMware ESXi 5.1 hypervisor and are part of a dedicated vSphere cluster for management. VMware High Availability is enabled in that cluster to provide HA for the virtual machines. Admission control is disabled in the VMware HA cluster: if admission control were enabled, VMware HA would prevent putting one of the management servers into maintenance mode, since this would violate the HA policy of having more than one active server in the cluster (a configuration sketch for this setting appears below). The Active System 800v solution includes the necessary management components required to manage the Active System 800v infrastructure, including the Converged Infrastructure...
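As an illustration only, the HA and admission-control settings described above for the two-node management cluster can be expressed with a short pyVmomi sketch like the one below. The vCenter address, credentials, and cluster name are hypothetical placeholders; the delivered solution already has these settings applied, so this is not part of the documented deployment procedure.

```python
# Minimal pyVmomi sketch: enable vSphere HA on the management cluster and
# disable admission control, as described above. Address, credentials, and
# cluster name are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.local", "administrator@vsphere.local", "password"
CLUSTER_NAME = "Management-Cluster"                    # hypothetical cluster name

ctx = ssl._create_unverified_context()                 # lab use only
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER_NAME)

    spec = vim.cluster.ConfigSpecEx()
    spec.dasConfig = vim.cluster.DasConfigInfo()
    spec.dasConfig.enabled = True                      # turn on vSphere HA
    spec.dasConfig.admissionControlEnabled = False     # allow maintenance mode in a 2-node cluster

    task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    # A real script would wait for completion, e.g. with pyVim.task.WaitForTask(task).
finally:
    Disconnect(si)
```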
43. ...as part of the Active System 800v solution. The Dell Active System Manager virtual appliance is deployed on the management cluster. For fullest functionality, direct internet access or access through a proxy is recommended. Active System Manager addresses key factors that impact service levels, namely infrastructure configuration errors, incorrect problem troubleshooting, and slow recovery from failures. Active System Manager dramatically improves the accuracy of infrastructure configuration by reducing manual touch points. As highlighted in section 3.2, Dell Active System Manager provides capabilities such as template-based infrastructure provisioning, automated infrastructure configuration, infrastructure lifecycle management, and workload failover, and offers a guided user workflow through its wizard-driven graphical interface. For more information on Dell Active System Manager, see Dell Active System Manager.
9.2 Dell OpenManage Essentials (OME)
In the Active System 800v, Dell OpenManage Essentials (OME) is sized and configured to monitor the Active System 800v solution components. It is deployed on a Windows 2008 R2 virtual machine within the management cluster. High availability of the OME virtual machine is provided by the VMware High Availability service. OME utilizes a local SQL Express database. For fullest functionality, direct internet access or access through a proxy is recommended. Within the Active System 800v, OME is utilized...
44. ...your private vSphere and public Dell vCloud environment.
• Extend your datacenter: Move VMs, vApps, and templates from private vSphere to a Dell vCloud to free up your on-premise datacenter resources as needed.
• Consume cloud resources with confidence: Run development, QA, and production workloads using Dell vCloud, a VMware technology based public cloud.
The Dell Cloud with VMware vCloud Datacenter is an enterprise-class, multi-tenant, infrastructure-as-a-service (IaaS) public cloud solution that is hosted in secured Dell data centers. Utilizing VMware vCloud Connector, Dell Cloud provides you with unique hybrid cloud capabilities to extend your internal data center with Dell and VMware by transitioning your VMware virtualized workloads into our vCloud data center. vCloud hosting provides you with a secure, manageable, and flexible public cloud application.
10 Scalability
As workloads increase, the solution can be scaled to provide additional compute and storage resources independently.
Scaling Compute and Network Resources: This solution is configured with two Force10 S4810 network switches. Up to two PowerEdge M1000e chassis can be added to the two Force10 S4810 switches. In order to scale the compute nodes beyond two chassis, new Force10 S4810 switches need to be added. Additional switches can either be stacked together and/or connected to a distribution switch, based on customer needs.
Scaling Storage Resources: EqualLogic storage...
45. ...the Force10 S55 switch for out-of-band management connectivity. (Not shown in the diagram: Dell EqualLogic PS6110 Series arrays.) The figure shows high-level logical connectivity between the various components; subsequent sections of this document provide more detailed connectivity information.
6 Dell Blade Network Architecture
In Active System 800v, Fabric A in the PowerEdge M1000e blade chassis contains two Dell PowerEdge M I/O Aggregator modules, one in I/O module slot A1 and the other in slot A2, and is used for converged LAN and SAN traffic. Fabric B and Fabric C (I/O module slots B1, B2, C1, and C2) are not used. The PowerEdge M620 blade servers use the Broadcom 57810-k Dual-Port 10GbE KR Blade NDC to connect to Fabric A. The Dell PowerEdge M I/O Aggregator modules uplink to the Dell Force10 S4810 network switches, providing LAN and SAN connectivity. Figure 3 below illustrates how the fabrics are populated in the PowerEdge M1000e blade server chassis and how the I/O modules are utilized.
Figure 3: I/O Connectivity for PowerEdge M620 Blade Server (the Broadcom 57810-k 10Gb KR NDC connects to the PowerEdge M I/O Aggregator modules in Fabric A1 and A2; the Mezz B and Mezz C cards and Fabric B1/B2 and C1/C2 slots are unused)
Network Interface Card Partitioning (NPAR): NPAR allows splitting the 10GbE pipe on the NDC with no specific configuration requirements in the switches. With NPAR, admin...
46. ...multiple groups. The key benefits of EqualLogic SAN HQ include:
• Multi-group management: EqualLogic SAN HQ enables centralized monitoring of multiple EqualLogic PS Series groups from a single graphical interface.
• Comprehensive information about the EqualLogic PS Series arrays: EqualLogic SAN HQ provides comprehensive information on configuration, capacity, I/O performance, and network performance for EqualLogic PS Series groups, pools, members, disks, volumes, and volume collections. These in-depth analytical tools enable flexible, granular views of SAN resources and provide quick notification of hardware, capacity, and performance-related problems.
• Experimental analysis: EqualLogic SAN HQ collects information on the current hardware configuration and the distribution of reads and writes, and provides information about PS Series group performance relative to a specific workload. Customers can perform experimental analysis to determine whether a group has reached its full capabilities or whether they can increase the group workload with no impact on performance. This helps in identifying requirements for storage growth and future planning.
• Events and alerts: EqualLogic SAN HQ provides performance-related alerts, email alerts, and hardware alarms on multiple parameters. This feature ensures that users take timely action to make data more available and more secure.
• Formatted reports, graphs, and archives: Customizable reports and graphs are available on per...
47. Dell Services will deploy and configure the solution, tailored to business needs, so that the solution is ready to be integrated into your datacenter. Active System 800 is offered in configurations with either the VMware vSphere (Active System 800v) or the Microsoft Windows Server 2012 with Hyper-V role enabled (Active System 800m) hypervisor. This paper defines the Reference Architecture for the VMware vSphere based Active System 800v solution. Active System 800v offers a converged LAN & SAN fabric design to enable a converged infrastructure solution. The end-to-end converged network architecture in Active System 800v is based upon Data Center Bridging (DCB) technologies that enable convergence of all LAN and iSCSI SAN traffic into a single fabric. The converged fabric design of Active System 800v reduces complexity and cost while bringing greater flexibility to the infrastructure solution. Active System 800v includes the Dell PowerEdge M1000e blade chassis with Dell PowerEdge M I/O Aggregators, Dell PowerEdge M620 blades, Dell EqualLogic storage, Dell Force10 network switches, and VMware vSphere 5.1. The solution also includes Dell PowerEdge R620 servers as management servers. Dell Active System Manager, VMware vCenter Server, EqualLogic Virtual Storage Manager for VMware, and Dell OpenManage Essentials are included with the solution. One of the key components of Active System 800v is Dell Active System Manager...
48. ...PowerEdge M I/O Aggregators: The internal architecture of the PowerEdge M1000e chassis provides connectivity between the Broadcom 57810-k Dual-Port 10GbE KR Blade NDC in each PowerEdge M620 blade server and the internal ports of the PowerEdge M I/O Aggregator. The PowerEdge M I/O Aggregator has 32 x 10GbE internal ports. With one Broadcom 57810-k Dual-Port 10GbE KR Blade NDC in each PowerEdge M620 blade, blade servers 1-16 connect to internal ports 1-16 of each of the two PowerEdge M I/O Aggregators. Internal ports 17-32 of each PowerEdge M I/O Aggregator are disabled and not used.
• Connectivity between the Dell PowerEdge M I/O Aggregators and the Force10 S4810 switches: The two PowerEdge M I/O Aggregator modules are configured to operate as port aggregators, aggregating the 16 internal ports to eight external ports. The two fixed 40GbE QSFP ports on each PowerEdge M I/O Aggregator are used for network connectivity to the two Force10 S4810 switches. These two 40GbE ports on each PowerEdge M I/O Aggregator are used with a 4 x 10Gb breakout cable to provide four 10Gb links for network traffic from each 40GbE port. Out of the 4 x 10Gb links from each 40GbE port on each PowerEdge M I/O Aggregator, two links connect to one of the Force10 S4810 switches and the other two links connect to the other Force10 S4810 switch. Due to this design, each PowerEdge M1000e chassis with two PowerEdge M I/O Aggregator modules has a total of 16 x 10Gb links to the two Force10 S4810 switches; the link arithmetic is summarized in the short sketch below.
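The uplink arithmetic in this design can be summarized in a few lines of illustrative Python; the constants simply restate the numbers given above (two aggregators per chassis, two 40GbE QSFP ports per aggregator, and a 4 x 10Gb breakout per port).

```python
# Illustrative arithmetic for the chassis uplink design described above.
AGGREGATORS_PER_CHASSIS = 2        # one M I/O Aggregator in slot A1, one in slot A2
QSFP_PORTS_PER_AGGREGATOR = 2      # fixed 40GbE ports used for uplinks
BREAKOUT_LINKS_PER_QSFP = 4        # 4 x 10Gb breakout per 40GbE port

links_per_aggregator = QSFP_PORTS_PER_AGGREGATOR * BREAKOUT_LINKS_PER_QSFP   # 8 x 10Gb
links_per_chassis = AGGREGATORS_PER_CHASSIS * links_per_aggregator           # 16 x 10Gb

# Each aggregator splits its links evenly across the two Force10 S4810 switches.
links_per_switch = links_per_chassis // 2                                    # 8 x 10Gb per switch

print(links_per_aggregator, links_per_chassis, links_per_switch)  # 8 16 8
```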
49. ...One mezzanine I/O card attaches to Fabric B, with the remaining mezzanine I/O card attached to Fabric C.
Chassis Management: The Dell PowerEdge M1000e has integrated management through a redundant Chassis Management Controller (CMC) module for enclosure management and integrated Keyboard, Video, and Mouse (iKVM) modules. Through the CMC, the enclosure supports FlexAddress Plus technology, which enables the blade enclosure to lock the World Wide Names (WWNs) of the FC controllers and the Media Access Control (MAC) addresses of the Ethernet controllers to specific blade slots. This enables seamless swapping or upgrading of blade servers without affecting the LAN or SAN configuration.
Embedded Management with Dell's Lifecycle Controller: The Lifecycle Controller is the engine for advanced embedded management and is delivered as part of iDRAC Enterprise in 12th-generation Dell PowerEdge blade servers. It includes 1GB of managed and persistent storage that embeds systems management features directly on the server, thus eliminating the media-based delivery of the system management tools and utilities previously needed for systems management. Embedded management includes:
• Unified Server Configurator (USC): aims at local 1-to-1 deployment via a graphical user interface (GUI) for operating system installation, updates, and configuration, and for performing diagnostics on single, local servers. This eliminates the need for multiple option ROMs for hardware configuration.
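The embedded management described above is reachable entirely out-of-band through the iDRAC. Although not covered in this document, recent iDRAC firmware releases also expose a DMTF Redfish REST API, and the short sketch below, which queries only the standard /redfish/v1/Systems collection, illustrates the kind of agent-free inventory this enables. The iDRAC address and credentials shown are placeholder defaults, not values from this reference architecture.

```python
# Illustrative only: read basic system inventory out-of-band from an iDRAC
# using the standard Redfish /redfish/v1/Systems collection.
import requests
import urllib3

urllib3.disable_warnings()                              # iDRACs often use self-signed certificates

IDRAC, USER, PWD = "192.168.0.120", "root", "calvin"    # placeholder OOB address and credentials
BASE = f"https://{IDRAC}"
AUTH = (USER, PWD)

# Enumerate the systems managed by this iDRAC (a blade exposes one entry).
systems = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    sysinfo = requests.get(BASE + member["@odata.id"], auth=AUTH, verify=False).json()
    print(sysinfo.get("Model"),
          sysinfo.get("SerialNumber"),
          sysinfo.get("PowerState"),
          sysinfo.get("Status", {}).get("Health"))
```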