
Hitachi BladeSymphony 1000 User's Manual


Contents

1. Figure 16: PCI-X I/O Module block diagram. Table 6 provides information on the connector types for PCI-X I/O Modules. Table 6, PCI-X I/O Module connector types: PCI-X slots 0 to 7 use the PCI-X 133 protocol at 133 MHz with a 64-bit bus width and support PCI hot plug; SCSI connectors 0 and 1 use Ultra 320 at 160 MHz with a 16-bit bus width (LVD), and each I/O module has two SCSI connector ports. PCIe I/O Module: To provide more flexibility and to support newer PCI cards, a PCIe I/O Module is available. The PCIe I/O Module supports eight PCIe cards in total, and one I/O module can have one PCIe card assigned per server blade. The PCIe I/O Module uses a PCIe hot-plug controller manufactured by Micrel, and hot plug is supported for each PCIe slot in the module. The operating system must support hot plug in order for this operation to be successful. PCIe I/O Module Combo Card: A PCIe I/O Module Combo Card is also available for the BladeSymphony 1000.
2. Figure 5: Memory configuration. The memory system of the Intel Itanium Server Blade includes several RAS features:
- ECC protection (S2EC-D2ED): detects an error in any two sets of consecutive two bits and corrects an error in any one set of consecutive two bits.
- ECC (Chipkill equivalent): the ECC can correct an error in four consecutive bits in any four-DIMM set, that is, a fault in one DRAM device. This function is equivalent to the technology generally referred to as Chipkill and allows the contents of memory to be reconstructed even if one chip completely fails. The concept is similar to the way RAID protects content on disk drives.
- Memory device replacing function: the NDC and MC can replace a faulty DRAM device with a normal spare device, assisted by the System Abstraction Layer (SAL) firmware. This keeps the ECC function (S2EC-D2ED) operating. Up to two DRAM devices can be replaced in any one set of four DIMMs.
- Memory hierarchy (table of size, bandwidth, and latency): L1 cache, L2 cache, L3 cache, on-board memory, off-board memory.
- Interleaved vs. non-interleaved memory configuration.
- ccNUMA (cache-coherent non-uniform memory access) description.
SMP Capabilities: While dual-processor systems are now commonplace, increasing the number of processor sockets beyond two poses many challenges in computer design, particularly in the memory subsystem.
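The RAID analogy above can be made concrete with a small sketch. The code below is only an illustration of the reconstruction idea (XOR parity across devices); the actual S2EC-D2ED/Chipkill ECC is a symbol-based code, and the device contents here are hypothetical.

```python
# Illustrative sketch only: a RAID-style XOR parity analogy for how data from a
# failed device can be rebuilt from the surviving devices plus parity.  The real
# S2EC-D2ED / Chipkill ECC is a symbol-based code, not simple XOR.
from functools import reduce

def make_parity(devices: list[bytes]) -> bytes:
    """XOR all data devices together to form the parity device."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*devices))

def reconstruct(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the contents of the single failed device from survivors and parity."""
    return make_parity(surviving + [parity])

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # three hypothetical DRAM devices
parity = make_parity(data)

lost = data[1]                                    # pretend device 1 fails completely
rebuilt = reconstruct([data[0], data[2]], parity)
assert rebuilt == lost                            # contents recovered, as with Chipkill
```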
3. Supported OS: Microsoft Windows Server 2003 SP2 Standard Edition; Microsoft Windows Server 2003 SP2 Enterprise Edition; Microsoft Windows Server 2003 SP2 Standard x64 Edition; Microsoft Windows Server 2003 SP2 Enterprise x64 Edition; Red Hat Enterprise Linux ES 4; Red Hat Enterprise Linux AS 4. Intel Xeon 5200 Dual-Core Processors: The Dual-Core Intel Xeon 5200 Series processors use two 45 nm Hi-k next-generation Intel Core microarchitecture cores. The processors feature fast data throughput with Intel I/O Acceleration Technology and up to 6 MB of L2 cache that can be allocated entirely to one core, and they support both 32-bit and 64-bit applications. The energy-efficient processors are optimized for low-power, dual-core, 64-bit computing. The Intel Core microarchitecture integrates an efficient 14-stage pipeline and a memory architecture designed for greater processor throughput, with power management technologies that reduce power consumption without affecting performance. The architecture also supports direct L1-to-L1 cache transfer and improved memory prefetch. Other technologies featured in the Intel Xeon 5200 processors include: Hyper-Threading Technology (described in "Hyper-Threading Technology" on page 11); Intel Virtualization Technology, which provides hardware assistance for software-based virtual environments to support new capabilities, including 64-bit operating systems and applications; and Intel I/O Acceleration Technology (I/OAT).
4. Also, because BladeSymphony 1000 can be configured with physical PCI slots, I/O can be assigned by slot to any given partition. Therefore, any partition can be assigned any number of slots, and each partition can be mounted with any standard PCI interface cards. Since the PCI slots are assigned to the partition, each environment can support a unique PCI interface card. Fibre Channel Virtualization: Hitachi also offers Fibre Channel I/O virtualization for Virtage. This allows multiple logical partitions to access a storage device through a single Fibre Channel card, allowing fewer physical connections between server and storage and increasing the utilization rates of the storage connections. This capability is exclusive to the 4 Gb/sec Hitachi FC card. Shared Virtual NIC Functions: Virtage also provides a virtual NIC (VNIC) function, which constructs a virtual network between LPARs and enables communication between LPARs without a physical NIC. There are two types of virtual NIC functions: one supports only communication between LPARs and has no connection to external networks; the other is part of the shared NIC function, which provides a connection to the external physical network through the shared physical NIC. Virtage enables multiple VNICs assigned to LPARs to share a physical NIC. This function takes full advantage of the connections between VNICs and external physical networks.
5. BladeSymphony 1000 Architecture White Paper. Hitachi: Inspire the Next. Table of Contents: Introduction, 3; Executive Summary, 3; Introducing BladeSymphony 1000, 3; System Architecture Overview, 6; Intel Itanium Server Blade, 8; Intel Itanium Processor 9000 Series, 10; Hitachi Node Controller, 12; Baseboard Management Controller, 13; Memory, 13; SMP Capabilities, 14; Intel Itanium I/O Expansion Module, 18; Intel Xeon Server Blade, 20; Intel Xeon 5200 Dual-Core Processors, 21; Intel Xeon 5400 Quad-Core Processors, 22; Memory System, 22; On-Module Storage, 25; I/O Sub-System, 26; Embedded Gigabit Ethernet Switch, 33; SCSI Hard Drive Modules, 34.
6. BladeSymphony Management Suite provides centralized system management and control of all server, network, and storage resources, including the ability to set up and configure servers, monitor server resources, integrate with enterprise management software (SNMP, phone home), and manage server assets. Deployment Manager: Deployment Manager allows the mass deployment of system images for fast, effective server deployment. Users can deploy system images and BIOS updates across multiple chassis in multiple locations. Updates are executed by batch distribution of service packs and Linux patches to servers, saving a large number of hours on patching tasks. N+1 or N+M Cold Standby Failover: BladeSymphony 1000 maintains high uptime levels through sophisticated failover mechanisms. The N+1 Cold Standby function enables multiple servers to share standby servers, increasing system availability while decreasing the need for multiple standby servers. It enables the system to detect a fault in a server blade and switch to the standby server manually or automatically. The hardware switching is executed even in the absence of the administrator, enabling the system to return to normal operations within a short time. With the N+M Cold Standby function, there are M backup servers for every N active servers, so failover is cascading in the event of a hardware failure.
7. Intel Itanium I/O Expansion Module: Some applications require more PCI slots than the two that are available per server blade. The Intel Itanium I/O Expansion Module provides more slots without the expense of additional server blades. Using the Itanium I/O Expansion Module with the Intel Itanium Server Blade increases the number of PCI expansion card slots that can be connected to the Intel Itanium Server Blade. The Itanium I/O Expansion Module cannot be used with the Intel Xeon Server Blade. The Intel Itanium I/O Expansion Module increases the number of PCI I/O slots to either four or eight, depending on the chassis type: the type A chassis enables connection to four PCI I/O slots (Figure 11), and the type B chassis enables up to eight PCI I/O slots (Figure 12).
8. Intel Xeon Server Blade: While the Intel Itanium Server Blades provide SMP and the raw performance that the Itanium processors bring to number-crunching jobs, the Intel Xeon Server Blades, which require less power and cooling and are less expensive, are ideal for supporting infrastructure and application workloads as well as 32-bit applications. The components of the Intel Xeon Server Blade are listed in Table 4. Table 4, Intel Xeon Server Blade components: Processor: Dual-Core Intel Xeon 5110, 5140, and 5260 Series; Quad-Core Intel Xeon E5430 and X5460 Series. Processor frequency: 1.60 GHz, 2.33 GHz, or 3.33 GHz (dual-core); 2.66 GHz or 3.16 GHz (quad-core). Number of processors: maximum 2 (maximum 4 cores with dual-core, maximum 8 cores with quad-core). Cache memory: L2 4 MB or 6 MB (dual-core); L2 2 x 6 MB (quad-core). System bus (FSB) frequency: 1066 MHz or 1333 MHz. Main memory: ECC DDR2-667 FB-DIMM with Advanced ECC and memory mirroring. Capacity: maximum 32 GB. Memory slots: 8. Internal HDD: up to four 2.5-inch 73 GB or 146 GB 10K RPM SAS HDDs. Internal expansion slot: one, dedicated to the RAID card for the internal SAS HDDs. Network interface: 1-gigabit Ethernet (SerDes), two ports. Power consumption (maximum): 255 W, 306 W, or 370 W (dual-core); 370 W or 420 W (quad-core), depending on the processor.
9. Figure 20: Embedded Fibre Channel Switch Module block diagram. The Embedded Fibre Channel Switch Module is configured with three components: a Brocade Fibre Channel switch, Fibre Channel HBAs, and network adapters. Directly connecting the HBAs to the FC switch in this manner, rather than installing them as PCI cards in the blades, eliminates the 16 fibre cables that would be necessary to make these connections in other systems, as illustrated in Figure 21. Another benefit is reduced latency on the data path. This dramatically reduces complexity, administration, and points of failure in FC environments. It also reduces the effort to install and/or reconfigure the storage infrastructure. Figure 21: Embedded Fibre Channel Switch Module connection diagram, eliminating 16 cables (slots 0 through 7 correspond to server blades 0 through 7, and slots 8 through 15 correspond to server blades 0 through 7 again). Table 7 provides the details of the features of the Embedded Fibre Channel Switch Module.
10. This BMC is referred to as the primary BMC. The Baseboard Management Controller provides the following functions:
- Initial diagnosis: initial diagnosis and setting of the BMC and the BMC's peripheral hardware.
- Power control: control of power input and shutdown for modules.
- Reset control: control of hard reset and dump reset.
- Failure handling: handling of fatal MCK (machine check) occurrences.
- Log management: management of RC logs, detailed logs, and the SEL.
- Environmental monitoring: monitoring of the temperature and voltage inside a module.
- Panel output (Intel Itanium Server Blade only): LOG output through a virtual console; status from the BMC, SAL, or OS is recorded in the LOG.
- SVP consoles (Intel Itanium Server Blade only): console for maintenance and assisting operation.
- IPMI: standard functions of IPMI (SDR, SEL, FRU, WDT, sensors, etc.).
- Firmware updates (Intel Itanium Server Blade only): updating of SAL, BMC, and SVP.
Console Functions: BladeSymphony 1000 supports the following three types of consoles:
- OS console: for operating the OS and system firmware (Intel Itanium Server Blade only).
- SVP console: system management console; it can also manage the L2 network switch.
- Remote console: access to the VGA, keyboard, and mouse functionality from a remote workstation.
In the Intel Itanium Server Blade, the OS console and SVP console can share one communication pathway.
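Because the BMC exposes the standard IPMI functions listed above (SEL, sensors, FRU), they can be exercised from a management host with a generic IPMI utility. The sketch below assumes the open-source ipmitool command and a reachable BMC address with placeholder credentials; it is a generic illustration, not the Hitachi SVP/BMC management interface itself.

```python
# Minimal sketch, assuming the generic open-source "ipmitool" utility is installed
# and the BMC is reachable over LAN; address and credentials are placeholders.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "secret"]

def ipmi(*args: str) -> str:
    """Run one ipmitool sub-command against the BMC and return its text output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("sel", "list"))       # System Event Log entries (log management)
    print(ipmi("sensor", "list"))    # temperature / voltage readings (environmental monitoring)
    print(ipmi("fru", "print"))      # FRU inventory data
```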
11. A PCIe I/O Module Combo Card is available for the BladeSymphony 1000; it can be installed in the PCIe I/O Module and provides additional FC and gigabit Ethernet configurations. The block diagram is shown in Figure 17. The card includes two 1/2/4 Gb/sec FC ports supporting FC-AL and point-to-point switched fabric. Two gigabit Ethernet ports are also included; these ports support auto-negotiation and VLANs (compatible with IEEE 802.1Q, with a maximum of 4096 VLANs). Figure 17: PCIe I/O Combo Card block diagram. Embedded Fibre Channel Switch Module: The Embedded Fibre Channel Switch Module consists of a motherboard with one daughter card carrying an FC switch, plus eight FC HBA Gigabit Ethernet Combo Cards. It enables the use of both FC and LAN functions from each server blade in the BladeSymphony 1000 chassis. Figure 18 shows an outside view of this module. Figure 18: Outside view of the Embedded Fibre Channel Switch Module. The Fibre Channel switch within the module consists of 14 ports compatible with the 4 Gb/sec Fibre Channel standard.
12. card to the SCSI connector on the same I/O module, and then connecting the board wiring from the SCSI connector, through the backplane, to server slots 4 to 7, where the storage module is installed. The logical numbers of the SCSI connectors on I/O modules 0 and 1 are defined as 0 to 1 and 2 to 8, respectively. Figure 25: Connection configuration for HDD Modules (sample configuration; the 6x HDD Module has only one SCSI I/F port, and the second port is not connected). Chapter 7: Chassis, Power, and Cooling. The BladeSymphony 1000 chassis houses all of the modules previously discussed, as well as a passive backplane, Power Supply Modules, Cooling Fan Modules, and the Switch & Management Modules. The chassis and backplane provide a number of redundancy features.
13. console and SVP console are bound to local sessions (serial communications via the MAINT COM connection) or remote sessions (telnet sessions on the MAINT LAN) by the Console Manager, which controls the bindings between a console and a session. The Console Manager is a software entity that runs on the SVP and undertakes the binding of a console to a session and the control of the transmission speed in each session. Settings for these bindings and the communication setup can be changed by using the SVP console command or the hot key. In the Intel Xeon Server Blade, the graphical console also functions as the OS console; therefore, there is no communications path shared between the OS console and the SVP console. OS Console: In the Intel Itanium Server Blade, one OS console is provided for each partition. An OS console functions as the console of the System Firmware (SAL and EFI) before OS startup and as a text console under the OS after OS startup. The OS recognizes the OS console as a serial port (COM1). The serial port of the OS console supports communication speeds of 9600 bps and 19,200 bps. The interface information for the OS console as a serial port is passed to the OS via the System Firmware. In the Intel Xeon Server Blades, a graphical console serves this function, and the OS console as hardware is not supported independently. SVP Console: This function is shared between the Intel Itanium
14. With its enterprise-class features, BladeSymphony 1000 is an ideal platform for a wide range of data center scenarios, including:
- Consolidation: BladeSymphony 1000 is an excellent platform for server and application consolidation because it is capable of running 32-bit and 64-bit applications on Windows or Linux with enterprise-class performance, reliability, and scalability.
- Workload Optimization: BladeSymphony 1000 runs a wide range of compute-intensive workloads on both Windows and Linux, making it possible to balance the overall data center workload quickly and without disruption or downtime.
- Resource Optimization: BladeSymphony 1000 enables the IT organization to increase utilization rates for expensive resources such as processing power, making it possible to fine-tune capacity planning and delay unnecessary hardware purchases.
- Reduced Cost, Risk, and Complexity: With BladeSymphony 1000, acquisition costs are lower than for traditional rack-mount servers. Enterprises can scale up on demand in fine-grained increments, limiting capital expenditures. BladeSymphony 1000 also reduces the risk of downtime with built-in, sophisticated RAS features. And with support for industry standards such as Windows and Linux, Itanium and Xeon processors, and PCI-X and PCI Express (PCIe) I/O modules, BladeSymphony 1000 is designed for the future and protects previous investments in technology.
15. for environments that require CPU processing exclusivity, such as databases or real-time applications. Shared Mode: A single processor core, or a group of cores, can be assigned to multiple logical partitions, which in turn share the assigned processing resources. This allows multiple partitions to share one or more CPU cores to increase utilization. Virtage can also carve a single processor core into logical partitions for workloads that are smaller than one core. Each partition is assigned a service rate of the processor. Another advantage is the ability to dynamically change the service ratio for any given partition: the system monitors the activity of a partition and, if one partition is idle while another is using 100 percent of its share, the system temporarily increases the busy partition's service rate until CPU resources are required by the other partition. High I/O Performance: When deployed on Itanium processor-based server blades, Virtage employs direct execution, as is used in the mainframe world, leveraging Virtage technology embedded in the Hitachi Node Controller. The Virtage I/O hardware-assist feature passes guest I/O requests through with minimal modification and thus does not add an extra layer for guest I/O accesses. Users can use standard I/O device drivers as they are, so they can take advantage of the latest functionality with less overhead. The hardware-assist feature simply modifies the memory addresses for the I/O requests.
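The proportional service-rate behavior described above can be pictured with a small sketch: each LPAR has a configured share, and the unused share of idle partitions is temporarily lent to busy ones. The partition names, rates, and the lending rule are illustrative assumptions, not the actual Virtage scheduler.

```python
# Illustrative sketch of proportional service rates with temporary donation of an
# idle partition's share; names and rates are hypothetical, not Virtage's policy.
def effective_rates(configured: dict[str, float], busy: set[str]) -> dict[str, float]:
    """Split the shares of idle LPARs among the busy ones, proportionally to their
    configured service rates, so a busy partition can temporarily exceed its share."""
    idle_share = sum(rate for lpar, rate in configured.items() if lpar not in busy)
    busy_total = sum(rate for lpar, rate in configured.items() if lpar in busy)
    rates = {}
    for lpar, rate in configured.items():
        if lpar in busy and busy_total > 0:
            rates[lpar] = rate + idle_share * (rate / busy_total)   # borrow the idle share
        else:
            rates[lpar] = rate if lpar in busy else 0.0             # idle LPARs yield CPU
    return rates

# Two LPARs share one core 50/50; while LPAR2 is idle, LPAR1 may use the whole core.
print(effective_rates({"LPAR1": 0.5, "LPAR2": 0.5}, busy={"LPAR1"}))  # {'LPAR1': 1.0, 'LPAR2': 0.0}
```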
16. A function for emulating the PCI-version SVP function; an HA monitor (cluster software) interaction function; management interfaces (SVP console, Telnet CLI, RS-232C CLI, SNMP); and an assist function (e-mailing to the maintenance center). BladeSymphony 1000 implements the functions of the SVP card through software emulation by the BMC and SVP over Fast Ethernet and I2C connections, as shown in Figure 29. Figure 29: Fast Ethernet and I2C connections in the SVP management interface. Baseboard Management Controller (BMC): One instance of the BMC is installed for each CPU module, primarily to take charge of management within the physical partition that includes that CPU module. Only one instance of the SVP is active throughout the system, managing the entire system in cooperation with the BMCs. The SVP and BMC communicate with each other via the SVP's built-in 100BASE-TX LAN. In the case of the Intel Itanium Server Blade, a BMC on the primary CPU module operates as the representative of the BMCs present in the partition.
17. An OS with APIs for NUMA also allows applications running on it to perform optimization, taking advantage of node-localized memory accesses and enabling higher system performance. Figure 8: Full interleave mode and non-interleave mode. Mixture mode: This mode specifies the ratio of local memory at a constant rate. There can be some restrictions on the ratio of local memory according to the NUMA support level of the operating system in use. Figure 9 shows examples of all three types of modes. Figure 9: Examples of interleaving (four CPU modules with 8 GB each, 32 GB in total, showing interleave boundaries at local-memory ratios of 100%, 50%, and 0%, that is, non-interleaved local memory versus 4-node interleave). L3 Cache Copy Tag: The data residing in caches and main memory
18. The system automatically detects the fault and identifies the problem by indicating the faulty module, allowing immediate failure recovery. This approach can cut total downtime by enabling the application workload to be shared among the working servers. Operations Management:
- Reports: real-time or historical reports can be generated. The report interval and display time intervals (hour, day, week, month, year) can be specified. Graphical display drill-down is possible for detailed analysis, and an export function supports output of HTML or CSV files.
- Fault and error detection and notification: when a problem occurs on a server, various notification methods, including SNMP trap notification, are used to detect the problem quickly. Notices can be filtered according to the seriousness of the problem and can be sent to the administrator via email or the management console so that only very serious problems are reported.
- Customized fault notification rules: multiple monitoring conditions can be set, such as alarm conditions and irregular conditions. Problem notice conditions can be set according to time period and number of occurrences, and executable actions can be set, such as sending email, executing a command, or sending an SNMP trap notice.
- SNMP communication: an SNMP translator converts alert information managed by an agent service to MIB form and sends it by SNMP to an SNMP manager such as OpenView Network Node Manager. The SNMP manager can be used to view information sent by SNMP.
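The customized notification rules described above (filter by severity, time period, and number of occurrences, then email, run a command, or send an SNMP trap) can be sketched roughly as follows. The rule fields and action names are hypothetical placeholders, not the actual BladeSymphony Management Suite configuration schema.

```python
# Rough sketch of severity/time/occurrence-based notification filtering; the rule
# fields and action names are placeholders, not the real management-suite schema.
from dataclasses import dataclass, field
from datetime import datetime, time

@dataclass
class Rule:
    min_severity: int                 # e.g. 0=info ... 3=critical
    window: tuple[time, time]         # only alert during this time period
    min_occurrences: int              # require repeated events before notifying
    actions: list[str]                # e.g. ["email", "snmp_trap", "run_command"]
    seen: int = field(default=0)

    def handle(self, severity: int, when: datetime) -> list[str]:
        """Return the actions to execute for this event, or [] if it is filtered out."""
        if severity < self.min_severity:
            return []
        if not (self.window[0] <= when.time() <= self.window[1]):
            return []
        self.seen += 1
        return self.actions if self.seen >= self.min_occurrences else []

rule = Rule(min_severity=2, window=(time(0, 0), time(23, 59)), min_occurrences=3,
            actions=["email", "snmp_trap"])
for _ in range(3):
    fired = rule.handle(severity=2, when=datetime.now())
print(fired)   # ['email', 'snmp_trap'] once the third matching event arrives
```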
19. Table 7, Embedded Fibre Channel Switch Module components: Supported Fibre Channel standards: FC-FG, FC-AL, FC-FLA, FC-PLDA, FC-VI, FC-PH, FC-GS-2, FC-PH-3, FC-SW, IPFC (RFC), FC-AL-2, FC-PH. Fibre Channel ports: universal port x 14 (14 ports equipped as hardware: 8 internal ports, 6 external ports); port types FL_Port, F_Port, and E_Port, with a U_Port function to self-detect the port type. Switch expandability: full-fabric architecture configured with up to 239 switches. Interoperability: SilkWorm II, SilkWorm Express, and SilkWorm 2000 families. Performance: 4.250 Gb/sec, full duplex. Fabric delay time: less than 2 microseconds (no contention, cut-through routing). Maximum frame size: 2112-byte payload. Service class: Class 2, Class 3, Class F (frames between two switches). Data traffic types: unicast, multicast, broadcast. Media type: SFP (Small Form-factor Pluggable). Fabric services: SNS (Simple Name Server), RSCN (Registered State Change Notification), Alias Server (multicast), Brocade Advanced Zoning; ISL Trunking: supported. FC HBA Gigabit Ethernet Combo Card: The FC HBA Gigabit Ethernet Combo Card provides the FC HBA and gigabit Ethernet functions for the Embedded Fibre Channel Switch Module.
20. the management LAN of each server blade. Gigabit Ethernet switch (Broadcom, 1): PHY chips for the external ports; 10BASE-T/100BASE-TX/1000BASE-T, auto-negotiation, auto MDI-X. Gigabit Ethernet PHY (Broadcom, 1): converts the SerDes output from each server blade into 1000BASE-T. SVP LAN0 (1): LAN port for connecting the system console; exclusively for maintenance personnel. SVP LAN1 (1): LAN port for connecting the system console; always connected to the maintenance LAN for notification by mail. Maintenance COM (1): COM port for connecting the system console; exclusively for maintenance personnel. GBLAN0-3 (4): 10BASE-T/100BASE-TX/1000BASE-T Ethernet ports. PWRCTRL (1): not supported. The SVP manages the entire BladeSymphony 1000 device and also provides a user interface for management via the SVP console. The SVP provides the following functions:
- Module configuration management (module type, installation location, etc.) within a server chassis.
- Monitoring and control of the modules installed in the server chassis: power control, failure monitoring, and partition control.
- Monitoring and control of the environment: temperature monitoring and fan RPM control.
- Panel control.
- Log information management within BladeSymphony 1000: RC logs, SEL, SVP logs, etc.
- SVP hot-standby configuration control.
- Server Conductor (server management software) interaction function.
21. thread, 32- and 64-bit processing capabilities with 12 MB of L2 cache per processor, providing more computing power for threaded applications in a variety of deployments. Intel's 45 nm process puts 820 million transistors in the Intel Xeon processor 5400 series (the Intel Xeon processor 5300 series has 582 million transistors). More transistors deliver more capability, performance, and energy efficiency through expanded power management capabilities. Other enhancements are designed to reduce virtualization overhead, and 47 new Intel Streaming SIMD Extensions 4 (SSE4) instructions can help improve the performance of media and high-performance computing applications. Other features include:
- Fully Buffered DIMM (FB-DIMM) technology, which increases memory speed to 800 MHz and significantly improves data throughput.
- Memory mirroring and sparing, designed to predict a failing DIMM and copy the data to a spare memory DIMM, increasing server availability and uptime.
- Support for up to 128 GB of memory.
- Enhanced Intel SpeedStep technology, which allows the system to dynamically adjust processor voltage and core frequency, resulting in decreased power consumption and heat production.
Memory System: The Intel Xeon Server Blade is equipped with eight FB-DIMM slots supporting registered DDR2 SDRAM. Supported capacities include 512 MB, 1 GB, and 2 GB DDR2 DIMMs. The memory system is designed to control a set of two DIMMs for the memory device replacing function. Accordingly, if DIMMs are added, they must be installed in two-DIMM units.
22. Interconnect technology and the blade form factor allow IT staff to manage scale-up operations on their own, without a service call. The interconnect allows up to four server blades to be joined into a single server environment composed of the total resources (CPU, memory, and I/O) resident in each module. NUMA Architecture: The Intel Itanium Server Blade supports two memory interleave modes, full and non-interleave. In full interleave mode, the additional latency in accessing memory on other server blades is averaged across all memory, including local memory, to provide a consistent access time. In non-interleave mode, a server blade has faster access to local memory than to memory on other server blades. Both of these options are illustrated in Figure 8.
- Full interleave mode (or SMP mode): Intended for use with an OS without support for the NUMA architecture, or with inadequate support for NUMA. In full interleave mode, main memory is interleaved between CPU modules in units. Since memory accesses do not concentrate on one CPU module, memory bus bottlenecks are less likely and latency is averaged across CPUs.
- Non-interleave mode: This mode specifies the ratio of local memory at a constant rate. Use non-interleave mode with an OS that supports NUMA. In non-interleave mode, memory is not interleaved between CPU modules. An OS supporting NUMA performs process scheduling and memory allocation so that memory accesses by processors take place only within the local node (CPU module).
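To make the latency trade-off concrete, the short sketch below compares the average memory latency a CPU module sees under full interleave (accesses spread evenly across all modules) with non-interleave under a NUMA-aware OS. The nanosecond values and the local-access fraction are invented for illustration; they are not measured BladeSymphony figures.

```python
# Back-of-the-envelope comparison of full-interleave vs. non-interleave latency.
# The nanosecond values and the local-access fraction are illustrative assumptions.
LOCAL_NS, REMOTE_NS, MODULES = 100.0, 250.0, 4

# Full interleave: 1/4 of accesses land on the local module, 3/4 on remote modules.
full_interleave = LOCAL_NS / MODULES + REMOTE_NS * (MODULES - 1) / MODULES

# Non-interleave with a NUMA-aware OS keeping, say, 90% of accesses local.
local_fraction = 0.90
non_interleave = LOCAL_NS * local_fraction + REMOTE_NS * (1.0 - local_fraction)

print(f"full interleave : {full_interleave:.0f} ns average (uniform for every CPU)")
print(f"non-interleave  : {non_interleave:.0f} ns average (lower, but non-uniform)")
```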
23. MC (Renesas, 1): management processor (BMC). SRAM (Renesas, 2 MB with parity): main memory for the management processor. FPGA (Xilinx, 1): controls the BMC bus, decodes addresses, and functions as a bridge for LPC. Flash ROM (Fujitsu, 16 MB): backs up the BMC code and SAL. Bus switch (1): reserved for future SVP duplexing (switching over BMC/SVP). Intel Itanium Processor 9100 Series: The Dual-Core Intel Itanium 9100 series 64-bit processor delivers scalable performance with two high-performance cores per processor, memory addressability up to 1024 TB, 24 MB of on-die cache, and a 667 MHz front-side bus. It also includes multi-threading capability (two threads per core) and support for virtualization in silicon. Explicitly Parallel Instruction Computing (EPIC) technology is designed to enable parallel throughput on an enormous scale, with up to six instructions per clock cycle, large execution resources (128 general-purpose registers, 128 floating-point registers, and 8 branch registers), and advanced capabilities for optimizing parallel throughput. The processors deliver mainframe-class reliability, availability, and serviceability features, with advanced error detection, correction, and containment across all major data pathways and the cache subsystem. They also feature integrated, standards-based error handling across hardware, firmware, and the operating system.
24. SMP provides higher performance for applications that can utilize large memory and multiple processors, such as large databases or visualization applications. The maximum SMP configuration supported by BladeSymphony 1000 is: four Dual-Core Intel Itanium Server Blades, for a total of 16 CPU cores; 256 GB of memory (64 GB per server blade x 4); eight gigabit NICs (two on board per server blade), connected to two internal gigabit Ethernet switches; and eight PCI-X slots (or 16 PCI-X slots with chassis B). With its unique interconnect technology, BladeSymphony 1000 delivers a new level of flexibility in adding computing resources to adapt to changing business needs. BladeSymphony 1000 can address scalability requirements by scaling out horizontally or by scaling up vertically. Scaling out is ideally suited to online and other front-end applications that can divide processing requirements across multiple servers; it can also provide load-balancing capabilities and higher availability through redundancy. Figure 7: Scale-up capabilities (SMP interconnect across the backplane: 4-way, 8-way, and 16-way servers). Scaling up is accomplished through SMP, as shown in Figure 7. This approach is better suited to enterprise-class applications requiring 64-bit processing, high computational performance, and large memory addressability beyond that provided in a typical x86 environment.
25. Figure 26: Chassis backplane connections. Table 10, Chassis specifications: Server blades: Type A: Intel Xeon Server Blades, 8 maximum; Intel Itanium Server Blades, 8 maximum (with network boot). Type B: Intel Xeon Server Blades, 8 maximum (4 maximum when storage modules are installed, since blades and storage modules share slots); Intel Itanium Server Blades, 8 maximum. Storage Modules (HDD x 3): Type A: N/A; Type B: 4 maximum. Storage Modules (HDD x 6): Type A: N/A; Type B: 2 maximum. Switch & Management Module: 1 standard, 2 maximum (both chassis types). I/O Module (PCI-X): 2 maximum (both types); Type A: 2 slots maximum per server blade, 16 slots maximum per server chassis; Type B: 4 slots maximum per server blade, 16 slots maximum per server chassis. I/O Module (PCIe): Type A: N/A; Type B: 2 maximum, 2 slots maximum per server blade, 16 slots maximum per server chassis. I/O Module (Fibre Channel Switch): Type A: N/A; Type B: 2 maximum. Power Module: 4 maximum, N+1 redundant configuration. Cooling Fan Module: 4 standard, 3+1 redundant configuration. USB CD-ROM drive: optional. USB floppy disk drive: optional. Outside dimensions: 17.5 x 33.5 x 17.4 inches (10 RU). Weight: 308 pounds (lbs). Input voltage/frequency: 200-240 VAC, single phase, 50/60 Hz. Power consumption (maximum): 4.5 kW. Operating temperature: 41 to 55 degrees Fahrenheit.
26. Figure 6: The Hitachi Node Controller connects multiple server blades (front-side bus bandwidth 10.6 GB/sec at FSB 667 MHz; node bandwidth 4.8 GB/sec at FSB 400 MHz and 5.3 GB/sec at FSB 667 MHz; PCI bus 2 GB/sec x 3; 4-lane, point-to-point, low-latency PCI Express links; L3 cache copy tag; DDR2 memory via dual memory controllers). By dividing the SMP system across several server blades, the memory bus contention problem is solved by virtue of the distributed design. A processor's access to its on-board memory incurs no penalty: the two processors (four cores) can access up to 64 GB at the full speed of local memory. When a processor needs data that is not contained in its locally attached memory, its node controller must contact the appropriate remote node controller to retrieve the data, so the latency for retrieving that data is higher than for local memory. Since remote memory takes longer to access, this is known as a non-uniform memory architecture (NUMA). The advantage of non-uniform memory is the ability to scale to a larger number of processors within a single system image while still allowing the speed of local memory access.
27. Two server blade types are available: the Intel Xeon Server Blade and the Intel Itanium Server Blade. A 10 RU BladeSymphony 1000 server chassis can accommodate eight server blades of these types. It can also accommodate a mixture of server blades as well as storage modules. In addition, multiple Intel Itanium Server Blades can be combined to build multiple Symmetric Multi-Processor (SMP) configurations. Figure 3 shows a logical diagram of modules interconnecting on the backplane for a possible configuration with one SMP server and one Intel Xeon server, as well as various options for hard drive and I/O modules. Figure 3: Logical components of the BladeSymphony 1000 (up to 8-way SMP configurable; eight modules maximum; HDD modules with three or six HDDs, where a six-HDD module occupies the space of two modules; Switch & Management Module with 1-gigabit Ethernet; 16 I/O slots in total). The following chapters detail the major components of BladeSymphony 1000, as well as the management software and the Virtage embedded virtualization technology. "Intel Itanium Server Blade" on page 8 provides details on the Intel Itanium Server Blades and how they can be combined to create SMP systems of up to 16 cores.
28. For example, Asset Management can be used to discover the amount of memory in a server blade, to search for servers that have an old version of firmware, or to periodically check the status of service pack installations. Chapter 9: Virtage. Virtage is a key technical differentiator for BladeSymphony 1000: it brings mainframe-class virtualization to blade computing. Leveraging Hitachi's decades of development work on mainframe virtualization technology, Virtage delivers high-performance, extremely reliable, and transparent virtualization for Dual-Core Intel Itanium and Quad-Core Intel Xeon processor-based server blades. Virtage is built in and requires no separate operating system layer or third-party virtualization software, so it safely shares or isolates resources among partitions without the performance hit of traditional software-only virtualization solutions. It is tuned specifically for BladeSymphony 1000 and is extensively tested in enterprise production environments. Virtage is designed to deliver a higher level of stability, manageability, performance, throughput, and reliability than comparable virtualization technology, and it sets a new standard for on-demand infrastructure provisioning. Virtage offers a number of benefits, including a reduced number of physical servers through consolidation and virtualization, increased server utilization rates, and isolation of
29. and the hardware in the server blade. The BMC is connected to the service processor (SVP) inside the Switch & Management Module; the BMC and SVP cooperate with each other to control and monitor the entire system. Sensors built into the system report to the BMC on parameters such as temperature, cooling fan speeds, power mode, and OS status. The BMC can send alerts to the system administrator if the parameters vary from specified preset limits, indicating a potential failure of a component or the system. Memory System: Intel Itanium Server Blades are equipped with 16 DIMM slots, which support Registered DDR2-400 SDRAM in 512 MB, 1 GB, 2 GB, and 4 GB capacities, for a total of up to 64 GB per server blade (16 GB per core). The memory system is designed to control a set of four DIMMs for the ECC and memory device replacing functions. Accordingly, if DIMMs are added, they must be arranged in four-DIMM units. The DIMM rows can be used logically as shown in Figure 5, which shows the 72-bit data paths from the NDC through memory controllers MC0 and MC1, the names of the DIMM sets (Row 0/1 through Row 6/7, DIMM0A0 through DIMM1D1), and the priority order for mounting DIMM sets.
30. applications for increased reliability and security; improved manageability; simplified deployment; and support for legacy systems. Virtage is hypervisor-type virtualization and therefore has a natural performance advantage over host-emulation virtualization offerings, because guest operating systems can be executed simply and directly on the virtualized environment without host intervention. Virtage leverages Intel Virtualization Technology (VT) to help ensure that processor performance is optimized for the virtual environment and to provide a stable platform that incorporates virtualization into the hardware layer. Virtage can partition physical server resources by constructing multiple logical partitions (LPARs) that are isolated, and each of these environments can run independently. A different operating system, called a guest operating system, can run on each LPAR on a single physical server. The BladeSymphony 1000 Server Blades can run in basic mode (non-virtualized) or with Virtage. The Virtage feature is embedded within the system and can be activated or deactivated based on customer needs. Each system can support multiple virtualized or non-virtualized environments based on specific preferences. A single server blade or SMP configuration can be configured with up to 16 LPARs at a time. High CPU Performance and Features: Virtage is hypervisor-type virtualization and therefore features a natural performance advantage over host-emulation virtualization offerings.
31. - Shell Console (for Intel Itanium Server Blades): standard shell interface.
- BSMS Chat: enables a chat session between a Windows server and the management console.
- Power control: the power supply can be controlled remotely to turn power on and off.
Network Management: The BladeSymphony Management Suite Network Management function provides one-point management of network switch VLAN configuration information. Configuration information related to VLANs is obtained from the switches on the network and then managed from one point. The management GUI can be used to set up and manage configuration information regardless of the command specifications of each type of switch. Rack Management: In BladeSymphony 1000, networks, server blades, and storage devices are installed as modules in a single rack. Rack Management provides graphical displays of information about the devices, such as layout, the amount of available free space, and error locations. It can also display detailed information about each device, such as type, IP address, and size, as well as alert information in the event of a failure. Asset Management: The Asset Management functionality enables server resources and asset information (inventory information) to be checked on screen. Servers matching particular inventory conditions can be searched for, and the search results can be distributed periodically through email.
32. Eight of the ports connect through the backplane to server blades, and the remaining four ports are used to connect to external networks, as illustrated in Figure 23. The switch provides up to 24 Gb/sec of total throughput and the ability to relay packets at 1,488,000 packets/sec. Additional features are listed in Table 9. Figure 23: Back view of the chassis with a blow-up of the Embedded Gigabit Ethernet Switch. The Embedded Gigabit Ethernet Switch can be configured for high availability and fault tolerance when a second, redundant switch module is added. A single switch interconnects one of the two gigabit Ethernet connections from each server blade (up to eight in total); the second, redundant switch interconnects each of the remaining gigabit connections for each server blade, so after a single switch failure, networking operations continue on the remaining switch. If additional network bandwidth or connectivity is needed, PCI slots can be utilized for additional NICs. Switch features are listed in Table 9. Table 9, Embedded Gigabit Ethernet Switch features: Ports: backplane side, 1 Gb/sec x 8; external, 10BASE-T/100BASE-TX/1000BASE-T auto connection. MAC addresses: auto-learning, 16,384 entries. Switch: Layer 2 switch. Bridge function: spanning tree protocol (IEEE 802.1d compliant).
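As a toy illustration of the Layer 2 behavior listed in Table 9, the sketch below learns source MAC addresses per port (up to the 16,384-entry table size quoted above) and forwards a frame to the learned port or floods it when the destination is unknown. This is a generic learning-bridge sketch, not the embedded switch's firmware.

```python
# Generic Layer-2 learning-bridge sketch (not the embedded switch's actual firmware):
# learn the source MAC on the ingress port, forward to the learned egress port,
# flood to all other ports when the destination has not been learned yet.
MAX_ENTRIES = 16_384          # matches the MAC table size quoted in Table 9

mac_table: dict[str, int] = {}

def switch_frame(src_mac: str, dst_mac: str, in_port: int, all_ports: range) -> list[int]:
    if len(mac_table) < MAX_ENTRIES or src_mac in mac_table:
        mac_table[src_mac] = in_port                      # learn / refresh the source MAC
    out = mac_table.get(dst_mac)
    if out is None:                                       # unknown destination: flood
        return [p for p in all_ports if p != in_port]
    return [] if out == in_port else [out]                # filter or forward

ports = range(12)                                         # 8 backplane + 4 external ports
print(switch_frame("aa:aa", "bb:bb", in_port=0, all_ports=ports))  # flood: ports 1..11
print(switch_frame("bb:bb", "aa:aa", in_port=5, all_ports=ports))  # learned: [0]
```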
33. Repair of PCI adapters, the Switch & Management Module, Power Modules, and Cooling Fan Modules while the system is operating. Serviceability Features. Switch & Management Module: The Switch & Management Module is designed to control the system unit and monitor the environment. Figure 28 shows the block diagram of the module. This module and other system components are connected through I2C or other buses. Figure 28: Switch & Management Module components (SerDes gigabit Ethernet interfaces from CPU modules 0-7, the management LAN, SAF-TE interfaces to HDD modules, power supply control, panel output, and the GBLAN0-3, SVP LAN0/LAN1, MAINT1, and PWRCTRL1 ports; SVP LAN0 and the serial maintenance interface are reserved for on-site service, while SVP LAN1 is for user management). The Switch & Management Module contains the service processor (SVP), which controls the system and monitors the environment. An SVP is connected to the server blades, or to the other SVP on the second Switch & Management Module if two Switch & Management Modules are mounted, through the backplane, interfaced by 100M/10M Ethernet.
34. - Windows Cluster Management: a cluster environment can be created by Microsoft Cluster Server for each server node. However, Microsoft Cluster Server's cluster administrator can manage only clusters in the same domain, while BladeSymphony 1000 enables centralized management of clusters from a remote location regardless of the domains. The following operations for cluster management are supported: viewing information about cluster groups and resources from a remote location; reporting changes in the status of cluster groups and resources to the administrator as events; setting a failover schedule for cluster groups based on specific dates or at specified times on a weekly schedule (the user can achieve more detailed cluster management by combining this feature with a power control schedule); and using alerts to predict future server shutdowns and implementing automatic failover in the event of specific alerts.
- Power Scheduling: power control schedules can be set to turn the power on or off on specific dates or at specified times on a weekly schedule.
Remote Management: BladeSymphony 1000 can be operated from a remote management console. Chat sessions can be used between a local operator and a remote system administrator to quickly resolve and recover from problems.
- IP KVM (for Intel Xeon Server Blades): a keyboard/mouse emulator that displays the current window from the consoles.
35. High I/O Performance, 49; Fibre Channel Virtualization, 50; Shared Virtual NIC Functions, 50; Integrated System Management for Virtual Machines, 50; Summary, 51; For More Information, 51. Chapter 1: Introduction. Executive Summary: Blade servers pack more compute power into a smaller space than traditional rack-mounted servers. This capability makes them an attractive alternative for consolidating servers, balancing or optimizing data center workloads, or simply running a wide range of applications at the edge or the Web tier. However, concerns about the reliability, scalability, power consumption, and versatility of conventional blade servers keep IT managers from adopting them in the enterprise data center; many IT professionals believe that blade servers are not intended for mission-critical applications or compute-intensive workloads. Leveraging their vast experience in mainframe systems, Hitachi set out to design a blade system that overcomes these perceptions. The result is BladeSymphony 1000, the first true enterprise-class blade server. The system combines Virtage embedded virtualization technology
36. Intel I/O Acceleration Technology (I/OAT): hardware- and software-supported I/O acceleration that improves data throughput. Unlike NIC-centric solutions such as TCP Offload Engines, I/OAT is a platform-level solution that addresses packet- and payload-processing bottlenecks by implementing parallel processing of header and payload. It increases CPU efficiency and delivers data to and from applications faster, with improved direct memory access (DMA) technologies that reduce CPU utilization and the memory latencies associated with data movement. Finally, I/OAT optimizes the TCP/IP protocol stack to take advantage of the high bandwidth of modern Intel processors, thus diminishing the computational load on the processor.
- Intel VT FlexMigration: Intel hardware-assisted virtualization provides the ability to perform live virtual machine migration to enable failover, load balancing, disaster recovery, and real-time server maintenance.
- New features include an Error Correcting Code (ECC) system bus, new memory mirroring, and I/O hot plug.
Intel Xeon 5400 Quad-Core Processors: The Quad-Core Intel Xeon 5400 Series is designed for mainstream, new business, and HPC servers, delivering increased performance, energy efficiency, and the ability to run applications with a smaller footprint. Built with the 45 nm enhanced Intel Core microarchitecture, the Quad-Core Intel Xeon 5400 Series delivers 8-thread, 32- and 64-bit processing capabilities.
37. Further, an SVP is connected to the server blades, the other SVP, or the I/O modules through the backplane by an I2C interface. The SVP performs tasks including system control, monitoring, and fault notification. The SVP is equipped with a console interface, which provides a user interface for maintaining and managing the system. Table 12 defines the components of the Switch & Management Module. Two Switch & Management Modules can be installed in one chassis; in this case, the main SVP normally performs the SVP function. Health checking works between the main and sub SVPs as they monitor each other; if the main SVP fails, the sub SVP takes over operation. The Switch & Management Module houses the gigabit Ethernet switch to which the gigabit Ethernet port of each server blade is connected over the backplane. When two Switch & Management Modules are installed, each switch operates independently. Table 12, Switch & Management Module components: Microprocessor (Hitachi, 1). RTC (Epson, 1): battery backed up. FPGA (Xilinx, 1). SDRAM: 128 MB, ECC. Flash ROM: 16 MB, stores the OS image. NV-SRAM: 1 MB, battery-backed-up SRAM; saves system configuration information and fault logs. Compact flash: 128 MB (backs up the SAL and BMC firmware) and 512 MB. Ethernet switch (Broadcom, 2): connects the management LAN of each server blade.
38. Figure 11: The Intel Itanium I/O Expansion Module in a type A chassis provides up to four PCI slots per server blade. Figure 12: The Intel Itanium I/O Expansion Module in a type B chassis provides up to eight PCI slots per server blade. Chapter 4: Intel Xeon Server Blade. The eight-slot BladeSymphony 1000 can accommodate a total of up to eight dual-socket, Dual-Core or Quad-Core Intel Xeon Server Blades, for up to 64 cores per system. Each Intel Xeon Server Blade supports up to four PCI slots and provides the option of adding Fibre Channel or SCSI storage. Two on-board gigabit Ethernet ports are also provided, along with IP KVM for remote access, virtual media support, and front-side VGA and USB ports for direct access to the server blade.
39. BladeSymphony 1000 supports the online spare memory function in the ten memory configuration patterns listed in Table 5; the shaded sections represent spare banks. Online spare memory excludes the use of the memory mirroring function. Table 5, Online spare memory supported configurations (Banks 0-4, Slots 1-8): Configuration 1: 2 GB in all eight slots. Configuration 2: 1 GB in all eight slots. Configuration 3: 512 MB in all eight slots. Configuration 4: 2 GB in slots 1-6, none in slots 7-8. Configuration 5: 1 GB in slots 1-6. Configuration 6: 512 MB in slots 1-6. Configuration 7: 2 GB in slots 1-4, none in slots 5-8. Configuration 8: 1 GB in slots 1-4. Configuration 9: 512 MB in slots 1-4. Configuration 10: 256 MB in slots 1-4. For example, in Configuration 1, the shaded Bank 4 is the spare bank.
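As a rough illustration of the spare-bank idea in Table 5: correctable errors on a bank are counted, and once they exceed a threshold the contents are switched over to the designated spare bank so that a later hard failure does not take the system down. The threshold value and data structures below are invented for the sketch; they are not the firmware's actual policy.

```python
# Sketch of online spare memory switchover: count correctable errors per bank and,
# past a threshold, remap the failing bank onto the designated spare bank.
THRESHOLD = 10   # illustrative threshold, not the firmware's value

class SpareMemory:
    def __init__(self, banks: int, spare_bank: int):
        self.errors = [0] * banks
        self.spare_bank = spare_bank          # e.g. Bank 4 in Configuration 1 of Table 5
        self.remapped: dict[int, int] = {}    # failing bank -> spare bank

    def correctable_error(self, bank: int) -> None:
        self.errors[bank] += 1
        if self.errors[bank] > THRESHOLD and not self.remapped and bank != self.spare_bank:
            # copy-and-switch: subsequent accesses to `bank` go to the spare bank
            self.remapped[bank] = self.spare_bank

mem = SpareMemory(banks=5, spare_bank=4)
for _ in range(11):
    mem.correctable_error(bank=2)
print(mem.remapped)   # {2: 4} -> bank 2 has been switched over to the spare bank
```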
40. The Intel Itanium is optimized for dual-processor-based platforms and clusters and includes the following features:
- Wide, parallel hardware based on the Itanium architecture for high performance: integrated on-die cache of up to 24 MB; cache hints for the L1, L2, and L3 caches for reduced memory latency; 128 general and 128 floating-point registers supporting register rotation; a register stack engine for effective management of processor resources; and support for predication and speculation.
- Extensive RAS features for business-critical applications: full SMBus compatibility; an enhanced machine-check architecture with extensive ECC and parity protection; enhanced thermal management; a built-in processor information ROM (PIROM); a built-in programmable EEPROM; socket-level lockstep; and core-level lockstep.
- A high-bandwidth system bus for multiprocessor scalability: 6.4 GB/sec bandwidth; a 128-bit-wide data bus; 400 MHz and 533 MHz data bus frequencies; 50 bits of physical memory addressing and 64 bits of virtual addressing.
- Two complete 64-bit processing cores on one chip, running at 104 W.
Cache: The processor supports up to 24 MB (12 MB per core) of low-latency, on-die L3 cache (14 cycles), providing 102 GB/sec of aggregate bandwidth to the processor cores. It also includes separate 16 KB instruction L1 and 16 KB data L1 caches per core, as well as separate 1 MB instruction L2 and 256 KB data L2 caches per core, for higher speed and lower latency.
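The 6.4 GB/sec system-bus figure quoted above follows directly from the bus width and data-transfer rate. The short calculation below assumes the 128-bit (16-byte) data bus transferring at 400 MT/sec, which are the figures stated in the feature list.

```python
# Reconstructing the 6.4 GB/sec front-side-bus bandwidth from the quoted figures
# (128-bit data bus, 400 MT/sec data rate); pure arithmetic, no other assumptions.
bus_width_bytes = 128 // 8            # 128-bit data bus = 16 bytes per transfer
transfers_per_sec = 400e6             # 400 MT/sec data bus
bandwidth = bus_width_bytes * transfers_per_sec
print(f"{bandwidth / 1e9:.1f} GB/sec")   # -> 6.4 GB/sec
```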
41. Chassis, Power, and Cooling, 36; Module Connection, 37; Redundant Power Modules, 37; Redundant Cooling Fan Modules, 38; Reliability and Serviceability Features, 39; Reliability Features, 39; Serviceability Features, 40; Management Software, 45; BladeSymphony Management Suite, 45; Operations Management, 46; Remote Management, 47; Network Management, 47; Rack Management, 47; Asset Management, 47; Virtage, 48; High CPU Performance and Features, 48.
42. The DIMMs in the same bank must be of the same type; the DIMMs in different banks can be of different types. FB-DIMM Advantages: Intel supports Fully Buffered DIMM (FB-DIMM) technology in the Intel Xeon 5200 dual-core and 5400 quad-core processor series. FB-DIMM memory provides increased bandwidth and capacity for the Intel Xeon Server Blade; it increases system bandwidth up to 21 GB/sec with DDR2-667 FBD memory. FB-DIMM technology offers better RAS by extending the currently available ECC to include protection for commands and address data. Additionally, FB-DIMM technology automatically retries when an error is detected, allowing for uninterrupted operation in the case of transient errors. Advanced ECC: Conventional ECC is intended to correct 1-bit errors and detect 2-bit errors. Advanced ECC, also known as Chipkill, corrects an error of up to four or eight bits occurring in a DRAM installed on a x4 DRAM or x8 DRAM type DIMM, respectively. Accordingly, the system can operate normally even if one DRAM fails, as illustrated in Figure 14. Figure 14: Advanced ECC (a DRAM fails while the memory controller continues operating). Online Spare Memory: Online spare memory provides the functionality to switch over to spare memory if correctable errors occur frequently. This function is enabled to prevent system downtime caused by a memory fault.
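The 21 GB/sec figure above is consistent with four FB-DIMM channels of DDR2-667 memory. The arithmetic below assumes four channels, each 8 bytes wide at 667 MT/sec; the channel count is an assumption about the blade's memory topology rather than a stated specification.

```python
# Sanity check of the "up to 21 GB/sec with DDR2-667 FBD memory" figure, assuming
# four FB-DIMM channels, each 8 bytes wide at 667 MT/sec (channel count assumed).
channels = 4
bytes_per_transfer = 8                 # 64-bit data path per channel
transfers_per_sec = 667e6              # DDR2-667
print(f"{channels * bytes_per_transfer * transfers_per_sec / 1e9:.1f} GB/sec")  # ~21.3 GB/sec
```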
43. The Fibre Channel switch is managed through a 10/100M Ethernet (RJ-45) or serial port; either port can be used to manage the switch. The following software is supported to manage the Fibre Channel switch:
- Brocade Web Tools: an easy-to-operate tool to monitor and manage the FC switch and SAN fabric, operated from a Web browser.
- Brocade Fabric Watch: a SAN monitor for switches made by Brocade. It constantly monitors the SAN fabric to which the switch is connected, detects any possible fault, and automatically gives the network manager advance warning.
- Brocade ISL Trunking: groups ISLs between switches automatically to optimize the performance of the SAN fabric.

The Fibre Channel HBA supports the Common HBA API, version 1.0 (and parts of version 2.0), developed by SNIA. The Common HBA API is a low-level HBA standard interface for accessing information in the SAN environment and is provided as an API in the standard C language. The network adapter supports SNMP and ACPI management software.

Embedded Gigabit Ethernet Switch

The Embedded Gigabit Ethernet Switch is contained in the Switch & Management Module and is a managed, standards-based Layer 2 switch that provides gigabit networking through cableless LAN connections. The switch provides 12 single or 24 redundant gigabit Ethernet ports for connecting BladeSymphony 1000 Server Blades to other networked resources within the corpor
44. the total memory capacity shown when the system is running. If an uncorrectable error occurs in a DIMM in the primary, the mirror is used for both writing and reading data. If an uncorrectable error occurs in a DIMM in the mirror, the primary is used for both writing and reading data. In this case, the error is logged as a correctable error. If the error cannot be corrected by either the primary or the mirror, it is logged as an uncorrectable error.

On-Module Storage

Intel Xeon Server Blades support up to four internal 2.5-inch SAS hard drives. The SAS architecture, with its SCSI command set, advanced command queuing, and verification/error correction, is ideal for business-critical applications running on BladeSymphony 1000 systems. Traditional SCSI devices share a common bus; at higher signaling rates, parallel SCSI introduces clock skew and signal degradation. Serial Attached SCSI (SAS) solves these problems with a point-to-point architecture where all storage devices connect directly to a SAS port. Point-to-point links increase data throughput and improve the ability to find and repair disk failures. The SAS command set comes from parallel SCSI, the frame formats from Fibre Channel, and the physical characteristics from Serial ATA. SAS links are full duplex, enabling them to send and receive information simultaneously, which reduces latency. The SAS interface also allows multiple links to be combined into wider connections.
45. Ethernet functions for the Embedded Fibre Channel Switch Module (Figure 22, FC HBA Gigabit Ethernet Combo Card block diagram). The card includes the following components:
- One Intel PCIe-to-PCI-X bridge chip
- One Intel Gigabit LAN Controller
- One Hitachi FC Controller (FC HBA, 1 port)

The Hitachi FC Controller (FC HBA) supports the functions in Table 8.

Table 8: Hitachi FC Controller (FC HBA) functions (Function / Details)
- Number of ports: 1
- PCI hot plug: Supported
- Port speed: 1/2/4 Gb/sec
- Supported standards: FC-PH rev 4.3, FC-AL rev 5.8
- Supported topology: FC-AL, point-to-point, switched fabric
- Service class: Class 2, 3
- Number of BB credits: 256
- Maximum buffer size: 2048
- RAS: Error injection, trace, error detection
- Intel Xeon Server Blade boot support: Supported (BIOS)
- Intel Itanium Server Blade boot support: Supported (BIOS/EFI)

Management Software

Developed exclusively for BladeSymphony 1000, the BladeSymphony management software manages all of the hardware components of BladeSymphony 1000 in a unified manner, including the Embedded Fibre Channel Switch Module. In addition, Brocade management software is supported, allowing the Embedded Fibre Channel Switch Module to be managed using existing SAN management software. Each component can also be managed individually.
46. Chapter 2: System Architecture Overview

BladeSymphony 1000 features a very modular design to maximize flexibility and reliability. System elements are redundant and hot-swappable, so the system can be easily expanded without downtime or unnecessary disruption to service levels. The key components of the system, illustrated in Figure 2, consist of:
- Server Blades: up to eight, depending on module, available with Intel Xeon or Itanium processors
- Storage Modules: up to two modules, supporting either three or six SCSI drives
- I/O Modules: available with PCI-X slots, PCIe slots, or an Embedded Fibre Channel Switch; up to two modules per chassis
- Small-footprint chassis containing a passive backplane, which eliminates a number of FC and network cables
- Redundant Power Modules: up to four hot-swap (2+1 or 2+2) modules per chassis for high reliability and availability
- Redundant Cooling Fan Modules: four hot-swap (3+1) per chassis in the standard configuration for high reliability and availability
- Switch & Management Modules: hot-pluggable system management boards, up to two modules per system for high reliability and availability

Figure 2: Key BladeSymphony 1000 components (cooling fan module, I/O module, backplane, server blade, Switch & Management Module, storage module)

The server blades and I/O modules are joined together through a high-speed backplane. Two types of server blades are available.
47. Links can be combined into 2x, 3x, or 4x connections to increase bandwidth.

Chapter 5: I/O Sub-System

I/O Modules

Hitachi engineers go to great lengths to design systems that provide high I/O throughput. BladeSymphony 1000 PCI I/O Modules deliver up to 160 Gb/sec throughput by providing a total of up to 16 PCI slots (8 slots per I/O module). I/O modules accommodate industry-standard PCIe or PCI-X cards, supporting current and future technologies as well as helping to preserve investments in existing PCI cards. In addition, by separating I/O from the server blades, the BladeSymphony 1000 overcomes the space constraint issues of other blade server designs, which can only support smaller PCI cards. Three I/O modules are available: the PCI-X I/O Module, the PCIe I/O Module, and the Embedded Fibre Channel Switch Module. Two I/O modules are supported per chassis.

PCI-X I/O Module

The PCI-X I/O Module supports eight PCI-X cards in total, with a maximum of two PCI-X cards assigned to a single server blade for Chassis A and four for Chassis B. In Chassis A, eight PCI cards can be attached to four server blades at a two-to-one ratio. In Chassis B, four PCI cards can be attached to a single server blade for a four-to-one ratio. Hot plug is supported in specific conditions; see the BladeSymphony 1000 User's Manual for more information. The block diagram for the PCI-X I/O Module is shown in Figure 16.
48. ings because guest operating systems can be simply and directly executed on the virtualized environment without host intervention. Virtage fully utilizes the hypervisor mode created by leveraging Intel's VT-i technology, which is embedded in the Itanium processor. Therefore, Virtage can capture any guest operation requiring host intervention with minimal performance impact to normal operations, and the host intervention code is tuned for the latest Itanium hardware features, minimizing the performance impact to guests.

Virtage offers two modes in which processor resources can be distributed among the different logical partitions: dedicated mode and shared mode, as illustrated in Figure 31 (share or isolate CPU and I/O resources to any partition in the same environment) and sketched below.

Dedicated Mode

Individual processor cores can be assigned to a specific logical partition. Dedicating a core to an LPAR helps ensure that no other partition can take CPU resources away from the assigned partition. This method is highly recommended
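One way to picture the two modes is as a small resource map in which some LPARs own whole cores while the rest share a pool according to a service ratio. The sketch below is purely illustrative: the field names, ratios, and validation rules are assumptions made for the example and do not represent Virtage's actual configuration interface.

```python
# Illustrative sketch of how dedicated-mode and shared-mode LPARs divide the cores
# of a blade. Field names are invented; this is not Virtage configuration syntax.

lpars = {
    "Partition1": {"mode": "dedicated", "cores": [0, 1]},     # cores reserved exclusively
    "Partition2": {"mode": "shared", "service_ratio": 60},    # share of the pooled cores (%)
    "Partition3": {"mode": "shared", "service_ratio": 40},
}
total_cores = 4

dedicated = [c for p in lpars.values() if p["mode"] == "dedicated" for c in p["cores"]]
assert len(dedicated) == len(set(dedicated)), "a core can be dedicated to only one LPAR"

shared_pool = [c for c in range(total_cores) if c not in dedicated]
ratios = [p["service_ratio"] for p in lpars.values() if p["mode"] == "shared"]
assert sum(ratios) == 100, "shared-mode allocations are expressed as portions of the pool"

print(f"dedicated cores: {dedicated}, shared pool: {shared_pool}, ratios: {ratios}")
```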
49. ium and Intel Xeon Server Blades. The SVP console is a console under the SVP and provides a user interface for system management. The SVP console provides the following functions:
- Setup and display of the system's hardware information
- Display and deletion of failure information (RC and detail logs)
- Substitution of front panel operation
- Display of console logs
- Setting of remote failure reporting
- Update of System Firmware, normally using the EFI tool (under study for the IA-32 CPU module)
- Setting of the SVP clock
- Debugging of the system
- Setting of IP address

Remote Console

When running Linux or Windows, a graphical console is available as the OS or System Firmware console. The graphical console consists of VGA, a keyboard, and a mouse. In the Intel Itanium Server Blade, a Windows remote desktop is used; a keyboard and mouse can be connected to the USB port on the rear connector panel. The USB port on the connector panel can be shared among multiple physical partitions by a switchover operation. In the Intel Xeon Server Blade, a special KVM connector can be connected to the KVM port on the front of each server blade to connect the monitor, keyboard, and mouse.

Chapter 8: Management Software

BladeSymphony 1000 delivers an exceptional range of choices and enterprise-class versatility with multi-OS support and comprehensive management software options.
50. mance. BladeSymphony 1000 also delivers large I/O capacity for high throughput.
- Scalability: BladeSymphony 1000 is capable of scaling out to eight Intel Dual-Core Itanium processor-based server blades in the same chassis, or scaling up to two 16-core SMP servers with Intel Dual-Core Itanium processor-based server blades.
- Reliability: Reliability is increased through redundant components, and components are hot-swappable. Other reliability features include Hitachi's mainframe-class memory management; redundant Switch & Management Modules; an extremely reliable backplane and I/O; multi-configurable power supplies for N+1 or full redundancy options; and failover protection following the N+M model (there are M backup servers for every N active servers, so failover is cascading). In the event of hardware failure, the system automatically detects the fault and identifies the problem by indicating the faulty module, allowing immediate failure recovery.
- Configuration Flexibility: BladeSymphony 1000 supports Itanium and/or Xeon processor-based server blades, Windows and/or Linux, and industry-standard, best-of-class PCI cards (PCI-X and PCI Express), providing flexibility and investment protection. The system is extremely expandable in terms of processor cores, I/O slots, memory, and other components.

Data Center Applications

With its
51. n the enterprise data center. The perception persists that they are not ready for enterprise-class workloads. Many people doubt that blade servers can deliver the levels of reliability, scalability, and performance needed to meet the most stringent workloads and service-level agreements, or that they are open and adaptable enough to keep pace with fast-changing business requirements. (1. This section and other sections of this chapter draw on content from "2010 Winning IT Management Strategy" by Nikkei Solutions Business, published by Nikkei BP, August 2006.)

BladeSymphony 1000 (Figure 1) is the first blade system designed specifically for enterprise-class, mission-critical workloads. It is a 10-rack-unit (RU) system that combines Hitachi's Virtage embedded virtualization technology, a choice of Intel Dual-Socket Multi-Core Xeon and/or Intel Dual-Core Itanium Server Blades running Windows or Linux, centralized management capabilities, high-performance I/O, and sophisticated reliability, availability, and serviceability (RAS) features.

Figure 1: BladeSymphony 1000 front view

Enterprise-Class Capabilities

With BladeSymphony 1000, it is now possible for organizations to run mission-critical applications and cons
52. nd complexity with blade servers, BladeSymphony 1000 offers a unique solution with:
- A 10 RU chassis with hot-swappable server blades that run both Windows and Linux
- Support for dual-core Intel Itanium 9000 Series processors
- Support for Intel Xeon Server Blades within the same chassis
- The capability of scaling up or out to offer up to two 16-core SMP servers in a single chassis

In addition, Virtage embedded virtualization technology brings the performance and reliability of mainframe-class virtualization to blade computing, enabling Hitachi to offer the first true enterprise-class blade server. Virtage provides an alternative to third-party software solutions, enabling companies to decrease overhead costs while increasing manageability and performance. This powerful mix of flexibility, integration, and scalability makes BladeSymphony 1000 effective for any enterprise, but particularly for those running large custom applications or high-growth applications. In fact, BladeSymphony 1000 represents a true breakthrough that finally delivers on the promise of blade technology.

For More Information

Additional information about Hitachi America, Ltd. and BladeSymphony products, technologies, and services can be found at www.bladesymphony.com.

HITACHI AMERICA, LTD., SERVER SYSTEMS GROUP, 2000 Sierra Point Parkway, Brisbane, CA 94005-1836, ph: 1-866-HITACHI, email: Se
53. Operating System Support

With support for Microsoft Windows and Red Hat Enterprise Linux, BladeSymphony 1000 gives companies the option of running two of the most popular operating systems at the same time, and in the same chassis, for multiple applications. For example, in a virtualized environment on a single active server blade, customers could allocate 50 percent of CPU resources to Windows and 50 percent to Linux applications, and change the allocation dynamically as workload requirements shift. This provides an unheard-of level of flexibility for accommodating spikes in demand for specific application services.

BladeSymphony Management Suite

BladeSymphony 1000 can be configured to operate across multiple chassis and racks, and this extended system can be managed centrally with BladeSymphony Management Suite software, shown in Figure 30. BladeSymphony Management Suite allows the various system components to be managed through a unified dashboard. For example, when rack management is used, an overview of all BladeSymphony 1000 racks, including which servers, storage, and network devices are mounted, can be quickly and easily obtained. In the event of any system malfunction, the faulty part can be located at a glance.

Figure 30: BladeSymphony Management Suite (operation management, remote management, asset management, deployment management, N+1/N+M management, network management)
54. nology, a choice of industry-standard Intel processor-based blade servers, integrated management capabilities, and powerful, reliable, scalable system resources, enabling companies to consolidate infrastructure, optimize workloads, and run mission-critical applications in a reliable, scalable environment. For organizations interested in reducing the cost, risk, and complexity of IT infrastructure, whether at the edge of the network, the application tier, the database tier, or all three, BladeSymphony 1000 is a system that CIOs can rely on.

Introducing BladeSymphony 1000

BladeSymphony 1000 provides enterprise-class service levels and unprecedented configuration flexibility using open, industry-standard technologies. BladeSymphony 1000 overcomes the constraints of previous-generation blade systems to deliver new capabilities and opportunities in the data center. Blade systems were originally conceived as a means of increasing compute density and saving space in overcrowded data centers; they were intended primarily as a consolidation platform. A single blade enclosure could provide power, cooling, networking, various interconnects, and management, and individual blades could be added as needed to run applications and balance workloads. Typically, blade servers have been deployed at the edge or the Web tier and used for file and print or other non-critical applications. However, blade servers are not yet doing all they are capable of i
55. olidate systems and workloads with confidence at the edge, the application tier, the database tier, or all three. BladeSymphony 1000 allows companies to run any type of workload with enterprise-class performance, reliability, manageability, scalability, and flexibility. For example:
- BladeSymphony 1000 can be deployed at the edge tier, similar to dual-socket blade and rack server offerings from Dell, HP, IBM, and others, but with far greater reliability and scalability than competitive systems.
- BladeSymphony 1000 can be deployed at the application tier, similar to quad-socket blade server offerings from HP and IBM, but with greater reliability and scalability.
- BladeSymphony 1000 is ideal for the database tier, similar to the IBM p Series or HP rack-mount servers, but with a mainframe-class virtualization solution.

Designed to be the first true enterprise-class blade server, the BladeSymphony 1000 provides outstanding levels of performance, scalability, reliability, and configuration flexibility.
- Performance: BladeSymphony 1000 supports both Intel Dual-Core Itanium and Dual-Core or Quad-Core Xeon processors in the same chassis. Utilizing Intel Itanium processors, it delivers 64-bit processing and large memory capacity (up to 256 GB in an SMP configuration) as well as single Intel Xeon blade configurations, allowing organizations to optimize for 64-bit or 32-bit workloads and run all applications at extremely high perfor
56. ores and 256 GB of memory.
- Intel Xeon Server Blade, on page 20, provides details on the Intel Xeon Server Blades.
- I/O Sub-System, on page 26, provides details on the PCI-X, PCIe, and Embedded Fibre Channel Switch modules.
- Chassis Power and Cooling, on page 36, provides details on the two chassis models as well as the Power and Cooling Fan Modules.
- Reliability and Serviceability Features, on page 39, discusses the various reliability, availability, and serviceability features of the BladeSymphony 1000.
- Management Software, on page 45, discusses software management features.
- Virtage, on page 48, provides technical details on Virtage embedded virtualization technology.

Chapter 3: Intel Itanium Server Blade

The BladeSymphony 1000 can support up to eight blades, for a total of up to 16 Itanium CPU sockets or 32 cores, running Microsoft Windows or Linux. Up to four Intel Itanium Server Blades can be connected via the high-speed backplane to form a high-performance SMP server of up to 16 cores. Each Intel Itanium Server Blade, illustrated in Figure 4, includes 16 DDR2 main memory slots; using 4 GB DIMMs, this equates to 64 GB per server blade (16 GB per core) or 256 GB in a 16-core SMP configuration, making it an ideal candidate for large in-memory databases and very large data sets. Each server blade also includes two gigabit Etherne
57. ork function:
- Link aggregation: IEEE 802.3ad trunking, up to 4 ports, 24 groups
- Jumbo frames: packet size up to 9216 bytes
- VLAN: port VLAN and tag VLAN (IEEE 802.1Q), maximum number of definitions 4096
- Management function: SNMP v2c agent, MIB-II (RFC 1213) compliant, interface-extending MIB (RFC 1573, RFC 2233) compliant

SCSI Hard Drive Modules

The BladeSymphony 1000 supports two types of storage modules containing 73 or 146 GB, 15K RPM Ultra320 SCSI drives, in the type B chassis only. The 3x HDD Module can have up to three drives installed. The 6x HDD Module utilizes two server slots and can house up to six drives. Both HDD Modules are illustrated in Figure 24 (1: 3x HDD Unit, 2: 6x HDD Unit). Four out of eight server slots of a server chassis can be used to install storage modules, for a total of either four 3x modules or two 6x modules. The storage modules are mountable only on server slots 4 to 7 in Chassis B. Disk drives installed in HDD Modules can be hot-swapped in a RAID configuration with the RAID controller installed on the PCI card. The HDD Modules support RAID 1, 5, and 0+1, and a spare disk.

A SCSI or RAID type PCI card must be installed in a PCI slot in an I/O module to act as the controller of the storage module. A PCI card is combined with a storage module, as shown in Figure 25, by connecting the SCSI cable from the PCI
58. oss Intel Itanium Server Blades are kept in sync by using a snooping cache coherency protocol. When one of the Intel Itanium processors needs to access memory, the requested address is broadcast by the Hitachi Node Controller. The other Node Controllers that are part of that partition (SMP) listen for (snoop) those broadcasts. The Node Controller keeps track of the memory addresses currently cached in each processor's on-chip caches by assigning a tag for each cache entry. If one of the processors contains the requested data in its cache, it initiates a cache-to-cache transfer. This reduces latency by avoiding the penalty of retrieving data from main memory, and helps maintain consistency by sending the requesting processor the most current data. In order to save bandwidth on the processors' front-side bus, the Node Controller is able to use the L3 Cache Copy Tags to determine which memory address broadcasts its two local processors need to see. If a requested address is not in the processors' caches, the Node Controller filters the request and does not forward it to the local processors. This process is illustrated in Figure 10 (L3 cache copy tag process: 1. cache consistency control within a local node; 2. memory address broadcasting; 3. parallel consistency control over remote nodes; 4. memory data transfer or cache data transfer).
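The filtering role of the copy tags can be pictured with a toy model: each Node Controller remembers which cache-line addresses its local processors currently hold and passes a remote address broadcast onto the local front-side bus only when those tags indicate a possible hit. This is a conceptual illustration of the mechanism described above, not the actual tag format or protocol used by the Hitachi Node Controller.

```python
# Toy model of a copy-tag snoop filter: remote address broadcasts are forwarded
# to the local processors only if the copy tags show the line may be cached locally.

class NodeController:
    def __init__(self, name):
        self.name = name
        self.copy_tags = set()            # addresses currently cached by the local CPUs
        self.forwarded = 0                # snoops actually passed to the local front-side bus

    def local_fill(self, addr):
        self.copy_tags.add(addr)          # a local processor brought this line into its cache

    def snoop(self, addr):
        if addr in self.copy_tags:        # possible hit: interrupt the local CPUs, which may
            self.forwarded += 1           # answer with a cache-to-cache transfer
            return True
        return False                      # filtered: local bus bandwidth is saved

nodes = [NodeController(f"node{i}") for i in range(4)]
nodes[1].local_fill(0x80)                 # only node1's processors hold line 0x80

# node0 misses on 0x80 and broadcasts the address to its peers
hits = [n.name for n in nodes[1:] if n.snoop(0x80)]
print(hits, [n.forwarded for n in nodes])   # -> ['node1'] [0, 1, 0, 0]
```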
59. penalty for accessing remote memory, a number of operating systems are enhanced to improve the performance of NUMA system designs. These operating systems take into account where data is located when scheduling tasks to run on CPUs, using the closest CPU where possible. Some operating systems are able to rearrange the location of data in memory to move it closer to the processors where it is needed. For operating systems that are not NUMA-aware, the BladeSymphony 1000 offers a number of memory interleaving options that can improve performance.

The Node Controllers can connect to up to three other Node Controllers, providing a point-to-point connection between each pair of Node Controllers. The advantage of the point-to-point connections is that they eliminate a bus, which would be prone to contention, and eliminate the crossbar switch, which reduces contention relative to a bus but adds complexity and latency. A remote memory access is streamlined because it only needs to pass through two Node Controllers; this provides lower latency when compared to other SMP systems.
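The claim that a remote access passes through only two Node Controllers follows from the topology: with at most four blades and three node-link ports per controller, every pair of controllers can be directly connected, so no intermediate hop or switch is ever needed. The sketch below builds that full-mesh topology and checks the port count and hop count; the node names are invented, and this is a topology illustration only.

```python
# Topology sketch: four node controllers with three node-link ports each form a
# full mesh, so a remote memory access traverses exactly two controllers.

import itertools

nodes = ["NDC0", "NDC1", "NDC2", "NDC3"]
links = set(itertools.combinations(nodes, 2))       # one point-to-point link per pair

ports_used = {n: sum(n in link for link in links) for n in nodes}
print(ports_used)                                    # -> each controller uses 3 node-link ports

def controllers_on_path(src, dst):
    if src == dst:
        return [src]                                 # local access: only the local controller
    assert tuple(sorted((src, dst))) in links        # a direct link always exists in the mesh
    return [src, dst]                                # remote access: requester plus home node

print(controllers_on_path("NDC0", "NDC3"))           # -> ['NDC0', 'NDC3']
```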
60. ponents, the BladeSymphony 1000 utilizes multiple industry-standard components to cost-effectively increase reliability. Redundant components also increase the serviceability of the system by allowing the system to continue operating while new components are added or failed components are replaced. The BladeSymphony 1000 is designed with features to help ensure the system does not crash due to a failure and to minimize the effects of a failure. These features are listed in Table 11.

Table 11: Reliability features (Function / Feature)
- Quickly detect and diagnose a failed part: BIOS self-diagnostic function; memory scrubbing function (Intel Itanium Server Blade)
- Failure recovery by retry and correction: ECC function for memory, CPU bus, and SMP link (Intel Itanium Server Blade); CRC retry function (PCIe, SCSI)
- Dynamic isolation of a failed part: Advanced ECC; online spare memory
- Redundant configurations: HDD Modules; redundant Switch & Management Modules, Power Modules, and Cooling Fan Modules; memory mirroring (Intel Xeon Server Blades)
- Redundant system configurations: redundant LAN/FC modules; cluster system configuration; N+1/N+M configurations (see the sketch following this table)
- Obtain failure information: isolation of the failed part using the System Event Log, BladeSymphony Management Suite, and Storage Manager; automatic notification of failure by ASSIST via email
- Block failed part: isolation of the failed part upon system boot
- Repair failed part during operation
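The N+1/N+M cluster configurations listed above can be illustrated with a minimal cold-standby sketch: M standby blades back N active blades, and when an active blade fails its workload is brought up on the next available standby. The blade names and the restart step are invented for the example; an actual deployment would rely on the BladeSymphony management software's failover facilities.

```python
# Minimal sketch of the N+M standby model: M standby blades back N active blades,
# and a failed blade's workload is restarted on the next free standby.

active = {"blade1": "app-A", "blade2": "app-B", "blade3": "app-C"}   # N = 3 active blades
standby = ["blade7", "blade8"]                                        # M = 2 standby blades

def fail_over(failed_blade):
    workload = active.pop(failed_blade)        # take the workload off the failed blade
    if not standby:
        raise RuntimeError("no standby blades left")
    replacement = standby.pop(0)               # claim the next available standby blade
    active[replacement] = workload             # bring the workload up on it
    return replacement

print(fail_over("blade2"), active, standby)    # -> blade7 takes over app-B; blade8 remains spare
```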
61. re Channel standard. Eight ports are connected internally to the FC HBAs of up to eight FC HBA Gigabit Ethernet Combo Cards, and six of the ports are external ports used to connect to external storage. Figure 19 depicts the back view of the module and a close-up of the Fibre Channel switch; the block diagram for the module is shown in Figure 20.

Figure 19: Back view of the Embedded Fibre Channel Switch Module with close-up of the Fibre Channel switch (1: serial port; 2: RJ-45 connector; 3: Error LED; 4: LAN LED, link speed, green; 5: LAN LED, link status, orange; 6: SFP port, SFP module; 7: Option LED, green; 8: Fibre Channel switch status LED, green/orange; 9: Power status LED, green; 10: Fibre Channel port status LED, green/orange)

Figure 20: Embedded Fibre Channel Switch Module block diagram (4 Gbps FC switch, management processor and CPLD, RJ-45 management port, 48 V/12 V main/5 V standby power, and PCIe x4 data-bus links to up to eight server blades)
62. rease virtualization efficiency and broaden operating system compatibility Intel Virtualization Technology Intel VT enables one hardware platform to function as multiple virtual platforms Virtualization solutions enhanced by Intel VT allow a software hypervisor to concurrently run multiple operating systems and applications in independent partitions Demand Based Switching The Demand Based Switching DBS function reduces power consumption by enabling the processor to move to power saving mode when under a low system load The DBS function must be supported by the operating system Hitachi Node Controller The Hitachi Node Controller controls various kinds of system busses including the front side bus FSB a PCle link and the node link The Hitachi Node Controller is equipped with three node link ports to combine up to four server blades The server blades connect to each other through the node link maintain cache coherence collectively and can be combined to form a ccNUMA type multiprocessor configuration The Hitachi Node Controller is connected to memory modules through memory controllers The Hitachi Node Controller provides the interconnection between the two processors two memory controllers three PCI bus interfaces and connection to up to three other Intel Itanium Server Blades Three x 5 3 GB sec links can connect up to three other Intel Itanium Server Blades over the backplane in order to provide 8 12 or 16 core SMP capabili
63. rverSales@hal.hitachi.com, web: www.BladeSymphony.com. HITACHI Inspire the Next. © 2008 Hitachi America, Ltd. All rights reserved. Descriptions and specifications contained in this document are subject to change without notice and may differ from country to country. Intel, Itanium, and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark or trademark of Linus Torvalds in the United States and other countries. Windows is the registered trademark of Microsoft Corporation in the United States and other countries. Hitachi is a registered trademark of Hitachi, Ltd. and/or its affiliates. BladeSymphony is a registered trademark of Hitachi, Ltd. in the United States. Other trademarks, service marks, and company names may be trademarks or registered trademarks of their respective owners. 08-08
64. ry correctable errors occur frequently in a memory on BANK 3, the BIOS, which keeps counting the memory correctable errors on each bank, activates the online copy function automatically upon the incidence of the fourth error. All of the data on BANK 3 is copied to spare BANK 4. At the same time, a log is recorded explaining that the data has been copied to the spare bank, and the system displays a message when the online sparing is complete, at which time the system operates with BANK 1, BANK 2, and BANK 4 (12 GB), the same capacity as before the online spare memory operation occurred.

Memory Mirroring

Mirroring the memory provides a level of redundancy that enables the system to continue operating without going down in the case of a memory fault, including a multiple-bit error (Figure 15, memory mirroring). When operating in normal conditions, data is first written to the primary slots (1, 2, 5, and 6) and then to the mirror slots (3, 4, 7, and 8); the arrows in Figure 15 show the relationship between the mirroring source and destination. When data is read out, it is read from either the primary or the mirror. No memory testing of the mirror is carried out when the system is booted after mirroring is set. Accordingly, only half of the total capacity of the memory installed is displayed, both in the memory test screen shown when the system is booted and in the total memory capacity shown when the system is running.
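The online sparing sequence described at the start of this section — the BIOS counts correctable errors per bank and, on the fourth error, copies the affected bank to the spare — can be summarized in a short sketch. It is an illustrative model only; the bank names and the data-copy step are simplified stand-ins for what the firmware actually does, and the threshold of four errors follows the example in the text.

```python
# Sketch of the online-spare policy: correctable errors are counted per bank, and
# when a bank reaches the threshold its contents are copied to the spare bank,
# which then takes over. Illustrative model only.

THRESHOLD = 4

class MemoryBanks:
    def __init__(self, active_banks, spare_bank):
        self.active = dict(active_banks)              # bank name -> data contents
        self.spare = spare_bank                       # name of the reserved spare bank
        self.errors = {b: 0 for b in self.active}
        self.log = []

    def correctable_error(self, bank):
        self.errors[bank] += 1
        if self.errors[bank] >= THRESHOLD and self.spare is not None:
            self.active[self.spare] = self.active.pop(bank)   # copy the bank to the spare
            self.log.append(f"{bank} copied to spare {self.spare}")
            self.spare = None                          # the spare is now in service

banks = MemoryBanks({"BANK1": "d1", "BANK2": "d2", "BANK3": "d3"}, spare_bank="BANK4")
for _ in range(4):                                     # the fourth error on BANK3 triggers the copy
    banks.correctable_error("BANK3")
print(sorted(banks.active), banks.log)                 # -> ['BANK1', 'BANK2', 'BANK4'] [...]
```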
65. s. The Cooling Fan Modules cool the system with variable-speed fans and are installed redundantly, as illustrated in Figure 27. The fans cool the system by pulling air from the front of the chassis to the back. The modules can be hot-plugged, enabling a failed Cooling Fan Module to be replaced without disrupting system operations. Cooling Fan Modules support the following functions:
- Rotation control
- Detection of abnormal rotation
- Indication of the faulty location with LEDs
- Built-in fuse

Figure 27: Top view and cooling fan module numbers (cooling fans 0 to 3, server blade slot numbers, and PCI slot numbers)

Chapter 6: Reliability and Serviceability Features

Reliability, availability, and serviceability are key requirements for platforms running business-critical application services. In today's globally competitive environment, where users access applications round the clock, downtime is unacceptable and can result in lost customers, revenue, and reputation. The BladeSymphony 1000 is designed with a number of features intended to increase the uptime of the system.

Reliability Features

Intended to execute core business operations, the BladeSymphony 1000's modular design increases reliability through the high availability of redundant components. Rather than focus on creating individual highly available com
66. t Humidity (no condensation): 20 to 80 percent.

Module Connections

Chassis A can have up to eight server blades mounted, with two PCI-X slots per server blade; storage modules cannot be mounted on this chassis. Chassis B can have three types of I/O modules mounted. If a PCI-X I/O Module is installed, the chassis can have up to four server blades with four PCI-X slots connected to each server blade, and up to four storage modules or server blades with no PCI-X connection. If a PCIe I/O Module is installed, the chassis can have up to eight server blades with up to two PCIe slots per server blade. If an Embedded Fibre Channel Switch Module is installed, the chassis can have up to eight server blades with up to two Fibre Channel ports and two gigabit Ethernet ports connected per server blade.

Redundant Power Modules

The Power Module accepts a 200-240 VAC input and supplies the sub and main power to the system. Up to four Power Modules are installable in a chassis; they are installed redundantly and support hot swapping. The service processor (SVP) checks the power capacity when it starts up. If the SVP detects redundant power capacity, it boots the system in the normal way. If the SVP cannot detect power redundancy, it boots the system after issuing a warning by illuminating the Warning LED. Hot swapping is not possible in the absence of redundant power.

Redundant Cooling Fan Module
67. t ports, which connect to the internal gigabit Ethernet switch in the chassis, as well as two front-accessible USB 1.1 ports for local media connectivity and one RS-232C port for debugging purposes (Figure 4, Intel Itanium Server Blade).

Intel Itanium Server Blades include the features listed in Table 1.

Table 1: Intel Itanium Server Blade features (Item / Specifications)
- Processors (processor model and maximum number of installed processors):
  - FSB 667 MHz: 2x Intel Itanium Processor 9100 series, 1.66 GHz, 18 MB L3
  - FSB 667 MHz: 2x Intel Itanium Processor 9100 series, 1.42 GHz, 12 MB L3
  - FSB 400 MHz: 2x Intel Itanium 2 Processor, 1.66 GHz, 24 MB L3
- SMP configuration: maximum 16 cores with a four-server-blade configuration
- Memory: Capacity: max 64 GB per server blade if 4 GB DIMMs are used; Type: DDR2 240-pin registered DIMM (1-rank/2-rank); Frequency: DDR2-400 (3-3-3), DDR2-533; DIMM capacities: 512 MB, 1 GB, 2 GB, 4 GB; Configuration: 4-bit x 18 devices or 36 devices; Availability: Advanced ECC, online spare memory, and scrubbing supported
- Backplane interface: Node link for SMP: three interconnect ports; PCI Express: x4 links, 2 ports; Gigabit Ethernet: GbE SerDes, 1.25 Gb/sec, 2 ports, Wake-on-LAN supported; USB: two ports per parti
68. tency memory access Hyper Threading Technology Hyper Threading Technology HT Technology enables one physical processor to transparently appear and behave as two virtual processors to the operating system With HT Technology one dual core processor is able to simultaneously run four software threads HT Technology provides thread level parallelism on each processor resulting in more efficient use of processor resources higher processing throughput and improved performance on multi threaded software as well as increasing the number of users a server can support In order to leverage HT Technology SMP support in the operating system is required Intel Cache Safe Technology and Enhanced Machine Check Architecture Intel Cache Safe Technology is an automatic cache recovery capability that allows the processor and server to continue normal operation in case of cache error It automatically disables cache lines in the event of a cache memory error providing higher levels of uptime www hitachi com BladeSymphony 1000 Architecture White Paper 11 Enhanced Machine Check Architecture provides extensive error detection and address data path correction capabilities as well as system wide ECC protection It detects bit level errors and manages data corruption thereby providing better reliability and uptime Intel VT Virtualization Technology The Dual Core Intel Itanium processor includes hardware assisted virtualization support that helps inc
69. ties. These direct connections provide a distinct performance advantage by eliminating the need for the crossbar switch found in most SMP system designs, which reduces memory access latency across server blades. The Hitachi Node Controller is equipped with three PCIe ports to connect to I/O devices. Two of the PCIe ports are used to connect to the I/O modules; the remaining port connects to an onboard I/O device installed on the server blade, which serves a gigabit Ethernet controller, a USB controller, and COM ports. The Hitachi Node Controller is designed for high-performance processors and memory. Throughput numbers to the processors, memory, and other nodes are listed in Table 3.

Table 3: Bus throughput from the Hitachi Node Controller (Bus / Throughput)
- Processor bus: 400 MHz FSB, 6.4 GB/sec; 667 MHz FSB, 10.6 GB/sec
- Memory bus: 400 MHz FSB, 4.8 GB/sec; 667 MHz FSB, 5.3 GB/sec
- Connection between nodes: 400 MHz FSB, 4.8 GB/sec; 667 MHz FSB, 5.3 GB/sec

Baseboard Management Controller

The Baseboard Management Controller (BMC) is the main controller for the Intelligent Platform Management Interface (IPMI), a common interface to hardware and firmware used to monitor system health and manage the system. The BMC manages the interface between system management software and
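The processor-bus figures in Table 3 follow directly from the width of the front-side bus: a 128-bit (16-byte) data path multiplied by the transfer rate. A quick check of the arithmetic, for illustration only:

```python
# Processor-bus throughput = data-path width (bytes) x transfer rate.
bus_width_bytes = 128 // 8                       # 128-bit FSB data path
for mts in (400e6, 667e6):                       # transfers per second
    print(f"{mts / 1e6:.0f} MT/s -> {bus_width_bytes * mts / 1e9:.2f} GB/s")
# 400 MT/s -> 6.40 GB/s; 667 MT/s -> 10.67 GB/s (quoted as 10.6 GB/s in Table 3)
```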
70. tion
- Fast Ethernet (LAN management): two 100Base/10Base ports
- DC Interface: one port
- USB: two ports per physical partition on the front of the module, compatible with USB 1.1
- Serial: one RS-232C port (for debugging only)
- I/O function, SCSI or RAID: none (an I/O module is required for this function)
- VGA: none (an I/O module is required for this function)

Table 2 provides details on each of the components in the Itanium Blade.

Table 2: Main components of the Intel Itanium Server Blade (Component / Manufacturer / Quantity / Description)
- Processor: Intel; maximum 2; Intel Itanium
- Node Controller (NDC): Hitachi; 1; node controller, controls each system bus
- MC: Hitachi; 2; memory controller
- DIMM: maximum 16; DDR2 SDRAM
71. tures, including a one-to-one relationship between server blades and I/O modules, as well as duplicate paths to I/O and switches. In addition, although the backplane is the only single point of failure in the BladeSymphony 1000, it intentionally uses a passive design that eliminates active components that might fail. The backplane provides connections between server blades, SCSI HDD Modules, PCI slots, FC HBAs, and the LAN switch, thus eliminating a large number of cables, which reduces costs and complexity.

Two types of chassis are available: chassis A and chassis B. Chassis A provides connections between each server blade slot and two slots in a PCI module or Embedded Fibre Channel Switch Module. Chassis B provides four connections from server blade slots 1 to 4 to two slots in a PCI module or Embedded Fibre Channel Switch Module. The connections for both chassis types are illustrated in Figure 26 (chassis model A and chassis model B, showing server blades 1 to 8 and their links to the Switch Module and LAN), and the specifications for each chassis type are listed in Table 10.
72. tworks. The physical NIC shared between VNICs is called a shared physical NIC in the virtualization feature.

Integrated System Management for Virtual Machines

Hitachi provides secure and integrated system management capabilities to reduce the total cost of ownership (TCO) of BladeSymphony 1000 with Virtage. Hitachi offers integrated system management functionality with Virtage virtualization. Administrators can access the integrated remote console via IP to manage and configure the virtualized environments remotely. Virtual partitions can be created, re-configured, and deleted through an integrated console screen that is remotely accessible. An integrated shell console lets users access the guest operating system directly from the console screen for ease of use. The system also monitors CPU utilization rates and allows processor (CPU) utilization to be changed dynamically for partitions operating in CPU shared mode.

Chapter 10: Summary

In the past, inadequate scalability, compromises in I/O and other capabilities, excessive heat generation, and increased complexity in blade environments caused many data center managers to shy away from using blade servers for enterprise applications. BladeSymphony 1000 overcomes these issues, providing a blade solution that delivers server consolidation, centralized administration, reduced cabling, and simplified configuration. For companies seeking to lower cost a
73. y system. As processors are added to a system, the amount of contention for memory access quickly increases to the point where the intended throughput improvement of more processors is significantly diminished: the processors spend more time waiting for data to be supplied from memory than performing useful computing tasks. Conventional uniform-memory systems are not capable of scaling to larger numbers of processors due to memory bus contention. Traditional large SMP systems introduce crossbar switches in order to overcome this problem; however, this approach adds to the memory hierarchy, system complexity, and physical size of the system, and such SMP systems typically do not possess the advantages of blade systems, e.g., compact packaging and flexibility. Leveraging their extensive mainframe design experience, Hitachi employs a number of advanced design techniques to create a blade-based SMP system, allowing the BladeSymphony 1000 to scale up to an eight-socket, 16-core system with as much as 256 GB of memory. The heart of the design is the Hitachi custom-designed Node Controller, which effectively breaks a large system into smaller, more flexible nodes, or server blades, in blade format. These server blades can act as complete independent systems, or up to four server blades can be connected to form a single efficient multiprocessor system, as illustrated in Figure 6.
