IBM System z10 Business Class (z10 BC) Reference Guide
Contents
1. [Table fragment: Provisioning Manager Support; 5 tests per record / up to 15 per record; no expiration / specific term length; No / Yes]

Reliability, Availability and Serviceability (RAS)

In today's on demand environment, downtime is not only unwelcome, it's costly. If your applications aren't consistently available, your business suffers. The damage can extend well beyond the financial realm into key areas of customer loyalty, market competitiveness and regulatory compliance. High on the list of critical business requirements today is the need to keep applications up and running in the event of planned or unplanned disruptions to your systems. While some servers are thought of as offering weeks or even months of up time, System z thinks of this in terms of achieving years. The z10 BC continues our commitment to deliver improvements in hardware Reliability, Availability and Serviceability (RAS) with every new System z server. They include microcode driver enhancements, dynamic segment sparing for memory and fixed HSA, as well as a new I/O drawer design. The z10 BC is a server that can help keep applications up and running in the event of planned or unplanned disruptions to the system. The System z10 BC is designed to deliver the industry-leading reliability, availability and security our customers have come to expect from System z servers. System z10 BC RAS is designed to reduce all sources of outages by reducing unscheduled...
2. Message Time Ordering (Sysplex Timer Connectivity to Coupling Facilities)

As processor and Coupling Facility link technologies have improved, the requirement for time synchronization tolerance between systems in a Parallel Sysplex environment has become ever more rigorous. In order to enable any exchange of timestamped information between systems in a sysplex involving the Coupling Facility to observe the correct time ordering, time stamps are now included in the message transfer protocol between the systems and the Coupling Facility. Therefore, when a Coupling Facility is configured on any System z10 or System z9, the Coupling Facility will require connectivity to the same 9037 Sysplex Timer or Server Time Protocol (STP) configured Coordinated Timing Network (CTN) that the systems in its Parallel Sysplex cluster are using for time synchronization. If the ICF is on the same server as a member of its Parallel Sysplex environment, no additional connectivity is required, since the server already has connectivity to the Sysplex Timer. However, when an ICF is configured on any z10 which does not host any systems in the same Parallel Sysplex cluster, it is necessary to attach the server to the 9037 Sysplex Timer or implement STP.

HMC System Support

The new functions available on the Hardware Management Console (HMC) version 2.10.1, as described, apply exclusively to System z10. However, the HMC version 2.10.1 will continue to...
3. ...FICON Express2, with:
• Increased data transfer rates (bandwidth)
• Improved performance
• Increased number of start I/Os
• Reduced backup windows
• Channel aggregation to help reduce infrastructure costs

For more information about FICON, visit the IBM Redbooks Web site at http://www.redbooks.ibm.com/ and search for SG24-5444. There is also additional FICON and I/O connectivity information at www-03.ibm.com/systems/z/connectivity/.

Concurrent Update

The FICON Express4 SX and LX features may be added to an existing z10 BC concurrently. This concurrent update capability allows you to continue to run workloads through other channels while the new FICON Express4 features are being added. This applies to CHPID types FC and FCP.

Continued Support of Spanned Channels and Logical Partitions

The FICON Express4 and FICON Express2 FICON and FCP channel types (CHPID types FC and FCP) can be defined as a spanned channel and can be shared among logical partitions within and across LCSSs.

Modes of Operation

There are two modes of operation supported by FICON Express4 and FICON Express2 SX and LX. These modes are configured on a channel-by-channel basis; each of the four channels can be configured in either of the two supported modes:
• Fibre Channel (CHPID type FC), which is native FICON or FICON Channel-to-Channel (server-to-server)
• Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices via Fibre Channel...
4. ...combined number of zAAPs and/or zIIPs cannot be more than 2x the number of general purpose processors (CPs). Drawer: 1 CPC drawer. Maximum memory: DIMM sizes 2 GB and 4 GB; up to 248 GB for customer use (fixed HSA not included). June 30, 2009.

System z CF Link Connectivity (Peer Mode only):
• z10 12x PSIFB to z10 12x PSIFB: 6 GBps
• z10 1x PSIFB to z10 1x PSIFB: 5 Gbps
• z9 with PSIFB to z10 12x PSIFB: 3 GBps
• z990/z890: N/A (no PSIFB)
• N-2 server generation connections allowed
• Theoretical maximum rates shown
• 1x PSIFBs support single data rate (SDR) at 2.5 Gbps when connected to a DWDM capable of SDR speed, and double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR speed
• System z9 does NOT support 1x IB-DDR or SDR InfiniBand coupling links

Note: The InfiniBand link data rate of 6 GBps, 3 GBps or 5 Gbps does not represent the performance of the link. The actual performance is dependent upon many factors, including latency through the adapters, cable lengths and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB, the service times of coupling operations are greater and the actual throughput may be less than with ICB links.

Coupling Facility (CF) Level of Support
[Table fragment: CF level / function, including CF Duplexing Enhancements, List Notification Improvements, Structu...]
5. http://www-03.ibm.com/systems/z/gdps/

Fiber Quick Connect for FICON LX Environments

Fiber Quick Connect (FQC), an optional feature on z10 BC, is offered for all FICON LX (single mode fiber) channels, in addition to the current support for ESCON (62.5 micron multimode fiber) channels. FQC is designed to significantly reduce the amount of time required for on-site installation and setup of fiber optic cabling. FQC facilitates adds, moves and changes of ESCON and FICON LX fiber optic cables in the data center, and may reduce fiber connection time by up to 80%. FQC is for factory installation of Fiber Transport System (FTS) fiber harnesses for connection to channels in the I/O drawer. FTS fiber harnesses enable connection to FTS direct-attach fiber trunk cables from IBM Global Technology Services. FQC, coupled with FTS, is a solution designed to help minimize disruptions and to isolate fiber cabling activities away from the active system as much as possible. IBM provides the direct-attach trunk cables, patch panels and Central Patching Location (CPL) hardware, as well as the planning and installation required to complete the total structured connectivity solution. An ESCON example: four trunks, each with 72 fiber pairs, can displace up to 240 fiber optic jumper cables, the maximum quantity of ESCON channels in one I/O drawer. This significantly reduces fiber optic jumper cable bulk. At CPL panels you can select the con...
6. ...number of CPs
• Total of 130 capacity indicators for software settings
• A00 for systems with IFLs or ICFs only

Memory:
• DIMM sizes 2 GB and 4 GB
• Maximum physical memory 256 GB per system
• Minimum physical installed 16 GB, of which 8 GB is for the fixed HSA
• From 8 to 32 GB in 4 GB increments; from 32 to 248 GB in 8 GB increments

z10 BC model upgrades

The z10 BC provides for dynamic and flexible capacity growth for mainframe servers. There are full upgrades within the z10 BC, and upgrades from any z9 BC or z890 to any z10 BC. Temporary capacity upgrades are available through On/Off Capacity on Demand (CoD). [Upgrade path diagram: z890, z10 BC, z10 EC]

For the z10 BC models there are twenty-six capacity settings per engine for central processors (CPs). Sub-capacity processors have availability of z10 BC features/functions, and any-to-any upgradeability is available within the sub-capacity matrix. All CPs must be the same capacity setting size within one z10 BC. All specialty engines run at full speed. The one-for-one entitlement to purchase one zAAP and/or one zIIP for each CP purchased is the same for CPs of any speed.

z10 BC Model Capacity IDs:
• A00, A01 to Z01, A02 to Z02, A03 to Z03, A04 to Z04 and A05 to Z05
• Capacity setting A00 does not have any CP engines
• Nxx, where N = the capacity setting of the engine and xx = the number of PUs characterized as CPs in the CPC
[Capacity setting matrix fragment: Z03, Y03, X03, W03, V03, U03, ...]
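The capacity ID convention just described lends itself to simple decoding. The following Python sketch is an illustrative helper (not an IBM-provided tool) that interprets an ID under the stated rule: one letter for the per-engine capacity setting and two digits for the number of CPs, which is also where the figure of 130 software capacity indicators comes from (26 letters times 5 possible CP counts), with A00 as the special IFL/ICF-only setting.

    # Illustrative decoder for the z10 BC model capacity ID convention described
    # above (letter = per-engine capacity setting, two digits = number of CPs).
    # This is a sketch for explanation only, not an IBM-provided utility.
    import string

    def decode_capacity_id(capacity_id: str) -> dict:
        """Decode an ID such as 'A00', 'C02' or 'Z05'."""
        if len(capacity_id) != 3 or not capacity_id[1:].isdigit():
            raise ValueError("Expected one letter followed by two digits, e.g. 'B03'")
        level, cps = capacity_id[0].upper(), int(capacity_id[1:])
        if level not in string.ascii_uppercase or not 0 <= cps <= 5:
            raise ValueError(f"Not a valid z10 BC capacity ID: {capacity_id}")
        return {
            "capacity_setting": level,   # A (smallest) through Z (full-speed engines)
            "cp_count": cps,             # 0 to 5 CPs; only A00 has zero CPs
            "ifl_icf_only": capacity_id.upper() == "A00",
        }

    # 26 capacity levels x 5 CP counts = 130 capacity indicators, plus the
    # special A00 setting for IFL-only or ICF-only systems.
    print(len(string.ascii_uppercase) * 5)   # 130
    print(decode_capacity_id("C02"))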
7. ...CHPID. There are two PCI-E adapters per feature. OSA-Express3 10 GbE LR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE LR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

OSA-Express3 10 Gigabit Ethernet SR

The OSA-Express3 10 Gigabit Ethernet (GbE) short reach (SR) feature has two ports. Each port resides on a PCI-E adapter and has its own channel path identifier (CHPID). There are two PCI-E adapters per feature. OSA-Express3 10 GbE SR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE SR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

OSA-Express3 Gigabit Ethernet LX

The OSA-Express3 Gigabit Ethernet (GbE) long wavelength (LX) feature has four ports. Two ports reside on a PCI-E adapter and share a channel path identifier (CHPID). There are two PCI-E adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE LX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

OSA-Express3 Gigabit Ethernet...
8. FICON Express4 features are available in long wavelength (LX) and short wavelength (SX). For customers exploiting LX, there are two options available for unrepeated distances of up to 4 kilometers (2.5 miles) or up to 10 kilometers (6.2 miles). Both LX features use 9 micron single mode fiber optic cables. The SX feature uses 50 or 62.5 micron multimode fiber optic cables. Each FICON Express4 feature has four independent channels (ports) and can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The receiving devices must correspond to the appropriate LX or SX feature. The maximum number of FICON Express4 features is 32, using four I/O drawers.

Exclusive to the z10 BC and z9 BC is the availability of lower cost two-port FICON Express4 features: the FICON Express4-2C 4KM LX and the FICON Express4-2C SX. These features support two FICON 4 Gbps LX and SX channels, respectively. The FICON Express4 two-port cards are designed to operate like the four-port card, but with the flexibility of having fewer ports per card.

FICON Express2 Channels

The z10 BC supports carrying forward FICON Express2 channels, each one operating at 1 or 2 Gb/sec auto-negotiated. The FICON Express2 features are available in long wavelength (LX) using 9 micron single mode fiber optic cables, and short wavelength (SX) using 50 and 62.5 micron multimode fiber optic cables. Each FICON Express2 f...
9. ...Inter-Switch Links has the potential for fiber infrastructure cost savings by reducing the number of channels for interconnecting the two sites.

[Diagrams: Two-site non-cascaded director topology, in which each CEC connects to directors in both sites; and two-site cascaded director topology, in which each CEC connects to local directors only and, with Inter-Switch Links (ISLs), less fiber cabling may be needed for cross-site connectivity.]

FCP Channels

z10 BC supports FCP channels, switches and FCP/SCSI disks with full fabric connectivity under Linux on System z and z/VM 5.2 (or later) for Linux as a guest under z/VM, under z/VM 5.2 (or later), and under z/VSE 3.1 for system usage including install and IPL. Support for FCP devices means that z10 BC servers are capable of attaching to select FCP-attached SCSI devices, and may access these devices from Linux on z10 BC and z/VSE. This expanded attachability means that enterprises have more choices for new storage solutions, or may have the ability to use existing storage devices, thus leveraging existing investments and lowering total cost of ownership for their Linux implementations.

The same FICON features used for native FICON channels can be defined to be used for Fibre Channel Protocol (FCP) channels. FCP channels are defined as CHPID type FCP. The 4 Gb/sec capability on the FICON Express4 channel means that 4 Gb/sec link data rates are available for FCP channels as well.

FCP increased performance...
10. ...InterSystem Channel-3 (ISC-3) supports communication over unrepeated distances of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables, and even greater distances with System z qualified optical networking solutions. ISC-3s are supported exclusively in peer mode (CHPID type CFP).

4. 12x InfiniBand coupling links (12x IB-SDR or 12x IB-DDR) offer an alternative to ISC-3 in the data center and facilitate coupling link consolidation; physical links can be shared by multiple systems or CF images on a single system. The 12x IB links support distances up to 150 meters (492 feet) using industry-standard OM3 50 micron fiber optic cables. System z now supports 12x InfiniBand single data rate (12x IB-SDR) coupling link attachment between System z10 and System z9 general purpose servers (no longer limited to a standalone coupling facility).

5. Long Reach 1x InfiniBand coupling links (1x IB-SDR or 1x IB-DDR) are an alternative to ISC-3 and offer greater distances, with support for point-to-point unrepeated connections of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables. Greater distances can be supported with System z qualified optical networking solutions. Long reach 1x InfiniBand coupling links support the same sharing capability as the 12x InfiniBand version, allowing one physical link to be shared across multiple CF images on a system.

Note: The InfiniBand link data rates do not represent the performance of the link.
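For readers wondering how the various quoted rates relate, the short calculation below is illustrative arithmetic added here (it is not taken from this guide): InfiniBand SDR and DDR lanes signal at 2.5 and 5 Gbps respectively and use 8b/10b encoding, so each lane carries 2 or 4 Gbps of data; a 12x link aggregates 12 lanes, which is where the 3 GBps (12x IB-SDR) and 6 GBps (12x IB-DDR) figures quoted elsewhere in this guide come from, while 1x links are quoted at their raw 2.5 or 5 Gbps signaling rate. As the note above stresses, these are theoretical link data rates, not coupling throughput.

    # Illustrative arithmetic relating InfiniBand signaling rates to the link data
    # rates quoted for PSIFB coupling links (3 GBps for 12x IB-SDR, 6 GBps for
    # 12x IB-DDR). 8b/10b line coding means 10 transmitted bits carry 8 data bits.

    def link_data_rate_gbytes(lanes: int, lane_signaling_gbps: float) -> float:
        data_gbps = lanes * lane_signaling_gbps * 8 / 10  # strip 8b/10b coding overhead
        return data_gbps / 8                               # bits per second to bytes per second

    print(link_data_rate_gbytes(12, 2.5))   # 12x IB-SDR -> 3.0 GBps
    print(link_data_rate_gbytes(12, 5.0))   # 12x IB-DDR -> 6.0 GBps
    # 1x links are quoted at the raw signaling rate: 2.5 Gbps (SDR) or 5 Gbps (DDR).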
11. Introducing long reach InfiniBand coupling links

Now InfiniBand can be used for Parallel Sysplex coupling and STP communication at unrepeated distances up to 10 km (6.2 miles), and even greater distances when attached to a qualified optical networking solution. InfiniBand coupling links supporting extended distance are referred to as 1x (one pair of fiber) IB-SDR or 1x IB-DDR.
• Long reach 1x InfiniBand coupling links support single data rate (SDR) at 2.5 gigabits per second (Gbps) when connected to a DWDM capable of SDR
• Long reach 1x InfiniBand coupling links support double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR

Depending on the capability of the attached DWDM, the link data rate will automatically be set to either SDR or DDR.

The IBM System z10 introduces InfiniBand coupling link technology designed to provide a high speed solution and increased distance (150 meters, compared to 10 meters for ICB-4). InfiniBand coupling links also provide the ability to define up to 16 CHPIDs on a single PSIFB port, allowing physical coupling links to be shared by multiple sysplexes. This also provides additional subchannels for Coupling Facility communication, improving scalability and reducing contention in heavily utilized system configurations. It also allows for one CHPID to be directed to one CF, and another CHPID directed to another CF on the same target server, using the same port. Like other coupling links...
12. ...LAN. The feature supports auto-negotiation and automatically adjusts to 10, 100 or 1000 Mbps, depending upon the LAN. When the feature is set to auto-negotiate, the target device must also be set to auto-negotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs. Software updates are required to exploit both ports. When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).

OSA-Express QDIO data connection isolation for the z/VM environment

Multi-tier security zones are fast becoming the network configuration standard for new workloads. Therefore, it is essential for workloads (servers and clients) hosted in a virtualized environment (shared resources) to be protected from intrusion or exposure of data and processes from other workloads. With Queued Direct Input/Output (QDIO) data connection isolation you:
• Have the ability to adhere to security and HIPAA security guidelines and regulations for network isolation between the operating system instances sharing physical network connectivity
• Can establish security zone boundaries that have been defined by your network administrators
13. The z10 BC advances the innovation of the System z10 platform and brings value to a wider audience. It is built using a redesigned air-cooled drawer package, which replaces the prior book concept in order to reduce cost and increase flexibility. A redesigned I/O drawer offers higher availability and can be concurrently added or replaced when at least two drawers are installed. Reduced capacity and priced I/O features will continue to be offered on the z10 BC to help lower your total cost of acquisition. The quad-core design z10 processor chip delivers higher frequency and will be introduced at 3.5 GHz, which can help improve the execution of CPU intensive workloads on the z10 BC. These design approaches facilitate the high availability, dynamic capabilities and lower cost that differentiate this z10 BC from other servers.

The z10 BC supports from 4 GB up to 248 GB of real (customer) memory. This is almost four times the maximum memory available on the z9 BC. The increased available memory on the server can help to benefit workloads that perform better with larger memory configurations, such as DB2, WebSphere and Linux. In addition to the customer purchased memory, an additional 8 GB of memory is included for the Hardware System Area (HSA). The HSA holds the I/O configuration data for the server and is entirely fenced from customer memory. High speed connectivity and high bandwidth out to the data and the network are...
14. ...a system uses when assigning WWPNs for channels utilizing N_Port Identifier Virtualization (NPIV). The tool needs to know the FCP-specific I/O device definitions in the form of a .csv file. This file can either be created manually or exported from Hardware Configuration Definition/Hardware Configuration Manager (HCD/HCM). The tool will then create the WWPN assignments which are required to set up your SAN. The tool will also create a binary configuration file that can later on be imported by your system.

The WWPN prediction tool can be downloaded from Resource Link and is applicable to all FICON channels defined as CHPID type FCP (for communication with SCSI devices). Check Preventive Service Planning (PSP) buckets for required maintenance. http://www.ibm.com/servers/resourcelink

Extended distance FICON: improved performance at extended distance

An enhancement to the industry standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for persistent Information Unit (IU) pacing. Control units that exploit the enhancement to the architecture can increase the pacing count (the number of IUs allowed to be in flight from channel to control unit). Extended distance FICON also allows the channel to remember the last pacing update for use on subsequent operations, to help avoid degradation of performance at the start of each new operation.
15. ...an external Ethernet, or to connect the HiperSockets Layer 2 networks of different servers. The HiperSockets Multiple Write Facility for z10 BC is also supported for Layer 2 HiperSockets devices, thus allowing performance improvements for large Layer 2 datastreams. HiperSockets Layer 2 support is exclusive to System z10 and is supported by z/OS, Linux on System z environments and z/VM for Linux guest exploitation.

HiperSockets Multiple Write Facility for increased performance

Though HiperSockets provides high speed internal TCP/IP connectivity between logical partitions within a System z server, it can draw excessive CPU utilization for large outbound messages. This may lead to increased software licensing cost: HiperSockets large outbound messages are charged to a general CPU, which can incur high general purpose CPU costs. This may also lead to some performance issues due to synchronous application blocking: HiperSockets large outbound messages will block a sending application while synchronously moving data. A solution is the HiperSockets Multiple Write Facility. HiperSockets performance has been enhanced to allow for the streaming of bulk data over a HiperSockets link between logical partitions (LPARs). The receiving LPAR can now process a much larger amount of data per I/O interrupt. This enhancement is transparent to the operating system in the receiving LPAR. HiperSockets Multiple Write Facility, w...
16. ...analysis. The information can be displayed and is saved in the system log.

Serviceability Enhancements

Request Node Identification Data (RNID) is designed to facilitate the resolution of fiber optic cabling problems. You can now request RNID data for a device attached to a native FICON channel.

Local Area Network (LAN) connectivity: OSA-Express3, the newest family of LAN adapters

The third generation of Open Systems Adapter-Express (OSA-Express3) features have been introduced to help reduce latency and overhead, deliver double the port density of OSA-Express2 and provide increased throughput. Choose the OSA-Express3 features that best meet your business requirements. To meet the demands of your applications, provide granularity, facilitate redundant paths and satisfy your infrastructure requirements, there are seven features from which to choose. In the 10 GbE environment, Short Reach (SR) is being offered for the first time.

Feature | Infrastructure | Ports per feature
OSA-Express3 GbE LX | single mode fiber | 4
OSA-Express3 10 GbE LR | single mode fiber | 2
OSA-Express3 GbE SX | multimode fiber | 4
OSA-Express3 10 GbE SR | multimode fiber | 2
OSA-Express3 2P GbE SX | multimode fiber | 2
OSA-Express3 1000BASE-T | copper | 4
OSA-Express3 2P 1000BASE-T | copper | 2

Note that software PTFs or a new release may be required, depending on CHPID type, to support all ports.
17. ...and should always be referred to for detailed planning information.

[Frame layout diagram: A-Frame front and rear views, showing the Integrated Battery Features, system power supply, Central Processor Complex (CPC) drawer, Support Elements, and I/O drawers 1 through 4.]

z10 BC System Power:
• Normal room (<28 deg C): 1 I/O drawer 3.686 kW; 2 I/O drawers 4.542 kW; 3 I/O drawers 5.308 kW; 4 I/O drawers 6.253 kW
• Warm room: 1 I/O drawer 4.339 kW; 2 I/O drawers 5.315 kW; 3 I/O drawers 6.291 kW; 4 I/O drawers 7.266 kW

z10 BC Highlights and Physical Dimensions (z10 BC / z9 BC):
• Number of frames: 1 frame / 1 frame
• Height with covers: 201.5 cm (79.3 in), 42 EIA / 194.1 cm (76.4 in), 40 EIA
• Width with covers: 77.0 cm (30.3 in) / 78.5 cm (30.9 in)
• Depth with covers: 180.6 cm (71.1 in) / 157.7 cm (62.1 in)
• Height reduction: 180.9 cm (71.2 in) / 178.5 cm (70.3 in)
• Machine area: 1.42 sq m (15.22 sq ft) / 1.24 sq m (13.31 sq ft)
• Service clearance: 3.50 sq m (37.62 sq ft) / 3.03 sq m (32.61 sq ft)
• IBF: contained within the frame / contained within the frame

Maximum of 480 CHPIDs, four I/O drawers, 32 I/O...
18. ...both clear key and secure key operations.

Crypto Express2-1P

An option of one PCI-X adapter per feature (in addition to the current two PCI-X adapters per feature) is being offered for the z10 BC, to help satisfy small and midrange security requirements while maintaining high performance. The Crypto Express2-1P feature, with one PCI-X adapter, can continue to be defined as either a Coprocessor or an Accelerator. A minimum of two features must be ordered.

Additional cryptographic functions and features with Crypto Express2 and Crypto Express2-1P:

Key management: Added key management for remote loading of ATM and Point of Sale (POS) keys. The elimination of manual key entry is designed to reduce downtime due to key entry errors, service calls and key management costs.

Improved key exchange: Added improved key exchange with non-CCA cryptographic systems. New features added to IBM Common Cryptographic Architecture (CCA) are designed to enhance the ability to exchange keys between CCA systems and systems that do not use control vectors, by allowing the CCA system owner to define permitted types of key import and export while preventing uncontrolled key exchange that can open the system to an increased threat of attack. These are supported by z/OS and by z/VM for guest exploitation.

Support for ISO 16609: Support for ISO 16609 CBC Mode T-DES Message Authentication (MAC) requirements. ISO 16609 CBC Mode T-DES MAC is accessi...
19. ...by some customers. The STP design provides continuous availability of ETS while maintaining the special roles of PTS and BTS assigned by the customer. The availability improvement is available when the ETS is configured as an NTP server, or as an NTP server using PPS.

NTP Server on Hardware Management Console

Improved security can be obtained by providing NTP server support on the HMC. If an NTP server (with or without PPS) is configured as the ETS device for STP, it needs to be attached directly to the Support Element (SE) LAN. The SE LAN is considered by many users to be a private, dedicated LAN, to be kept as isolated as possible from the intranet or Internet. Since the HMC is normally attached to the SE LAN, providing an NTP server capability on the HMC addresses the potential security concerns most users may have about attaching NTP servers to the SE LAN. The HMC, via a separate LAN connection, can access an NTP server available either on the intranet or Internet for its time source. Note that when using the HMC as the NTP server, there is no pulse per second capability available; therefore you should not configure the ETS to be an NTP server using PPS.

Enhanced STP recovery when Internal Battery Feature is in use

Improved availability can be obtained when power has failed for a single server (PTS/CTS), or when there is a site power outage in a multi-site configuration where the PTS/CTS is installed (the site with...
20. ...capabilities are being made available, allowing selected virtual resources to be defined. In addition, further enhancements have been made for managing defined virtual resources. Enhancements are designed to deliver out-of-the-box, integrated, graphical user interface based (GUI-based) management of selected parts of z/VM. This is especially targeted to deliver ease of use for enterprises new to System z. This helps to avoid the purchase and installation of additional hardware or software, which may include complicated setup procedures. You can more seamlessly perform hardware and selected operating system management using the HMC Web browser based user interface. Support for HMC z/VM tower systems management enhancements is exclusive to z/VM 5.4 and the System z10.

Enhanced installation support for z/VM using the HMC

HMC version 2.10.1, along with Support Element (SE) version 2.10.1 on z10 BC and corresponding z/VM 5.4 support, will now give you the ability to install Linux on System z in a z/VM virtual machine using the HMC DVD drive. This new function does not require an external network connection between z/VM and the HMC, but instead uses the existing communication path between the HMC and SE. This support is intended for customers who have no alternative, such as a LAN-based server, for serving the DVD contents for Linux installations. The elapsed time for installation using the HMC DVD drive can be an order of...
21. ...help customers scale their Crypto Express2 investments for their business needs. Crypto Express2 is also available on z10 BC as a single PCI-X adapter, which may be defined as either a coprocessor or an accelerator. System z security is one of the many reasons why the world's top banks and retailers rely on the IBM mainframe to help secure sensitive business transactions. z Can Do IT securely.

Cryptography

The z10 BC includes both standard cryptographic hardware and optional cryptographic features, for flexibility and growth capability. IBM has a long history of providing hardware cryptographic solutions, from the development of the Data Encryption Standard (DES) in the 1970s to delivering integrated cryptographic hardware in a server to achieve the US Government's highest FIPS 140-2 Level 4 rating for secure cryptographic hardware. The IBM System z10 BC cryptographic functions include the full range of cryptographic operations needed for e-business, e-commerce and financial institution applications. In addition, custom cryptographic functions can be added to the set of functions that the z10 BC offers.

New integrated clear key encryption security features on z10 BC include support for a higher advanced encryption standard and more secure hashing algorithms. Performing these functions in hardware is designed to contribute to improved performance. Enhancements to eliminate preplanning in the cryptography area...
22. ...include the System z10 function to dynamically add Crypto to a logical partition. Changes to image profiles, to support Crypto Express2 features, are available without an outage to the logical partition. Crypto Express2 features can also be dynamically deleted or moved.

CP Assist for Cryptographic Function (CPACF)

CPACF supports clear key encryption. All CPACF functions can be invoked by problem state instructions defined by an extension of System z architecture. The function is activated using a no-charge enablement feature, and offers the following on every CPACF that is shared between two Processor Units (PUs) and designated as CPs and/or Integrated Facility for Linux (IFL):
• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
• Pseudo Random Number Generation (PRNG)

Enhancements to CP Assist for Cryptographic Function (CPACF)

CPACF has been enhanced to include support of the following on CPs and IFLs:
• Advanced Encryption Standard (AES) for 192-bit keys and 256-bit keys
• SHA-384 and SHA-512 bit for message digest

SHA-1, SHA-256 and SHA-512 are shipped enabled and do not require the enablement feature. Support for CPACF is also available using the Integrated Cryptographic Service Facility (ICSF). ICSF is a component of z/OS and is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express2, to balance the workload and help address the b...
23. ...information on the U.S. government requirements can be found at http://www.whitehouse.gov/omb/memoranda/fy2005/m05-22.pdf and http://www.whitehouse.gov/omb/egov/documents/IPv6_FAQs.pdf

HMC/SE Console Messenger

On systems prior to System z9, the remote browser capability was limited to the Platform Independent Remote Console (PIRC), with a very small subset of functionality. Full functionality via Desktop On-Call (DTOC) was limited to one user at a time; it was slow and was rarely used. With System z9, full functionality to multiple users was delivered with a fast Web browser solution. You liked this, but requested the ability to communicate with other remote users.

There is now a new Console Manager task that offers basic messaging capabilities to allow system operators or administrators to coordinate their activities. The new task may be invoked directly, or via a new option in Users and Tasks. This capability is available for HMC and SE local and remote users, permitting interactive plain text communication between two users, and also allowing a user to broadcast a plain text message to all users. This feature is a limited instant messenger application and does not interact with other instant messengers.

HMC z/VM Tower System Management Enhancements

Building upon the previous z/VM systems management support from the Hardware Management Console (HMC), which offered management support for already defined virtual resources, new HMC...
24. ...key entry errors
• Reduces service call and key management costs
• Improves the ability to manage ATM conversions and upgrades

Integrated Cryptographic Service Facility (ICSF), together with Crypto Express2, supports the basic mechanisms in Remote Key Loading. The implementation offers a secure bridge between the highly secure Common Cryptographic Architecture (CCA) environment and the various formats and encryption schemes offered by the ATM vendors. The following ICSF services are offered for Remote Key Loading:
• Trusted Block Create (CSNDTBC): this callable service is used to create a trusted block containing a public key and some processing rules.
• Remote Key Export (CSNDRKX): this callable service uses the trusted block to generate or export DES keys for local use and for distribution to an ATM or other remote device.
Refer to the Application Programmer's Guide, SA22-7522, for additional details.

Improved Key Exchange With Non-CCA Cryptographic Systems

IBM Common Cryptographic Architecture (CCA) employs control vectors to control usage of cryptographic keys. Non-CCA systems use other mechanisms, or may use keys that have no associated control information. This enhancement provides the ability to exchange keys between CCA systems and systems that do not use control vectors. Additionally, it allows the CCA system owner to define permitted types of key import and export, which can help to prevent uncontrolled key exchange that can open the system to an increased threat of attack.
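CSNDTBC and CSNDRKX are z/OS ICSF callable services, so their invocation details belong to the ICSF documentation rather than this guide. Purely as a conceptual illustration of the underlying pattern (generate a key for the remote terminal, wrap it under a transport key already shared with that device, and ship only the wrapped blob), here is a short Python sketch using the open-source cryptography package. It uses AES key wrap as a stand-in and does not reflect CCA trusted blocks, control vectors, or the secure-hardware key handling described above; the key values shown are placeholders.

    # Conceptual sketch of the remote-key-loading pattern only. It is NOT the
    # ICSF/CCA implementation: on z/OS, Trusted Block Create (CSNDTBC) and Remote
    # Key Export (CSNDRKX), backed by Crypto Express2, perform the real work inside
    # secure hardware, and keys are never exposed in the clear as they are here.
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    transport_key = os.urandom(32)  # hypothetical key already shared with the ATM
    terminal_key = os.urandom(24)   # new triple-length (TDES-style) key material

    # Host side: wrap the terminal key so it never travels in the clear.
    wrapped_blob = aes_key_wrap(transport_key, terminal_key)

    # Remote device side: unwrap with the same transport key and load the result.
    assert aes_key_unwrap(transport_key, wrapped_blob) == terminal_key
    print("wrapped key blob:", wrapped_blob.hex())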
25. ...management enhancements for Linux and other virtual images. For the most current information on z/VM, refer to the z/VM Web site at http://www.vm.ibm.com/

z/VSE

z/VSE 4.1, the latest advance in the ongoing evolution of VSE, is designed to help address the needs of VSE clients with growing core VSE workloads and/or those who wish to exploit Linux on System z for new Web-based business solutions and infrastructure simplification. z/VSE 4.1 is designed to support:
• z/Architecture mode only
• 64-bit real addressing and up to 8 GB of processor storage
• System z encryption technology, including CPACF, configurable Crypto Express2, and TS1120 encrypting tape
• Midrange Workload License Charge (MWLC) pricing, including full-capacity and sub-capacity options

IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to help address the needs of VSE clients with growing core VSE workloads. z/VSE V4.2 is designed to support:
• More than 255 VSE tasks, to help clients grow their CICS workloads and to ease migration from CS/VSE to CICS Transaction Server for VSE/ESA
• Up to 32 GB of processor storage
• Sub-Capacity Reporting Tool running natively
• Encryption Facility for z/VSE as an optional priced feature
• IBM System Storage TS3400 Tape Library (via the TS1120 Controller)
• IBM System Storage TS7740 Virtualization Engine Release 1.3

z/VSE V4.2 plans to continue the focus on hybrid solutions exploiting z/VSE and...
26. ...module 1 is a prerequisite for this module. 3. WebSphere Application Server health check. For a detailed description of this service, refer to Services Announcement 608-041 (RFA47367), dated June 24, 2008.

Implementation Services for Parallel Sysplex DB2 Data Sharing

To assist with the assessment, planning, implementation, testing, and backup and recovery of a System z DB2 data sharing environment, IBM Global Technology Services announced and made available the IBM Implementation Services for Parallel Sysplex Middleware: DB2 data sharing, on February 26, 2008.

This DB2 data sharing service is designed for clients who want to:
1. Enhance the availability of data
2. Enable applications to take full utilization of all servers' resources
3. Share application system resources to meet business goals
4. Manage multiple systems as a single system from a single point of control
5. Respond to unpredicted growth by quickly adding computing power to match business requirements without disruption
6. Build on the current investments in hardware, software, applications and skills while potentially reducing computing costs

The offering consists of six selectable modules; each is a stand-alone module that can be individually acquired. The first module is an infrastructure assessment module, followed by five modules which address the following DB2 data sharing disciplines:
1. DB2 data sharing planning
2. DB2 data sharing implementation...
27. ...passwords and password phrases. The z/VM SSL server now operates in a CMS environment, instead of requiring a Linux distribution, thus allowing encryption services to be deployed more quickly and helping to simplify installation, service, and release-to-release migration.

The z/VM hypervisor is designed to help clients extend the business value of mainframe technology across the enterprise by integrating applications and data while providing exceptional levels of availability, security and operational ease. z/VM virtualization technology is designed to provide the capability for clients to run hundreds to thousands of Linux servers in a single mainframe, together with other System z operating systems such as z/OS, or as a large-scale Linux-only enterprise server solution. z/VM V5.4 can also help to improve productivity by hosting non-Linux workloads such as z/OS, z/VSE and z/TPF.

On August 5, 2008, IBM announced z/VM 5.4. Enhancements in z/VM 5.4 include:
• Increased flexibility, with support for new z/VM-mode logical partitions
• Dynamic addition of memory to an active z/VM LPAR, by exploiting System z dynamic storage-reconfiguration capabilities
• Enhanced physical connectivity, by exploiting all OSA-Express3 ports
• Capability to install Linux on System z from the HMC without requiring an external network connection
• Enhancements for scalability and constraint relief
• Operation of the SSL server in a CMS environment
• Systems...
28. ...'s Management Service and Directory Service. It will register:
• Platform: worldwide node name (node name for the platform, the same for all channels); platform type (host computer); platform name (includes vendor ID, product ID and vendor-specific data from the node descriptor)
• Channel: worldwide port name (WWPN); node port identification (N_PORT ID); FC-4 types supported (always 0x1B, and additionally 0x1C if any Channel-to-Channel (CTC) control units are defined on that channel); classes of service supported by the channel

Platform registration is a service defined in the Fibre Channel Generic Services 4 (FC-GS-4) standard (INCITS/ANSI T11 group). Platform and name server registration applies to all of the FICON Express4, FICON Express2 and FICON Express features (CHPID type FC). This support is exclusive to System z10 and is transparent to operating systems.

Preplanning and setup of SAN for a System z10 environment

The worldwide port name (WWPN) prediction tool is now available to assist you with preplanning of your Storage Area Network (SAN) environment prior to the installation of your System z10 server. This standalone tool is designed to allow you to set up your SAN in advance, so that you can be up and running much faster once the server is installed. The tool assigns WWPNs to each virtual Fibre Channel Protocol (FCP) channel/port using the same WWPN assignment algo...
29. ...slots (8 I/O slots per I/O drawer).

z10 BC Configuration Detail

Feature | Minimum features | Maximum features | Maximum connections | Increments per feature | Purchase increments
ESCON | 0 | 32 | 480 channels | 16 channels (1 reserved as a spare) | 4 channels
FICON Express4 | 0 | 32 | 64/128 channels | 2/4 channels | 2/4 channels
FICON Express2 | 0 | 20 | 80 channels | 4 channels | 4 channels
FICON Express | 0 | 20 | 40 channels | 2 channels | 2 channels
ICB-4 | 0 | 6 | 12 links | 2 links | 1 link
ISC-3 | 0 | 12 | 48 links | 4 links | 1 link
1x PSIFB | 0 | 6 | 12 links | 2 links | 2 links
12x PSIFB | 0 | 6 | 12 links | 2 links | 2 links
OSA-Express3 | 0 | 24 | 48/96 ports | 2 or 4 ports | 2 ports/4 ports
OSA-Express2 | 0 | 24 | 24/48 ports | 1 or 2 ports | 1 port/2 ports
Crypto Express2 | 0 | 8 | 8/16 PCI-X adapters | 1/2 PCI-X adapters | 2 PCI-X adapters

1. A minimum of one I/O feature (ESCON, FICON) or Coupling Link (PSIFB, ICB-4, ISC-3) is required.
2. The maximum number of external Coupling Links combined cannot exceed 56 per server. There is a maximum of 64 coupling link CHPIDs per server (ICs, ICB-4s, active ISC-3 links and IFBs).
3. ICB-4 and 12x IB-DDR are not included in the maximum feature count for I/O slots, but are included in the CHPID count.
4. The initial order of Crypto Express2 is 2/4 PCI-X adapters (two features). Each PCI-X adapter can be configured as a coprocessor or an accelerator.
FICON Express4-2C 4KM LX has two channels per feature. OSA-Express3 GbE and 1000BASE-T have 2-port and 4-port...
30. ...support. Delivering the technologies required to address today's IT challenges also takes much more than just a server; it requires all of the system elements to be working together. IBM System z10 operating systems and servers are designed with a collaborative approach to exploit each other's strengths.

The z10 BC is also able to exploit numerous operating systems concurrently on a single server; these include z/OS, z/VM, z/VSE, z/TPF, TPF and Linux for System z. These operating systems are designed to support existing application investments without anticipated change, and help you realize the benefits of the z10 BC. z10 BC: the new business equation.

z/OS

On August 5, 2008, IBM announced z/OS V1.10. This release of the z/OS operating system builds on leadership capabilities, enhances time-tested technologies and leverages deep synergies with the IBM System z10 and IBM System Storage family of products. z/OS V1.10 supports new capabilities designed to provide:
• Storage scalability. Extended Address Volumes (EAVs) enable you to define volumes as large as 223 GB to relieve storage constraints, and help you simplify storage management by providing the ability to manage fewer, large volumes as opposed to many small volumes.
• Application and data serving scalability. Up to 64 engines, up to 1.5 TB per server with up to 1.0 TB of real memory per LPAR, and support for large (1 MB) pages on the System z10 can help provide scale a...
31. ...support the systems as shown. The 2.10.1 HMC will continue to support up to two 10/100 Mbps Ethernet LANs. Token Ring LANs are not supported. The 2.10.1 HMC applications have been updated to support HMC hardware without a diskette drive; DVD-RAM, CD-ROM and/or USB flash memory drive media will be used.

Family | Machine Type | Firmware Driver | SE Version
z10 BC | 2098 | 76 | 2.10.1
z10 EC | 2097 | 73 | 2.10.0
z9 BC | 2096 | 67 | 2.9.2
z9 EC | 2094 | 67 | 2.9.2
z890 | 2086 | 55 | 1.8.2
z990 | 2084 | 55 | 1.8.2
z800 | 2066 | 3G | 1.7.3
z900 | 2064 | 3G | 1.7.3
9672 G6 | 9672/9674 | 26 | 1.6.2
9672 G5 | 9672/9674 | 26 | 1.6.2

Internet Protocol Version 6 (IPv6)

HMC version 2.10.1 and Support Element (SE) version 2.10.1 can now communicate using IP Version 4 (IPv4), IP Version 6 (IPv6), or both. It is no longer necessary to assign a static IP address to an SE if it only needs to communicate with HMCs on the same subnet. An HMC and SE can use IPv6 link-local addresses to communicate with each other.

HMC/SE support is addressing the following requirements:
• The availability of addresses in the IPv4 address space is becoming increasingly scarce
• The demand for IPv6 support is high in Asia/Pacific countries, since many companies are deploying IPv6
• The U.S. Department of Defense and other U.S. government agencies are requiring IPv6 support for any products purchased after June 2008

More...
32. ...the BTS is a different site, not affected by the power outage. If an Internal Battery Feature (IBF) is installed on your System z server, STP now has the capability of receiving notification that customer power has failed and that the IBF is engaged. When STP receives this notification from a server that has the role of the PTS/CTS, STP can automatically reassign the role of the CTS to the BTS, thus automating the recovery action and improving availability.

STP configuration and time information saved across Power-on-Resets (PORs) or power outages

This enhancement delivers system management improvements by saving the STP configuration across PORs and power failures for a single-server, STP-only CTN. Previously, if the server was PORed or experienced a power outage, the time and the assignment of the PTS and CTS roles would have to be reinitialized. You will no longer need to reinitialize the time or reassign the role of PTS/CTS across POR or power outage events. Note that this enhancement is also available on the z990 and z890 servers.

Application Programming Interface (API) to automate STP CTN reconfiguration

The concept of "a pair and a spare" has been around since the original Sysplex Couple Data Sets (CDSs). If the primary CDS becomes unavailable, the backup CDS would take over. Many sites have had automation routines bring a new backup CDS online to avoid a single point of failure. This idea is being extended to STP. With thi...
33. ...to service the network, and fewer required resources:
• Fewer CHPIDs to define and manage
• Reduction in the number of required I/O slots
• Possible reduction in the number of I/O drawers
• Double the port density of OSA-Express2
• A solution to the requirement for more than 48 LAN ports (now up to 96 ports)

The OSA-Express3 features are exclusive to System z10.

OSA-Express2 availability

OSA-Express2 Gigabit Ethernet and 1000BASE-T Ethernet continue to be available for ordering, for a limited time, if you are not yet in a position to migrate to the latest release of the operating system for exploitation of two ports per PCI-E adapter, and if you are not resource constrained.

Historical summary: functions that continue to be supported by OSA-Express3 and OSA-Express2:
• Queued Direct Input/Output (QDIO): uses memory queues and a signaling protocol to directly exchange data between the OSA microprocessor and the network software for high speed communication.
  - QDIO Layer 2 (Link layer): for IP (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA) workloads. Using this mode, the Open Systems Adapter (OSA) is protocol-independent and Layer 3-independent. Packet forwarding decisions are based upon the Medium Access Control (MAC) address.
  - QDIO Layer 3 (Network or IP layer): for IP workloads. Packet forwarding decisions are based upon the IP address. All guests share the OSA's MAC address.
• Jumbo frames...
34. ...z10 BC, z9 EC and z9 BC.

Smart Card Reader

Support for an optional Smart Card Reader attached to the TKE 5.3 workstation allows for the use of smart cards that contain an embedded microprocessor and associated memory for data storage. Access to, and the use of, confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN). TKE 5.3 LIC has added the capability to store key parts on DVD-RAMs, and continues to support the ability to store key parts on paper, or optionally on a smart card. TKE 5.3 LIC has limited the use of floppy diskettes to read-only. The TKE 5.3 LIC can remotely control host cryptographic coprocessors using a password-protected authority signature key pair, either in a binary file or on a smart card. The Smart Card Reader attached to a TKE workstation with the 5.3 level of LIC will support System z10 BC, z10 EC, z9 EC and z9 BC. However, TKE workstations with 5.0, 5.1 and 5.2 LIC must be upgraded to TKE 5.3 LIC.

TKE additional smart cards (new feature)

You have the capability to order Java-based blank smart cards, which offer a highly efficient cryptographic and data management application built in to read-only memory for storage of keys, certificates, passwords, applications and data. The TKE blank smart cards are compliant with FIPS 140-2 Level 2. When you place an order for a quantity of one, you are shipped 10 smart cards.

System z10 BC cryptographic migration...
35. ...6 provides CF Duplexing enhancements, described previously in the section titled Coupling Facility Control Code (CFCC) Level 16, and a robust failure recovery capability.

Parallel Sysplex Coupling Connectivity

The Coupling Facilities communicate with z/OS images in the Parallel Sysplex environment over specialized high speed links. As processor performance increases, it is important to also use faster links so that link performance does not become constrained. The performance, availability and distance requirements of a Parallel Sysplex environment are the key factors that will identify the appropriate connectivity option for a given configuration.

When connecting between System z10, System z9 and z990/z890 servers, the links must be configured to operate in Peer Mode. This allows for higher data transfer rates to and from the Coupling Facilities. The peer link acts simultaneously as both a CF Sender and CF Receiver link, reducing the number of links required. Larger and more data buffers and improved protocols may also improve long distance performance.

[Coupling connectivity diagram: 12x PSIFB up to 150 meters and 1x PSIFB up to 10/100 km between z10 EC/z10 BC servers; 12x PSIFB up to 150 meters to a z9 EC or z9 BC standalone CF; new ICB-4 cable, ICB-4 at 10 meters, and ISC-3 up to 10/100 km to z10 EC, z10 BC, z9 EC, z9 BC, z990 and z890; I/O drawer.]
36. ...zAAP with z/OS V1.8 include all Java processed via the IBM Solution Developers Kit (SDK), and XML processed locally via z/OS XML System Services.

The System z10 Integrated Information Processor (zIIP) is designed to support select data and transaction processing and network workloads, and thereby make the consolidation of these workloads on to the System z platform more cost effective. Workloads eligible for the zIIP with z/OS V1.7 or later include remote connectivity to DB2 to help support these workloads: Business Intelligence (BI), Enterprise Relationship Management (ERP), Customer Relationship Management (CRM) and Extensible Markup Language (XML) applications. In addition to supporting remote connectivity to DB2 (via DRDA over TCP/IP), the zIIP also supports DB2 long-running parallel queries, a workload integral to Business Intelligence and Data Warehousing solutions. The zIIP with z/OS V1.8 also supports IPSec processing, making the zIIP an IPSec encryption engine helpful in creating highly secure connections in an enterprise. In addition, the zIIP with z/OS V1.10 supports select z/OS Global Mirror (formerly called Extended Remote Copy, XRC) disk copy service functions. z/OS V1.10 also introduces zIIP-Assisted HiperSockets for large messages (available on System z10 servers only).

The new capability provided with z/VM-mode partitions increases flexibility and simplifies systems management by allowing z/VM 5.4 to manage guests t...
37. ...Actual results may vary. Performance information is provided "AS IS" and no warranties or guarantees are expressed or implied by IBM. Photographs shown are of engineering prototypes; changes may be incorporated in production models. This equipment is subject to all applicable FCC rules and will comply with them upon delivery. Information concerning non-IBM products was obtained from the suppliers of those products; questions concerning those products should be directed to those suppliers. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics...

ZS003021-USEN-02
38. ...IBM z10 Enterprise Quad Core processor chip running at 3.5 GHz, designed to help improve CPU intensive workloads. The z10 BC, like its predecessors, supports 24-, 31- and 64-bit addressing, as well as multiple arithmetic formats. High performance logical partitioning via Processor Resource/Systems Manager (PR/SM) is achieved by industry-leading virtualization support provided by z/VM.

A change to the z/Architecture on z10 BC is designed to allow memory to be extended to support large (1 megabyte, MB) pages. Use of large pages can improve CPU utilization for exploiting applications. Large page support is primarily of benefit for long-running applications that are memory-access intensive. Large page is not recommended for general use. Short-lived processes with small working sets are normally not good candidates for large pages. Large page support is exclusive to System z10 running either z/OS or Linux on System z.

z10 BC Architecture

Rich CISC Instruction Set Architecture (ISA):
• 894 instructions (668 implemented entirely in hardware)
• Multiple address spaces, robust inter-process security
• Multiple arithmetic formats

Architectural extensions for z10 BC:
• 50 instructions added to z10 BC to improve compiled code efficiency
• Enablement for software/hardware cache optimization
• Support for 1 MB page frames
• Full hardware support for the Hardware Decimal Floating-point Unit (HDFU)

z/Architecture operating system...
39. ...BP/CFP per server. For each MBA fanout installed for ICB-4s, the number of possible customer HCA fanouts is reduced by one. Each link supports definition of multiple CIB CHPIDs, up to 16 per fanout. z10 negotiates to 3 GBps (12x IB-SDR) when connected to a System z9. 3 meters (10 feet) are reserved for internal routing and strain relief.

Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors, including latency through the adapters, cable lengths and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB (12x IB-SDR or 12x IB-DDR) or ISC-3 (1x IB-SDR or 1x IB-DDR), the service times of coupling operations are greater, and the actual throughput may be less than with ICB links or ISC-3 links.

Time synchronization and time accuracy on z10 BC

If you require time synchronization across multiple servers (for example, you have a Parallel Sysplex environment), or you require time accuracy either for one or more System z servers, or you require the same time across heterogeneous platforms (System z, UNIX, AIX, etc.), you can meet these requirements by either installing a Sysplex Timer Model 2 (9037-002) or by implementing Server Time Protocol (STP). The Sysplex Timer Model 2 is the centralized time source that sets the Time-Of-Day (TOD) clocks in all attached servers.
40. IBM System z10 Business Class (z10 BC) Reference Guide

The New Face of Enterprise Computing

April 2009

Table of Contents
IBM System z10 Business Class (z10 BC) Overview, page 3
z/Architecture, page 6
z10 BC, page 11
z10 BC Design and Technology, page 14
z10 BC Model, page 15
z10 BC Performance, page 17
z10 BC I/O Subsystem, page 18
z10 BC Channels and I/O Connectivity, page 19
HiperSockets, page 34
Security, page 36
Cryptography, page 36
On Demand Capabilities, page 41
Reliability, Availability and Serviceability (RAS), page 45
Availability Functions, page 46
Environmental Enhancements, page 48
Parallel Sysplex Cluster Technology, page 49
HMC System Support, page 57
Implementation Services for Parallel Sysplex, page 59
Fiber Quick Connect for FICON LX Environments, page 60
z10 BC Physical Characteristics, page 61
z10 BC Configuration Detail, page 62
Coupling Facility (CF) Level of Support, page 64
Statement of Direction, page 65
Publications, page 66

IBM System z10 Business Class (z10 BC) Overview

In today's world, IT is woven in to almost everything that a business does, and consequently is pivotal to a business. Yet technology leaders are challenged to manage sprawling, complex, distributed infrastructures and the ever-growing flow of data, while remaining highly responsive to the demands of the business. And they must continually evaluate and decide when and h...
41. ...Interconnect Multiplexer (STI-MP) card. There are two types of HCA fanout cards: the HCA2-C, which is copper and is always used to connect to the I/O (IFB-MP) card, and the HCA2-O, which is optical and is used for customer InfiniBand coupling.

The z10 BC has been designed to offer high performance and an efficient I/O structure. The z10 BC ships with a single frame, the A-Frame, which supports the installation of up to four I/O drawers. Each drawer supports up to eight I/O cards, four in front and four in the rear, providing support for up to 480 channels (32 I/O features).

To increase the I/O device addressing capability, the I/O subsystem has been enhanced by introducing support for multiple subchannel sets (MSS), which are designed to allow improved device connectivity for Parallel Access Volumes (PAVs). To support the highly scalable system design, the z10 BC I/O subsystem uses the Logical Channel SubSystem (LCSS), which provides the capability to install up to 512 CHPIDs across the I/O drawers (256 per operating system image). The Parallel Sysplex Coupling Link architecture and technology continues to support high speed links, providing efficient transmission between the Coupling Facility and z/OS systems. HiperSockets provides high speed capability to communicate among virtual servers and logical partitions. HiperSockets is now improved with IP version 6 (IPv6) support; this is based on high speed TCP/IP memory speed transfers and provides value in allowing...
42. ...LAN administrators can configure and maintain the mainframe environment in the same way as they do a non-mainframe environment. With support of the new Layer 2 interface by HiperSockets, packet forwarding decisions are now based upon Layer 2 information instead of Layer 3 information. The HiperSockets device performs automatic MAC address generation and assignment to allow uniqueness within and across logical partitions (LPs) and servers. MAC addresses can also be locally administered. The use of Group MAC addresses for multicast is supported, as well as broadcasts to all other Layer 2 devices on the same HiperSockets network. Datagrams are only delivered between HiperSockets devices that are using the same transport mode (Layer 2 with Layer 2, and Layer 3 with Layer 3). A Layer 2 device cannot communicate directly with a Layer 3 device in another LPAR.

A HiperSockets device can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), by the Ethernet destination MAC address, or both. Filtering can help reduce the amount of inbound traffic being processed by the operating system, helping to reduce CPU utilization. Analogous to the respective Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors, or multicast routers. This is designed to enable the creation of high performance and high availability Link Layer switches between the internal HiperSockets network and...
Linux on System z, service oriented architecture (SOA) and security. It is the preferred replacement for z/VSE V4.1, z/VSE V3 or VSE/ESA. It is designed to protect and leverage existing VSE information assets.

z/TPF
z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large scale memory spaces for increased speed, diagnostics and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency as I/Os or memory management can be eliminated. z/TPF is designed to support:
• 64-bit mode
• Linux development environment (GCC and HLASM for Linux)
• 32 processors/cluster
• Up to 84 engines/processor
• 40,000 modules
• Workload License Charge

Linux on System z
The System z10 BC supports the following Linux on System z distributions (most recent service levels):

Operating System | ESA/390 | z/Architecture
z/OS V1R8, 9 and 10 | No | Yes
z/OS V1R7 with IBM Lifecycle Extension for z/OS V1.7 | No | Yes
Linux on System z: Red Hat RHEL 4 and Novell SUSE SLES 9 | Yes | Yes
Linux on System z: Red Hat RHEL 5 and Novell SUSE SLES 10 | No | Yes
z/TPF V1R1 | No | Yes
TPF V4R1 | Yes | No

z/OS V1.7 support on the z10 BC requires the Lifecycle Extension for z/OS V1.7, 5637-A01. The Lifecycle Extension for z/OS R1.7 zIIP Web De
44. SA Express3 for reduced latency and improved throughput To help reduce latency the OSA Express3 features now have an Ethernet hardware data router what was previ ously done in firmware packet construction inspection and routing is now performed in hardware With direct memory access packets flow directly from host memory to the LAN without firmware intervention OSA Express3 is also designed to help reduce the round trip networking time between systems Up to a 45 reduction in latency at the TCP IP application layer has been measured The OSA Express3 features are also designed to improve throughput for standard frames 1492 byte and jumbo frames 8992 byte to help satisfy the bandwidth require ments of your applications Up to a 4x improvement has been measured compared to OSA Express2 The above statements are based on OSA Express3 perfor mance measurements performed in a laboratory environ ment on a System z10 and do not represent actual field measurements Results may vary Port density or granularity The OSA Express3 features have Peripheral Component Interconnect Express PCI E adapters The previous table identifies whether the feature has 2 or 4 ports for LAN con nectivity Select the density that best meets your business requirements Doubling the port density on a single feature helps to reduce the number of I O slots required for high speed connectivity to the Local Area Network The OSA Express3 10 GbE featur
45. U pacing can help to optimize the utilization of the link for example help keep a 4 Gbps link fully utilized at 50 km and allows channel extenders to work at any dis tance with performance results similar to that experienced when using emulation The requirements for channel extension equipment are simplified with the increased number of commands in flight This may benefit z OS Global Mirror Extended Remote Copy XRC applications as the channel exten sion kit is no longer required to simulate specific channel commands Simplifying the channel extension require ments may help reduce the total cost of ownership of end to end solutions Extended distance FICON is transparent to operating sys tems and applies to all the FICON Express2 and FICON Express4 features carrying native FICON traffic CHPID type FC For exploitation the control unit must support the new IU pacing protocol The channel will default to cur rent pacing values when operating with control units that cannot exploit extended distance FICON 24 Exploitation of extended distance FICON is supported by IBM System Storage DS8000 series Licensed Machine Code LMC level 5 3 1xx xx bundle version 63 1 xx xx or later To support extended distance without performance degra dation the buffer credits in the FICON director must be set appropriately The number of buffer credits required is dependent upon the link data rate 1 Gbps 2 Gbps or 4 Gbps the maxim
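To make the buffer credit relationship concrete, here is a minimal planning sketch in Python. The credits-per-kilometre figures are a commonly quoted rule of thumb for full-size FICON frames and are an assumption of this sketch, not values published in this guide; real configurations should be sized with the director vendor's tools.

import math

# Assumed rule of thumb: director buffer credits needed per km of distance,
# keyed by link data rate in Gbps, for full-size (~2 KB) frames.
CREDITS_PER_KM = {1: 0.5, 2: 1.0, 4: 2.0}

def buffer_credits_needed(distance_km, link_gbps):
    """Rough estimate of buffer credits needed to keep a FICON link
    of the given speed fully utilized at the given distance."""
    return math.ceil(distance_km * CREDITS_PER_KM[link_gbps])

# Example from the text above: keeping a 4 Gbps link busy at 50 km.
print(buffer_credits_needed(50, 4))   # about 100 credits under this rule of thumb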
46. U utilization within a group will not exceed the group s defined capacity Each LPAR in a group can still optionally continue to define an individual LPAR capacity limit The z10 BC has one model with a total of 130 capacity settings available as new build systems and as upgrades from the z9 BC and z890 The z10 BC model is designed with a Central Processor Complex CPC drawer with Single Chip Modules SCM that provides up to 10 Processor Units PUs that can be characterized as either Central Processors CPs IFLs ICFs ZAAPs or zIIPs Some of the significant enhancements in the z10 BC that help bring improved performance availability and function to the platform have been identified The following sections highlight the functions and features of the z10 BC z10 BC Design and Technology The System z10 BC is designed to provide balanced system performance From processor storage to the system s I O and network channels end to end bandwidth is provided and designed to deliver data where and when it is needed The processor subsystem is comprised of one CPC which houses the processor units PUs Storage Controllers SCs memory Self Time Interconnects ST1 InfiniBand IFB and Oscillator External Time Reference ETR The z10 BC design provides growth paths up to a 10 engine system where each of the 10 PUs has full access to all system resources specifically memory and I O The z10 BC uses the same processor chip as th
[Chart: z10 BC capacity settings, showing subcapacity identifiers A01 through Z05 across 1-way to 5-way configurations, plus specialty engines.]

z10 BC Performance
The performance design of the z/Architecture can enable the server to support a new standard of performance for applications through expanding upon a balanced system approach. As CMOS technology has been enhanced to support not only additional processing power but also more PUs, the entire server is modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth than previous generations through internal changes, providing for larger and faster volume of data movement into and out of the server. Support of larger amounts of data within the server required improved management of storage configurations, made available through integration of the operating system and hardware suppo
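As a rough illustration of how those capacity settings enumerate, the short Python sketch below generates identifiers in the A01-through-Z05 style shown in the chart above: 26 subcapacity levels (A, the smallest, through Z, full capacity) across 1-way to 5-way configurations, which yields the 130 capacity settings mentioned earlier. The naming pattern is inferred from the chart for illustration only; it is not an official list.

import string

# 26 subcapacity levels (A..Z) for each of the 1-way through 5-way configurations.
capacity_settings = [
    f"{level}{n_way:02d}"
    for n_way in range(1, 6)
    for level in string.ascii_uppercase
]

print(len(capacity_settings))                       # 130 capacity settings
print(capacity_settings[0], capacity_settings[-1])  # A01 ... Z05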
48. a foundation to help enterprises simplify their network infrastructure while supporting traditional Systems Network Architecture SNA functions such as SNA Network Interconnect SNI Communication Controller for Linux on System z Program Number 5724 J38 is the solution for companies that want to help improve network availability by replacing Token Ring networks and ESCON channels with an Ether net network and integrated LAN adapters on System z10 OSA Express3 or OSA Express2 GbE or 1000BASE T OSA Express for NCP is supported in the z OS z VM ZNSE TPF z TPF and Linux on System z environments OSA Integrated Console Controller The OSA Express Integrated Console Controller OSA ICC support is a no charge function included in Licensed Internal Code LIC on z10 BC z10 EC z9 EC z9 BC z990 and z890 servers It is available via the OSA Express2 and OSA Express 1000BASE T Ethernet features and supports Ethernet attached TN3270E con soles The OSA ICC provides a system console function at IPL time and operating systems support for multiple logical partitions Console support can be used by z OS z OS e ZINM Z VSE z TPF and TPF The OSA ICC also supports local non SNA DFT 3270 and 328x printer emulation for TSO E CICS IMS or any other 3270 application that communicates through VTAM With the OSA Express3 and OSA Express2 1000BASE T Ethernet features the OSA ICC is configured on a port by port basis using the C
49. anced security technologies in the industry helping you to meet rigid regulatory requirements that include encryption solutions access control management and extensive auditing features It also provides disaster recov ery configurations and is designed to deliver 99 999 application availability to help avoid the downside of planned downtime equipment failure or the complete loss of a data center When you need to be more secure more resilient z Can Do IT The z10 processor chip has on board cryp tographic functions Standard clear key integrated crypto graphic coprocessors provide high speed cryptography for protecting data in storage CP Assist for Cryptographic Function CPACF supports DES TDES Secure Hash Algo rithms SHA for up to 512 bits Advanced Encryption Stan dard AES for up to 256 bits and Pseudo Random Number Generation PRNG Audit logging has been added to the new TKE workstation to enable better problem tracking System z is investing in accelerators that provide improved performance for specialized functions The Crypto Express2 feature for cryptography is an example The Crypto Express2 feature can be configured as a secure key coprocessor or for Secure Sockets Layer SSL accel eration The feature includes support for 13 14 15 16 17 18 and 19 digit Personal Account Numbers for stronger protection of data And the tamper resistant cryptographic coprocessor is certified at FIPS 140 2 Level 4 To
bandwidth requirements of your applications. The enhancements to CPACF are exclusive to the System z10 and supported by z/OS, z/VM, z/VSE and Linux on System z.

Configurable Crypto Express2
The Crypto Express2 feature has two PCI-X adapters. Each of the PCI-X adapters can be defined as either a Coprocessor or an Accelerator.

Crypto Express2 Coprocessor, for secure key encrypted transactions (default), is:
• Designed to support security-rich cryptographic functions, use of secure encrypted key values, and User Defined Extensions (UDX)
• Designed to support secure and clear key RSA operations
• Validated, in its tamper-responding hardware and lower-level firmware layers, to the U.S. Government FIPS 140-2 standard, Security Requirements for Cryptographic Modules, at Level 4

Crypto Express2 Accelerator, for Secure Sockets Layer (SSL) acceleration:
• Is designed to support clear key RSA operations
• Offloads compute-intensive RSA public key and private key cryptographic operations employed in the SSL protocol

Crypto Express2 features can be carried forward on an upgrade to the System z10 BC, so users may continue to take advantage of the SSL performance and the configuration capability. The configurable Crypto Express2 feature is supported by z/OS, z/VM, z/VSE and Linux on System z. z/VSE offers support for clear key operations only. Current versions of z/OS, z/VM and Linux on System z offer support for
51. ary resources are active These capabilities allow you to access and manage pro cessing capacity on a temporary basis providing increased flexibility for on demand environments The CoD offerings are built from a common Licensed Internal Code Configu ration Code LIC CC record structure These Temporary Entitlement Records TERs contain the information neces sary to control which type of resource can be accessed and to what extent how many times and for how long and under what condition test or real workload Use of this information gives the different offerings their personality Capacity Back Up CBU Temporary access to dormant processing units PUs intended to replace capacity lost within the enterprise due to a disaster CP capacity or any and all specialty engine types zIIP ZAAP SAP IFL ICF can be added up to what the physical hardware model can contain for up to 10 days for a test activation or 90 days for a true disaster recovery On system z10 the CBU entitlement records contain an expiration date that is established at the time of order and is dependent upon the quantity of CBU years You will now have the capability to extend your CBU entitle ments through the purchase of additional CBU years The number of CBU years per instance of CBU entitlement remains limited to five and fractional years are rounded up to the near whole integer when calculating this limit For instance if there are two years and eigh
data sharing and resource sharing among all the z/OS images in a cluster. All images are also connected to a Sysplex Timer, or by implementing the Server Time Protocol (STP), so that all events can be properly sequenced in time.

Parallel Sysplex Resource Sharing enables multiple system resources to be managed as a single logical resource shared among all of the images. Some examples of resource sharing include JES2 Checkpoint, GRS "star," and Enhanced Catalog Sharing, all of which provide simplified systems management, increased performance and/or scalability.

Although there is significant value in a single footprint and multi-footprint environment with resource sharing, those customers looking for high availability must move on to a database data sharing configuration. With the Parallel Sysplex environment, combined with the Workload Manager and CICS TS, DB2 or IMS, incoming work can be dynamically routed to the z/OS image most capable of handling the work. This dynamic workload balancing, along with the capability to have read/write access to data from anywhere in the Parallel Sysplex cluster, provides scalability and availability. When configured properly, a Parallel Sysplex cluster is designed with no single point of failure and can provide customers with near continuous application availability over planned and unplanned outages. With the introduction of the z10 EC we have the co
53. ayer 2 VLAN Ids associated to a single unit address One group MAC can be associated to multiple unit addresses For additional information view IBM Redbooks IBM System z Connectivity Handbook SG24 5444 at www redbooks ibm com HiperSockets The HiperSockets function also known as internal Queued Direct Input Output iDQIO or internal QDIO is an inte grated function of the z10 BC server that provides users with attachments to up to sixteen high speed virtual Local Area Networks LANs with minimal system and network overhead HiperSockets eliminates the need to utilize I O subsystem operations and the need to traverse an external network connection to communicate between logical partitions in the same z10 BC server Now the HiperSockets internal networks on z10 BC can support two transport modes Layer 2 Link Layer as well as the current Layer 3 Network or IP Layer Traffic can be Internet Protocol IP version 4 or version 6 IPv4 IPv6 or non IP AppleTalk DECnet IPX NetBIOS or SNA HiperSockets devices are now protocol independent and Layer 3 independent Each HiperSockets device has its own Layer 2 Media Access Control MAC address which is designed to allow the use of applications that depend on the existence of Layer 2 addresses such as DHCP servers and firewalls Layer 2 support can help facilitate server consolidation Complexity can be reduced network configuration is simplified and intuitive and
be the last server to support Dynamic ICF expansion. This is consistent with the System z9 hardware announcement 107-190, dated April 18, 2007, "IBM System z9 Enterprise Class (z9 EC) and System z9 Business Class (z9 BC) Delivering greater value for everyone," in which the following Statement of Direction was made: IBM intends to remove the Dynamic ICF expansion function from future System z servers.

The System z10 will be the last server to support connections to the Sysplex Timer (9037). Servers that require time synchronization, such as to support a base or Parallel Sysplex, will require Server Time Protocol (STP). STP has been available since January 2007 and is offered on the System z10, System z9, and zSeries 990 and 890 servers.

ESCON channels to be phased out: It is IBM's intent for ESCON channels to be phased out. System z10 EC and System z10 BC will be the last servers to support greater than 240 ESCON channels.

ICB-4 links to be phased out: Restatement of SOD from RFA46507: IBM intends to not offer Integrated Cluster Bus-4 (ICB-4) links on future servers. IBM intends for System z10 to be the last server to support ICB-4 links.

Publications
The following Redbook publications are available now:
z10 BC Technical Overview, SG24-7632
Hardware Management Console Operations Guide V2.10.1, SC28-6873
z10 BC Technical Guide, SG24-7516
IOCP User's Guide, SB10-7037
Maintenance Information for Fiber System z Conn
55. ble through ICSF function calls made in the PCI X Cryptographic Adapter segment 3 Common Cryptographic Architecture CCA code This is supported by z OS and by z VM for guest exploita tion Support for RSA keys up to 4096 bits The RSA services in the CCA API are extended to sup port RSA keys with modulus lengths up to 4096 bits The services affected include key generation RSA based key management digital signatures and other functions related to these Refer to the ICSF Application Programmers Guide SA22 7522 for additional details 38 Cryptographic enhancements to Crypto Express2 and Crypto Express2 1P Dynamically add crypto to a logical partition Today users can preplan the addition of Crypto Express2 features to a logical partition LP by using the Crypto page in the image profile to define the Cryptographic Candidate List Cryptographic Online List and Usage and Control Domain Indexes in advance of crypto hardware installation With the change to dynamically add crypto to a logical partition changes to image profiles to support Crypto Express2 features are available without outage to the logical partition Users can also dynamically delete or move Crypto Express2 features Preplanning is no longer required This enhancement is supported by z OS z VM for guest exploitation Z VSE and Linux on System z Secure Key AES The Advanced Encryption Standard AES is a National Institute of Standards and T
56. by reducing unscheduled scheduled and planned outages Planned outages are further designed to be reduced by reducing preplanning requirements z10 BC preplanning improvements are designed to avoid planned outages and include Reduce pre planning to avoid POR Fixed HSA amount Dynamic I O enabled by default Add Logical Channel Subsystem LCSS Change LCSS Subchannel Sets Add Delete logical partitions Reduce pre planning to avoid LPAR deactivate Change partition logical processor configuration Change partition crypto coprocessor configuration CoD Flexible activation deactivation Elimination of unnecessary CBU passwords Enhanced Driver Maintenance EDM upgrades Multiple from sync point support Improved control of channel LIC levels Plan ahead memory Concurrent I O drawer add repair Additionally several service enhancements have also been designed to avoid unscheduled outages and include continued focus on firmware quality reduced chip count on Single Chip Module SCM and memory subsystem improvements In the area of scheduled outage enhance ments include redundant 100Mb Ethernet service network with VLAN rebalance of PSIFB and I O fanouts and single processor core sparing and checkstop Exclusive to the System z10 is the ability to hot swap ICB 4 and InfiniBand hub cards Enterprises with IBM System z9 BC and IBM z890 may upgrade to any z10 Business Class mod
blocking algorithm is adjusted to maximize throughput. The z/OS TCP/IP stack can dynamically detect the application requirements, making the necessary adjustments to the blocking algorithm. The monitoring of the application and the blocking algorithm adjustments are made in real time, dynamically adjusting the application's LAN performance. System administrators can authorize the z/OS TCP/IP stack to enable a dynamic setting, which was previously a static setting. The z/OS TCP/IP stack is able to help determine the best setting for the currently running application based on system configuration, inbound workload volume, CPU utilization and traffic patterns.

Link aggregation for z/VM in Layer 2 mode
z/VM Virtual Switch-controlled (VSWITCH-controlled) link aggregation (IEEE 802.3ad) allows you to dedicate an OSA-Express2 or OSA-Express3 port to the z/VM operating system when the port is participating in an aggregated group and is configured in Layer 2 mode. Link aggregation (trunking) is designed to allow you to combine multiple physical OSA-Express3 and OSA-Express2 ports of the same type (for example 1GbE or 10GbE) into a single logical link for increased throughput and for nondisruptive failover in the event that a port becomes unavailable.
• Aggregated link viewed as one logical trunk and containing all of the Virtual LANs (VLANs) required by the LAN segment
• Load balance communications across several link
58. coupling communications for IMS Shared Queue and WebSphere MQ Shared Queue environments The Coupling Facility notifies only one connector in a sequential fashion If the shared queue is processed within a fixed period of time the other connectors do not need to be notified saving the cost of the false scheduling If a shared queue is not read within the time limit then the other connectors are notified as they were prior to CFCC Level 16 When migrating CF levels lock list and cache structure sizes might need to be increased to support new function For example when you upgrade from CFCC Level 15 to Level 16 the required size of the structure might increase This adjustment can have an impact when the system allocates structures or copies structures from one coupling facility to another at different CF levels The coupling facility structure sizer tool can size struc tures for you and takes into account the amount of space needed for the current CFCC levels Access the tool at http www ibm com servers eserver zseries cfsizer CFCC Level 16 is exclusive to System 210 and is sup ported by z OS and z VM for guest exploitation 50 Coupling Facility Configuration Alternatives IBM offers multiple options for configuring a functioning Coupling Facility e Standalone Coupling Facility The standalone CF provides the most robust CF capability as the CPC is wholly dedicated to running the CFCC microcode all of the proc
59. d in the rule array for both CSNBCSG and CSNBCSV to indicate that the PAN data is comprised of 14 15 17 or 18 PAN digits respectively Support for 13 through 19 digit PANs is exclusive to System z10 and is offered by z OS and z VM for guest exploitation TKE 5 3 workstation The Trusted Key Entry TKE workstation and the TKE 5 3 level of Licensed Internal Code are optional features on the System z10 BC The TKE 5 3 Licensed Internal Code LIC is loaded on the TKE workstation prior to ship ment The TKE workstation offers security rich local and remote key management providing authorized persons a method of operational and master key entry identification exchange separation and update The TKE workstation supports connectivity to an Ethernet Local Area Network LAN operating at 10 or 100 Mbps Up to ten TKE work stations can be ordered 39 Enhancement with TKE 5 3 LIC The TKE 5 3 level of LIC includes support for the AES encryption algorithm adds 256 bit master keys and includes the master key management functions required to load or generate AES master keys to cryptographic copro cessors in the host Also included is an imbedded screen capture utility to permit users to create and to transfer TKE master key entry instructions to diskette or DVD Under Service Manage ment a Manage Print Screen Files utility will be available to all users The TKE workstation and TKE 5 3 LIC are available on the z10 EC
60. determine how many tokens you need to purchase for different acti vation scenarios Resource tokens within an On Off CoD record may also be replenished For more information on the use and ordering of resource tokens refer to the Capacity on Demand Users Guide SC28 6871 Capacity Provisioning Hardware working with software is critical The activation of On Off CoD on z10 EC can be simplified or automated by using z OS Capacity Provisioning available with z OS V1 10 and z OS V1 9 This capability enables the monitor ing of multiple systems based on Capacity Provisioning and Workload Manager WLM definitions When the defined conditions are met z OS can suggest capacity changes for manual activation from a z OS console or the system can add or remove temporary capacity automatically and with out operator intervention 210 BC Can Do IT better z OS Capacity provisioning allows you to set up rules defining the circumstances under which additional capac ity should be provisioned in order to fulfill a specific busi ness need The rules are based on criteria such as a specific application the maximum additional capacity that should be activated time and workload conditions This support provides a fast response to capacity changes and ensures sufficient processing power will be available with the least possible delay even if workloads fluctuate An installed On Off CoD record is a necessary prerequisite for automated control of temp
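The sketch below is a conceptual illustration only; it is not the z/OS Capacity Provisioning policy syntax, and the workload name and thresholds are made up. It simply models the kind of rule described above: a workload to watch, a cap on the temporary capacity the rule may add, a time window, and a utilization condition that must be met before additional capacity is suggested.

from dataclasses import dataclass
from datetime import time

@dataclass
class ProvisioningRule:
    workload: str          # hypothetical workload or service class to watch
    max_extra_msu: int     # cap on the temporary capacity this rule may add
    window_start: time     # time-of-day window in which the rule applies
    window_end: time
    util_threshold: float  # CPU utilization that triggers a capacity suggestion

    def capacity_to_add(self, now, observed_util):
        """Return how much temporary capacity this rule would request, in MSU."""
        in_window = self.window_start <= now <= self.window_end
        if in_window and observed_util > self.util_threshold:
            return self.max_extra_msu
        return 0

rule = ProvisioningRule("ONLINE-BANKING", max_extra_msu=50,
                        window_start=time(8, 0), window_end=time(18, 0),
                        util_threshold=0.90)
print(rule.capacity_to_add(time(10, 30), observed_util=0.95))  # 50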
61. dium enter prises that give you a whole new world of capabilities to run modern applications Ideally suited in a Dynamic Infratructure this competitively priced server delivers unparalleled qualities of service to help manage growth and reduce cost and risk in your business The z10 BC further extends the leadership of System z by delivering expanded granularity and optimized scalability for growth enriched virtualization technology for consoli dation of distributed workloads improved availability and security to help increase business resiliency and just in time management of resources The 210 BC is at the core of the enhanced System z platform and is the new face of System z The z10 BC has the machine type of 2098 with one model E10 offering between one to ten configurable Processor Units PUs This model design offers increased flexibility over the two model IBM System z9 Business Class z9 BC by delivering seamless growth within a single model both temporary and permanent The 210 BC delivers improvements in both the granular increments and total scalability compared to previous System z midrange servers achieved by both increasing the performance of the individual PU as well as increasing E10 is gned to provide up to 1 5 times the total system capac the number of PUs per server The 210 BC Model desi ity for general purpose processing and over 40 more f configurable processors than the z9 BC Model S07
62. duled scheduled and planned outages Planned outages are further designed to be reduced with the introduction of concurrent I O drawer add and eliminating pre planning requirements These features are designed to reduce the need for a Power on Reset POR and help eliminate the need to deactivate activate IPL a logical partition RAS Design Focus High Availability HA The attribute of a system designed to provide service during defined periods at acceptable or agreed upon levels and masks UNPLANNED OUTAGES from end users It employs fault tolerance auto mated failure detection recovery bypass reconfiguration testing problem and change management Continuous Operations CO The attribute of a system designed to continuously operate and mask PLANNED OUTAGES from end users It employs non disruptive hard ware and software changes non disruptive configuration and software coexistence Continuous Availability CA The attribute of a system designed to deliver non disruptive service to the end user 7 days a week 24 HOURS A DAY there are no planned or unplanned outages It includes the ability to recover from a site disaster by switching computing to a second site Availability Functions With the z10 BC significant steps have been taken in the area of server availability with a focus on reducing pre planning requirements Pre planning requirements are minimized by delivering and reserving 8 GB for HSA so the maxim
63. e Have a mechanism to isolate a QDIO data connection on an OSA port ensuring all internal OSA routing between the isolated QDIO data connections and all other shar ing QDIO data connections is disabled In this state only external communications to and from the isolated QDIO data connection are allowed If you choose to deploy an external firewall to control the access between hosts on an isolated virtual switch and sharing LPARs then an external firewall needs to be configured and each indi vidual host and or LPAR must have a route added to their TCP IP stack to forward local traffic to the firewall 29 Internal routing can be disabled on a per QDIO connec tion basis This support does not affect the ability to share an OSA Express port Sharing occurs as it does today but the ability to communicate between sharing QDIO data connections may be restricted through the use of this sup port You decide whether an operating system s or z VM s Virtual Switch OSA Express QDIO connection is to be non isolated default or isolated QDIO data connection isolation applies to the device statement defined at the operating system level While an OSA Express CHPID may be shared by an operating system the data device is not shared QDIO data connection isolation applies to the z VM 5 3 and 5 4 with PTFs environment and to all of the OSA Express3 and OSA Express2 features CHPID type OSD on System z10 and to the OSA Express2 features on S
64. e critical in achieving high levels of transaction throughput and enabling resources inside and outside the server to maximize application requirements The 210 BC has a host bus interface with a link data rate of 6 GB using the industry standard InfiniBand protocol to help satisfy requirements for coupling ICF and server to server connectivity cryptography Crypto Express2 with secure coprocessors and SSL transactions I O ESCON FICON or FCP and LAN OSA Express3 Gigabit 10 Gigabit and 1000BASE T Ethernet features High Perfor mance FICON for System z ZHPF also brings new levels of performance when accessing data on enabled storage devices such as the IBM System Storage DS8000 PUs defined as Internal Coupling Facilities ICFs Inte grated Facility for Linux IFLs System z10 Application Assist Processor zAAPs and System z10 Integrated Infor mation Processor zIIPs are no longer grouped together in one pool as on the IBM eServer zSeries 890 z890 but are grouped together in their own pool where they can be managed separately The separation significantly simpli fies capacity planning and management for LPAR and can have an effect on weight management since CP weights and ZAAP and zIIP weights can now be managed sepa rately Capacity BackUp CBU features are available for IFLs ICFs zAAPs and ZIIPs LAN connectivity has been enhanced with the introduction of the third generation of Open Systems Adapter Expr
65. e z10 EC relying only on 3 out of 4 functional cores per chip Each chip is individually packaged in an SCM Four SCMs will be plugged in the processor board providing the 12 PUs f or the design Clock frequency will be 3 5 GHz There are three active cores per PU an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data Each PU also has an L1 5 cache This cache is 3 MB in size Each L1 cache has a Translation Look aside Buffer TLB of 512 entries associated with it The PU which uses a high frequency z Architecture microprocessor core is built on CMOS 11S chip technology and has a cycle time of approximately 0 286 nanoseconds The PU chip includes data compression and crypto graphic functions Hardware data compression can play a significant role in improving performance and saving costs over doing compression in software Standard clear key cryptographic processors right on the processor translate to high speed cryptography for protecting data in storage integrated as part of the PU 14 Speed and precision in numerical computing are important for all our customers The z10 BC offers improvements for decimal floating point instructions because each z10 processor chip has its own hardware decimal floating point unit designed to improve performance over that provided by the System z9 Decimal calculations are often used in financial applications and those done using other floating point facilit
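A short, self-contained example of why decimal arithmetic matters for the financial calculations mentioned above: binary floating point cannot represent many decimal fractions exactly, which is exactly the class of problem the on-chip decimal floating point unit addresses in hardware. Python's decimal module is used here purely as a software stand-in to show the difference.

from decimal import Decimal

binary_sum = 0.10 + 0.20                          # binary floating point
decimal_sum = Decimal("0.10") + Decimal("0.20")   # decimal arithmetic

print(binary_sum == 0.30)              # False: the result is 0.30000000000000004
print(decimal_sum == Decimal("0.30"))  # True: cents stay exact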
66. eature has four independent channels ports and each can be configured to carry native FICON traffic or Fibre Channel SCSI traffic LX and SX cannot be inter mixed on a single feature The maximum number of FICON Express2 features is 20 using four I O drawers FICON Express Channels The z10 BC also supports carrying forward FICON Express LX and SX channels from z9 BC and z990 each channel operating at 1 or 2 Gb sec auto negotiated Each FICON Express feature has two independent channels ports The System z10 BC Model E10 is limited to 32 features any combination of FICON Express4 FICON Express2 and FICON Express LX and SX features The FICON Express4 FICON Express2 and FICON Express feature conforms to the Fibre Connection FICON architecture and the Fibre Channel FC architecture pro viding connectivity between any combination of servers directors switches and devices in a Storage Area Network SAN Each of the four independent channels FICON Express only supports two channels per feature is capa ble of 1 Gigabit per second Gb sec 2 Gb sec or 4 Gb sec only FICON Express4 supports 4 Gbps depend ing upon the capability of the attached switch or device The link speed is auto negotiated point to point and is transparent to users and applications Not all switches and devices support 2 or 4 Gb sec link data rates FICON Express4 and FICON Express2 Performance Your enterprise may benefit from FICON Express4 and
67. echnology specification for the encryption of electronic data It is expected to become the accepted means of encrypting digital information includ ing financial telecommunications and government data AES is the symmetric algorithm of choice instead of Data Encryption Standard DES or Triple DES for the encryp tion and decryption of data The AES encryption algorithm will be supported with secure encrypted keys of 128 192 and 256 bits The secure key approach similar to what is supported today for DES and TDES provides the ability to keep the encryption keys protected at all times including the ability to import and export AES keys using RSA public key technology Support for AES encryption algorithm includes the master key management functions required to load or generate AES master keys update those keys and re encipher key tokens under a new master key Support for 13 thru 19 digit Personal Account Numbers Credit card companies sometimes perform card security code computations based on Personal Account Number PAN data Currently ICSF callable services CSNBCSV VISA CVV Service Verify and CSNBCSG VISA CVV Service Generate are used to verify and to generate a VISA Card Verification Value CVV or a MasterCard Card Verification Code CVC The ICSF callable services cur rently support 13 16 and 19 digit PAN data To provide additional flexibility new keywords PAN 14 PAN 15 PAN 17 and PAN 18 are implemente
System z Connectivity Handbook, SG24-5444
Maintenance Information for Fiber Optic Links, SY27-2597
Server Time Protocol Planning Guide, SG24-7280
OSA-Express Customer's Guide, SA22-7935
Server Time Protocol Implementation Guide, SG24-7281
OSA-ICC User's Guide, SA22-7990
Planning for Fiber Optic Links, GA23-0367

The following publications are shipped with the product and available in the Library section of Resource Link:
PR/SM Planning Guide, SB10-7153
SCSI IPL Machine Loader Messages, SC28-6839
z10 BC Installation Manual, GC28-6874
Service Guide for HMCs and SEs, GC28-6861
z10 BC Service Guide, GC28-6878
Service Guide for Trusted Key Entry Workstations, GC28-6862
z10 BC Safety Inspection Guide, GC28-6877
System Safety Notices, G229-9054
Standalone IOCP User's Guide, SB10-7152
Support Element Operations Guide Version 2.10.0, SC28-6879

The following publications are available in the Library section of Resource Link:
System z Functional Matrix, ZSW0-1335
Agreement for Licensed Machine Code, SC28-6872
TKE PCIX Workstation User's Guide, SA23-2211
z10 BC Parts Catalog, GC28-6876
z10 BC System Overview, SA22-1085
Application Programming Interfaces, SB10-7030
Capacity on Demand User's Guide, SC28-6871
z10 BC Installation Manual for Physical Planning (IMPP), GC28-6875
CHPID Mapping Tool User's Guide, GC28-6825
Common Information Model (CIM) Management Interface, SB10-7154

Publications for System z10 Business Class can be obtained at Resource Link by accessing the following Web
69. eed to provide the same accurate time across heterogeneous platforms in an enter prise The STP design has been enhanced to include support for a Simple Network Time Protocol SNTP client on the Support Element By configuring an NTP server as the 54 STP External Time Source ETS the time of an STP only Coordinated Timing Network CTN can track to the time provided by the NTP server and maintain a time accuracy of 100 milliseconds Note NTP client support has been available since October 2007 Enhanced accuracy to an External Time Source The time accuracy of an STP only CTN has been improved by adding the capability to configure an NTP server that has a pulse per second PPS output signal as the ETS device This type of ETS device is available worldwide from sev eral vendors that provide network timing solutions STP has been designed to track to the highly stable accurate PPS signal from the NTP server and maintain an accuracy of 10 microseconds as measured at the PPS input of the System z server A number of variables such as accuracy of the NTP server to its time source GPS radio signals for example and cable used to connect the PPS signal will determine the ultimate accuracy of STP to Coordinated Universal Time UTC In comparison the IBM Sysplex Timer is designed to maintain an accuracy of 100 microseconds when attached to an ETS with a dial out time service or an NTP server without PPS it is a PPS ou
70. el Model upgrades within the z10 BC are concurrent If you desire a consolidation platform for your mainframe and Linux capable applications you can add capacity and even expand your current application workloads in a cost effec tive manner If your traditional and new applications are growing you may find the z10 BC a good fit with its base qualities of service and its specialty processors designed for assisting with new workloads Value is leveraged with improved hardware price performance and System z10 BC software pricing strategies The z10 BC is specifically designed and optimized for full z Architecture compatibility New features enhance enterprise data serving performance industry leading virtualization capabilities energy efficiency at system and data center levels The z10 BC is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in 13 mixed and unpredictable workload environments provid ing scalability high availability and Qualities of Service QoS to emerging applications such as WebSphere Java and Linux With the logical partition LPAR group capacity limit on z10 BC z10 EC z9 EC and z9 BC you can now specify LPAR group capacity limits allowing you to define each LPAR with its own capacity and one or more groups of LPARs on a server This is designed to allow z OS to manage the groups in such a way that the sum of the LPARs CP
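As a conceptual sketch of the group capacity behavior described here, the Python fragment below is an assumed simplification, not the actual WLM algorithm: demand above the group's defined capacity is scaled back across the members, and any individual LPAR capacity limit is still honored. The LPAR names and numbers are hypothetical.

def cap_group(demands, group_limit, lpar_caps=None):
    """Return per-LPAR capacity after applying a shared group limit (simplified)."""
    lpar_caps = lpar_caps or {}
    total = sum(demands.values())
    scale = min(1.0, group_limit / total) if total else 1.0
    return {lpar: min(demand * scale, lpar_caps.get(lpar, float("inf")))
            for lpar, demand in demands.items()}

# Three LPARs asking for 120 MSU in total against a 100 MSU group limit,
# with TEST1 additionally capped at 10 MSU on its own.
print(cap_group({"PROD1": 60, "PROD2": 40, "TEST1": 20},
                group_limit=100, lpar_caps={"TEST1": 10}))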
71. end and reuse them well beyond their original scope of design The ultimate implementation of flexibility for today s On Demand Business is a Service Oriented Architecture an IT architectural style that allows you to design your applications to solve real business problems The z10 BC along with the inherent strengths and capa bilities of multiple operating system choices and innovative System z software solutions from WebSphere CICS Rational and Lotus strengthen the flexibility of doing SOA and strengthen System z as an enterprise hub Special workloads Specialty engines affordable technology The z10 BC continues the long history of providing inte grated technologies to optimize a variety of workloads The use of specialty engines can help users expand the use of the mainframe for new workloads while helping to lower the cost of ownership The IBM System z specialty engines can run independently or complement each other For example the AAP and ZIIP processors enable you to purchase additional processing capacity exclusively for specific workloads without affecting the MSU rating of the IBM System z model designation This means that adding a specialty engine will not cause increased charges for IBM System z software running on general purpose pro cessors in the server In order of introduction The Internal Coupling Facility ICF processor was intro duced to help cut the cost of Coupling Facility functions by reduc
72. es support Long Reach LR using 9 micron single mode fiber optic cabling and Short Reach SR using 50 or 62 5 micron multimode fiber optic cabling The connector is new it is now the small form factor LC Duplex connector Previously the SC Duplex connector was supported for LR The LC Duplex connector is common with FICON ISC 3 and OSA Express2 Gigabit Ethernet LX and SX The OSA Express3 features are exclusive to System 210 There are operating system dependencies for exploitation of two ports in OSD mode per PCI E adapter Whether it is a 2 port or a 4 port feature only one of the ports will be visible on a PCI E adapter if operating system exploitation updates are not installed OSA Express3 Ethernet features Summary of benefits OSA Express3 10 GbE LR single mode fiber 10 GbE SR multimode fiber GbE LX single mode fiber GbE SX multimode fiber and 1000BASE T copper are designed for use in high speed enterprise backbones for local area network connectivity between campuses to connect server farms to System z10 and to consolidate file servers onto System z10 With reduced latency improved throughput and up to 96 ports of LAN connectivity when all are 4 port features 24 features per server you can do more with less The key benefits of OSA Express3 compared to OSA Express2 are e Reduced latency up to 45 reduction and increased throughput up to 4x for applications e More physical connectivity
73. ess OSA Express3 This new family of LAN adapters have been introduced to reduce latency and overhead deliver double the port density of OSA Express2 and provide increased throughput The z10 BC continues to support OSA Express2 1000BASE T and GbE Ethernet features and supports IP version 6 IPv6 on HiperSockets While OSA Express2 OSN OSA for NCP is still available on System z10 BC to support the Channel Data Link Control CDLC protocol the OSA Express3 will also provide this function Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic FCP management facility for z VM and Linux for System z FCP security improvements and Linux support for HiperSockets IPv6 STP enhancements include the additional support for NTP clients and STP over InfiniBand links Like the System z9 BC the z10 BC offers a configurable Crypto Express2 feature with PCI X adapters that can 12 be individually configured as a Secure coprocessor or an accelerator for SSL the TKE workstation with optional Smart Card Reader and provides the following CP Assist for Cryptographic Function CPACF e DES TDES AES 128 AES 192 AES 256 e SHA 1 SHA 224 SHA 256 SHA 384 SHA 512 e Pseudo Random Number Generation PRNG z10 BC is designed to deliver the industry leading Reli ability Availability and Serviceability RAS customers expect from System z servers RAS is designed to reduce all sources of outages
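Applications normally reach CPACF through the operating system's cryptographic services rather than calling the hardware directly, but the algorithms involved are the standard ones listed above. The short Python sketch below uses hashlib as a stand-in to show the kind of SHA-2 digests (up to SHA-512) that CPACF is designed to accelerate; it does not itself exercise CPACF.

import hashlib

payload = b"customer record to be integrity-protected"

print(hashlib.sha256(payload).hexdigest())   # SHA-256 digest
print(hashlib.sha512(payload).hexdigest())   # SHA-512 digest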
74. essors links and memory are for CF use only A natural benefit of this characteristic is that the standalone CF is always failure isolated from exploiting 2 OS software and the server that z OS is running on for environments without System Managed CF Structure Duplexing The z10 BC with capacity indicator AOO is used for systems with ICF s only There are no software charges associated with such a configuration Internal Coupling Facility ICF Customers consider ing clustering technology can get started with Parallel Sysplex technology at a lower cost by using an ICF instead of purchasing a standalone Coupling Facility An ICF feature is a processor that can only run Coupling Facility Control Code CFCC in a partition Since CF LPARs on ICFs are restricted to running only CFCC there are no IBM software charges associated with ICFs ICFs are ideal for Intelligent Resource Director and resource sharing environments as well as for data shar ing environments where System Managed CF Structure Duplexing is exploited System Managed CF Structure Duplexing System Managed Coupling Facility CF Structure Duplex ing provides a general purpose hardware assisted easy to exploit mechanism for duplexing CF structure data This provides a robust recovery mechanism for failures such as loss of a single structure or CF or loss of connectivity to a single CF through rapid failover to the backup instance of the duplexed structure pair CFCC Level 1
75. et SX The OSA Express3 Gigabit Ethernet GbE short wave length SX feature has four ports Two ports reside on a PCle adapter and share a channel path identifier CHPID There are two PCle adapters per feature Each port sup ports attachment to a one Gigabit per second Gbps Eth ernet Local Area Network LAN OSA Express3 GbE SX supports CHPID types OSD and OSN It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs OSA Express3 2P Gigabit Ethernet SX The OSA Express3 2P Gigabit Ethernet GbE short wavelength SX feature has two ports which reside on a single PCle adapter and share one channel path identifier CHPID Each port supports attachment to a one Gigabit per second Gbps Ethernet Local Area Network LAN OSA Express3 GbE SX supports CHPID types OSD and OSN It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs Four port exploitation on OSA Express3 GbE SX and LX For the operating system to recognize all four ports on an OSA Express3 Gigabit Ethernet feature a new release and or PTF is required If software updates are not applied only two of the four ports will be visible to the operating system Activating all four ports on an OSA Express3 feature pro vides you with more physical connectivity to service the network and reduces the number of required resources I O slots I O cages fewer CHPIDs to define and manage Four port
76. existing CBU customers via the IBM Customer Agreement Amend ment for IBM System z Capacity Backup Upgrade Tests in the US this is form number 2125 8145 This amendment can be executed at any time and separate from any par ticular order Capacity for Planned Event CPE Temporary access to dormant PUs intended to replace capacity lost within the enterprise due to a planned event such as a facility upgrade or system relocation This offering is available only on the System z10 CPE is similar to CBU in that it is intended to replace lost capacity however it differs in its scope and intent Where CBU addresses disaster recovery scenarios that can take up to three months to remedy CPE is intended for short duration events lasting up to three days maximum Each CPE record once activated gives you access to all dormant PUs on the machine that can be configured in any combination of CP capacity or specialty engine types zIIP ZAAP SAP IFL ICF On Off Capacity on Demand On Off CoD Temporary access to dormant PUs intended to augment the existing capacity of a given system On Off CoD helps you contain workload spikes that may exceed permanent capacity such that Service Level Agreements cannot be met and business conditions do not justify a permanent upgrade An On Off CoD record allows you to temporarily add CP capacity or any and all specialty engine types zIIP ZAAP SAP IFL ICF up to the following limits The quantity of
77. exploitation is supported by z OS z VM z VSE Z TPF and Linux on System z OSA Express3 1000BASE T Ethernet The OSA Express3 1000BASE T Ethernet feature has four ports Two ports reside on a PCle adapter and share a channel path identifier CHPID There are two PCle adapters per feature Each port supports attachment to either a 1OBASE T 10 Mbps 100BASE TX 100 Mbps or 1000BASE T 1000 Mbps or 1 Gbps Ethernet Local Area Network LAN The feature supports auto negotiation and 28 automatically adjusts to 10 100 or 1000 Mbps depending upon the LAN When the feature is set to autonegotiate the target device must also be set to autonegotiate The feature supports the following settings 10 Mbps half or full duplex 100 Mbps half or full duplex 1000 Mbps 1 Gbps full duplex OSA Express3 1000BASE T Ethernet supports CHPID types OSC OSD OSE and OSN It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs When configured at 1 Gbps the 1000BASE T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode CHPID type OSD OSA Express3 2P 1000BASE T Ethernet The OSA Express3 2P 1000BASE T Ethernet feature has two ports which reside on a single PCle adapter and share one channel path identifier CHPID Each port supports attachment to either a 1OBASE T 10 Mbps 100BASE TX 100 Mbps or 1OOOBASE T 1000 Mbps or 1 Gbps Ether net Local Area Network
78. failure that affects both servers of a two server STP only Coordinated Timing Network CTN To enable this function the customer has to select an option that will assure than no other servers can join the two server CTN Previously if both the Preferred Time Server PTS and the Backup Time Server BTS experienced a simultaneous power outage site failure or both experienced a POR reinitialization of time and special roles PTS BTS and CTS was required With this enhancement you will no longer need to reinitialize the time or reassign the roles for these events Preview Improved STP System Management with new z OS Messaging This is a new function planned to generate z OS messages when various hardware events that affect the External Time Sources ETS configured for an STP only CTN occur This may improve problem deter mination and correction times Previously the messages were generated only on the Hardware Management Con sole HMC The ability to generate z OS messages will be supported on IBM System z10 and System z9 servers with z OS 1 11 with enabling support rolled back to z OS 1 9 in the second half of 2009 The following Server Time Protocol STP enhancements are available on the z10 EC z10 BC z9 EC and 210 BC The prerequisites are that you install STP feature and that the latest MCLs are installed for the applicable driver NTP client support This enhancement addresses the requirements of customers who n
79. hannel Path Identifier CHPID type OSC Each port can support up to 120 console session connections can be shared among logical partitions using Multiple Image Facility MIF and can be spanned across multiple Channel Subsystems CSSs Remove L2 L3 LPAR to LPAR Resiriction OSA port sharing between virtual switches can communi cate whether the transport mode is the same Layer 2 to Layer 2 or different Layer 2 to Layer 3 This enhance ment is designed to allow seamless mixing of Layer 2 and Layer 3 traffic helping to reduce the total cost of network ing Previously Layer 2 and Layer 3 TCP IP connections through the same OSA port CHPID were unable to com municate with each other LPAR to LPAR using the Multiple Image Facility MIF This enhancement is designed to facilitate a migration from Layer 3 to Layer 2 and to continue to allow LAN administrators to configure and manage their mainframe network topology using the same techniques as their non mainframe topology OSA SF Virtual MAC and VLAN id Display Capability The Open Systems Adapter Support Facility OSA SF has the capability to support virtual Medium Access Control MAC and Virtual Local Area Network VLAN identifica tions IDs associated with OSA Express2 feature config ured as a Layer 2 interface This information will now be displayed as a part of an OSA Address Table OAT entry This information is independent of IPv4 and IPv6 formats There can be multiple L
80. have the capability to perform protocol or Layer 3 independent that is not IP only With the Layer 2 interface packet forwarding decisions are based upon Link Layer Layer 2 information instead of Network Layer Layer 3 information Each operating system attached to the Layer 2 interface uses its own MAC address This means the traffic can be IPX NetBIOS SNA IPv4 or IPv6 An OSA Express3 feature can filter inbound datagrams by Virtual Local Area Network identification VLAN ID IEE 802 1q and or the Ethernet destination MAC address Fil tering can reduce the amount of inbound traffic being pro cessed by the operating system reducing CPU utilization Layer 2 transport mode is supported by z VM and Linux on System z 31 OSA Layer 3 Virtual MAC for z OS To simplify the infrastructure and to facilitate load balanc ing when an LPAR is sharing the same OSA Media Access Control MAC address with another LPAR each operating system instance can now have its own unique logical or virtual MAC VMAC address All IP addresses associ ated with a TCP IP stack are accessible using their own VMAC address instead of sharing the MAC address of an OSA port This applies to Layer 3 mode and to an OSA port shared among Logical Channel Subsystems This support is designed to Improve IP workload balancing Dedicate a Layer 3 VMAC to a single TCP IP stack e Remove the dependency on Generic Routing Encapsu la
81. ic connectivity enables mul tiple FCP switches directors on a fabric to share links and therefore provides improved utilization of inter site con nected resources and infrastructure 22 FICON and FCP for connectivity to disk tape and printers High Performance FICON improvement in performance and RAS Enhancements have been made to the z Architecture and the FICON interface architecture to deliver optimiza tions for online transaction processing OLTP workloads When exploited by the FICON channel the z OS operating system and the control unit High Performance FICON for System z ZHPF is designed to help reduce overhead and improve performance Additionally the changes to the architectures offer end to end system enhancements to improve reliability avail ability and serviceability RAS ZHPF channel programs can be exploited by the OLTP 1 O workloads DB2 VSAM PDSE and zFS which transfer small blocks of fixed size data 4K blocks ZHPF implementation by the DS8000 is exclusively for I Os that transfer less than a single track of data The maximum number of I Os is designed to be improved up to 100 for small data transfers that can exploit ZHPF Realistic production workloads with a mix of data transfer sizes can see up to 30 to 70 of FICON Os utilizing ZHPF resulting in up to a 10 to 30 savings in channel utilization Sequential Os transferring less than a single track size for example 12x4k byte
82. ients using a User Defined Extension UDX of the Common Cryptographic Architecture should contact their UDX provider for an application upgrade before order ing a new System z10 BC machine or before planning to migrate or activate a UDX application to firmware driver level 73 and higher e The Crypto Express2 feature is supported on the z9 BC and can be carried forward on an upgrade to the System z10 BC e You may continue to use TKE workstations with 5 3 licensed internal code to control the System z10 BC e TKE 5 0 and 5 1 workstations 0839 and 0859 may be used to control z9 EC z9 BC 2890 and IBM eServer zSeries 990 z990 servers Remote Loading of Initial ATM Keys Typically a new ATM has none of the financial institution s keys installed Remote Key Loading refers to the pro cess of loading Data Encryption Standard DES keys to Automated Teller Machines ATMs from a central admin istrative site without the need for personnel to visit each machine to manually load DES keys This has been done by manually loading each of the two clear text key parts individually and separately into ATMs Manual entry of keys is one of the most error prone and labor intensive activities that occur during an installation making it expen sive for the banks and financial institutions 40 Remote Key Loading Benefits e Provides a mechanism to load initial ATM keys without the need to send technical staff to ATMs Reduces downtime due to
83. ies have typically been performed by software through the use of libraries With a hardware decimal floating point unit some of these calculations may be done directly and accelerated The design of the 210 BC provides the flexibility to con figure the PUs for different uses There are 12 PUs per system two are designated as System Assist Processors SAPs standard per system The remaining 10 PUs are available to be characterized as either CPs ICF proces sors for Coupling Facility applications or IFLs for Linux applications and z VM hosting Linux as a guest System z10 Application Assist Processors zAAPs System z10 Integrated Information Processors zIIPs or as optional SAPs and provide you with tremendous flexibility in estab lishing the best system for running applications The z10 BC can support from the 4 GB minimum memory up to 248 GB of available real memory per server for grow ing application needs A new 8 GB fixed HSA which is managed separately from customer memory This fixed HSA is designed to improve availability by avoiding out ages that were necessary on prior models to increase its size There are up to 12 I O interconnects per system at 6 GBps each The z10 BC supports a combination of Memory Bus Adapter MBA and Host Channel Adapter HCA fanout cards New MBA fanout cards are used exclusively for ICB 4 New ICB 4 cables are needed for 210 BC The InfiniBand Multiplexer IFB MP card replaces the Self Timed
84. in QDIO mode (8992 byte frame size) when operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber)
• 640 TCP/IP stacks per CHPID, for hosting more images
• Large send for IPv4 packets, for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack to the OSA-Express feature
• Concurrent LIC update, to help minimize the disruption of network traffic during an update; when properly configured, designed to avoid a configuration off or on (applies to CHPID types OSD and OSN)
• Multiple Image Facility (MIF) and spanned channels, for sharing OSA among logical channel subsystems

The OSA-Express3 and OSA-Express2 Ethernet features support the following CHPID types:
CHPID type OSC (OSA-Express3/OSA-Express2 feature: 1000BASE-T): OSA-Integrated Console Controller (OSA-ICC); TN3270E, non-SNA DFT, IPL to CPC and LPARs; operating system console operations.
CHPID type OSD (features: 1000BASE-T, GbE, 10 GbE): Queued Direct Input/Output (QDIO); TCP/IP traffic when Layer 3; protocol-independent when Layer 2.
CHPID type OSE (feature: 1000BASE-T): non-QDIO; SNA/APPN/HPR and/or TCP/IP passthru (LCS).
CHPID type OSN (features: 1000BASE-T, GbE): OSA for NCP; supports channel data link control (CDLC).

OSA-Express3 10 GbE
OSA-Express3 10 Gigabit Ethernet LR
The OSA-Express3 10 Gigabit Ethernet (GbE) long reach (LR) feature has two ports. Each port resides on a PCIe adapter and has its own channel path identifier
85. ing the need for an external Coupling Facility. IBM System z Parallel Sysplex technology allows for greater scalability and availability by coupling mainframes together. Using Parallel Sysplex clustering, System z servers are designed for up to 99.999% availability. The Integrated Facility for Linux (IFL) processor offers support for Linux and brings a wealth of available applications that can be run in a real or virtual environment on the z10 BC. An example is the z/VSE strategy, which supports integration between the IFL, z/VSE, and Linux on System z to help customers integrate timely production of z/VSE data into new Linux applications, such as data warehouse environments built upon a DB2 data server. To consolidate distributed servers onto System z, the IFL with Linux and the System z virtualization technologies fulfill the qualifications for business-critical workloads as well as for infrastructure workloads. For customers interested in using a z10 BC only for Linux workloads, the z10 BC can be configured as a server with IFLs only. The System z10 Application Assist Processor (zAAP) is designed to help enable strategic integration of new application technologies, such as Java technology-based Web applications and XML-based data interchange services, with core business database environments. This helps provide a more cost-effective, specialized z/OS application Java execution environment. Workloads eligible for the z
86. ink The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. Specifically, with 12x InfiniBand coupling links, while the link data rate can be higher than that of ICB, the service times of coupling operations are greater and the actual throughput is less.
Refer to the Coupling Facility Configuration Options white paper for a more specific explanation of when to continue using the current ICB or ISC-3 technology versus migrating to InfiniBand coupling links. The white paper is available at http://www.ibm.com/systems/z/advantages/pso/whitepaper.html

z10 Coupling Link Options (type; description; use; link data rate; distance; z10 BC max; z10 EC max):
PSIFB: 1x IB-DDR LR; z10 to z10; 5 Gbps; 10 km unrepeated (6.2 miles), 100 km repeated; 12; 32.
PSIFB: 12x IB-DDR; z10 to z10 at 6 GBps, z10 to z9 at 3 GBps; 150 meters (492 ft); 12; 32.
IC: Internal Coupling Channel; internal communication; internal speeds; N/A; 32; 32.
ICB-4: copper connection between OS and CF; z10, z9, z990, z890; 2 GBps; 10 meters (33 ft); 12; 16.
ISC-3: fiber connection between OS and CF; z10, z9, z990, z890; 2 Gbps; 10 km unrepeated (6.2 miles), 100 km repeated; 48; 48.
The maximum number of Coupling Links combined (PSIFB, ICB-4, ISC-3) cannot exceed 64 per server. There is a maximum of 64 Coupling CHPIDs (CIB, ICP, C
87. ith fewer I O interrupts is designed to reduce CPU utilization of the sending and receiving LPAR The HiperSockets Multiple Write solution moves multiple output data buffers in one write operation If the function is disabled then one output data buffer is moved in one write operation This is also how HiperSockets functioned in the past If the function is enabled then multiple output data buf fers are moved in one write operation This reduces CPU utilization related to large outbound messages When enabled HiperSockets Multiple Write will be used anytime a message spans an IQD frame requiring multiple output data buffers SBALs to transfer the message Spanning multiple output data buffers can be affected by a number of factors including e IQD frame size e Application socket send size e TCP send size e MTU size The HiperSockets Multiple Write Facility is supported in the z OS environment For a complete description of the System z10 connectivity capabilities refer to IBM System z Connectivity Handbook SG24 5444 HiperSockets Enhancement for zlIP Exploitation In z OS V1 10 specifically the z OS Communications Server allows the HiperSockets Multiple Write Facility processing for outbound large messages originating from z OS to be performed on a ZIIP The combination of 35 HiperSockets Multiple Write Facility and zlIP enablement is described as zlIP Assisted HiperSockets for large mes sages zlIP Assisted HiperS
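As a rough illustration of the buffer-spanning behavior described above, the following Python sketch estimates when a single outbound message would span multiple output data buffers and therefore be a candidate for the Multiple Write Facility. The frame size used is only an assumption for the example; actual IQD frame sizes, SBAL sizes, and eligibility are governed by the factors listed above (IQD frame size, application and TCP send sizes, and MTU size).

    import math

    def spans_multiple_buffers(message_bytes, iqd_frame_bytes):
        # Return (buffers_needed, eligible_for_multiple_write) for one message.
        buffers_needed = math.ceil(message_bytes / iqd_frame_bytes)
        # Multiple Write is used when a message needs more than one output buffer.
        return buffers_needed, buffers_needed > 1

    # Example: a 256 KB outbound message with an assumed 64 KB IQD frame size
    print(spans_multiple_buffers(256 * 1024, 64 * 1024))   # (4, True)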
88. l switches or directors in z VM z VSE and Linux on System z10 environments Native FICON Channels Native FICON channels and devices can help to reduce bandwidth constraints and channel contention to enable easier server consolidation new application growth large business intelligence queries and exploitation of On Demand Business The FICON Express4 FICON Express2 and FICON Express channels support native FICON and FICON Channel to Channel CTC traffic for attachment to servers disks tapes and printers that comply with the FICON architecture Native FICON is supported by all of the z10 BC operating systems Native FICON and FICON CTC are defined as CHPID type FC Because the FICON CTC function is included as part of the native FICON FC mode of operation FICON CTC is not limited to intersystem connectivity as is the case with ESCON but will support multiple device definitions FICON Support for Cascaded Directors Native FICON FC channels support cascaded directors This support is for a single hop configuration only Two director cascading requires a single vendor high integrity fabric Directors must be from the same vendor since cascaded architecture implementations can be unique This type of cascaded support is important for disaster recovery and business continuity solutions because it can help provide high availability extended distance connec tivity and particularly with the implementation of 2 Gb sec
89. ling Links I O Interface Physical Layer SA23 0395 site www ibm com servers resourcelink ESCON and FICON CTC Reference SB10 7034 ESCON O Interface Physical Layer SA23 0394 FICON I O Interface Physical Layer SA24 7172 66 67 e Copyright IBM Corporation 2009 IBM Systems and Technology Group Route 100 Somers NY 10589 U S A Produced in the United States of America 04 09 All Rights Reserved References in this publication to IBM products or services do not imply that IBM intends to make them available in every country in which IBM operates Consult your local IBM business contact for information on the products features and services available in your area IBM IBM eServer the IBM logo the e business logo AIX APPN CICS Cognos Cool Blue DB2 DRDA DS8000 Dynamic Infrastructure ECKD ESCON FICON Geographically Dispersed Parallel Sysplex GDPS HiperSockets HyperSwap IMS Lotus MQSeries MVS OS 390 Parallel Sysplex PR SM Processor Resource Systems Manager RACF Rational Redbooks Resource Link RETAIN REXX RMF Scalable Architecture for Financial Reporting Sysplex Timer Systems Director Active Energy Manager System Storage System z System z9 System z10 Tivoli TotalStorage VSE ESA VTAM WebSphere z9 210 z10 BC z10 EC z Architecture z OS z VM z VSE and zSeries are trademarks or registered trademarks of the International Business Machines Corporation in the Unites States and other countries I
90. liverable required for z10 to enable HiperDispatch on z10 does not require a zIIP. z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2009. With this Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC server require later releases of z/OS. For a complete list of software support, see the PSP buckets and the Software Requirements section of the z10 BC announcement letter.
Supported Linux on System z distributions: Novell SUSE SLES 9, Novell SUSE SLES 10, Red Hat RHEL 4, and Red Hat RHEL 5.
Compatibility support allows z/VM to IPL and operate on the System z10, providing IBM System z9 functionality for the base OS and guests; z/VM supports 31-bit and 64-bit guests. z/VSE V3 operates in 31-bit mode only; it does not implement z/Architecture and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of System z10, System z9, and IBM eServer zSeries hardware. z/VSE V4 is designed to exploit 64-bit real memory addressing but will not support 64-bit virtual memory addressing.
Note: Refer to the z/OS, z/VM, and z/VSE subsets of the 2098DEVICE Preventive Planning (PSP) bucket prior to installing a z10 BC.

z10 BC
The IBM System z10 Business Class (z10 BC) delivers innovative technologies for small and me
91. magni tude or more longer than the elapsed time for LAN based alternatives Using the legacy support and the z VM 5 4 support z VM can be installed in an LPAR and both z VM and Linux on System z can be installed in a virtual machine from the HMC DVD drive without requiring any external network setup or a connection between an LPAR and the HMC This addresses security concerns and additional configura tion efforts using the only other previous solution of the exter nal network connection from the HMC to the z VM image Support for the enhanced installation support for z VM using the HMC is exclusive to z VM 5 4 and the System z10 Implementation Services for Parallel Sysplex IBM Implementation Services for Parallel Sysplex CICS and WAS Enablement IBM Implementation Services for Parallel Sysplex Middle ware CICS enablement consists of five fixed price and fixed scope selectable modules 1 CICS application review 2 z OS CICS infrastructure review module 1 is a prerequi site for this module 3 CICS implementation module 2 is a prerequisite for this module 4 CICS application migration 5 CICS health check IBM Implementation Services for Parallel Sysplex Mid dleware WebSphere Application Server enablement consists of three fixed price and fixed scope selectable modules 1 WebSphere Application Server network deployment planning and design 2 WebSphere Application Server network deployment implementation
92. mentation 3 Adding additional data sharing members 4 DB2 data sharing testing 5 DB2 data sharing backup and recovery For more information on these services contact your IBM representative or refer to www ibm com services server GDPS Geographically Dispersed Parallel Sysplex GDPS is designed to provide a comprehensive end to end con tinuous availability and or disaster recovery solution for System z servers Geographically Dispersed Open Clusters GDOC is designed to address this need for open systems When available GDPS 3 5 will support GDOC for coordinated disaster recovery across System z and non System z servers if Veritas Cluster Server is already installed GDPS and the new Basic HyperSwap available with z OS V1 9 solutions help to ensure system failures are invisible to employees partners and customers with dynamic disk swapping capabilities that ensure appli cations and data are available 210 BC big on service low on cost GDPS is a multi site or single site end to end application availability solution that provides the capability to manage remote copy configuration and storage subsystems including IBM TotalStorage to automate Parallel Sysplex operation tasks and perform failure recovery from a single point of control GDPS helps automate recovery procedures for planned and unplanned outages to provide near continuous avail ability and disaster recovery capability For additional information on GDPS visit
93. munication Controller for Linux CCL CCL is designed to help eliminate hardware dependen cies such as 3745 3746 Communication Controllers ESCON channels and Token Ring LANs by providing a software solution that allows the Network Control Program NCP to be run in Linux on System z freeing up valuable data center floor space 32 CCL helps preserve mission critical SNA functions such as SNI and z OS applications workloads which depend upon these functions allowing you to collapse SNA inside a z10 BC while exploiting and leveraging IP The OSA Express3 and OSA Express2 GbE and 1000BASE T Ethernet features provide support for CCL This support is designed to require no changes to operat ing systems does require a PTF to support CHPID type OSN and also allows TPF to exploit CCL Supported by Z VM for Linux and z TPF guest environments OSA Express3 and OSA Express2 OSN OSA for NCP OSA Express for Network Control Program NCP Channel path identifier CHPID type OSN is now available for use with the OSA Express3 GbE features as well as the OSA Express3 1000BASE T Ethernet features OSA Express for NCP supporting the channel data link control CDLC protocol provides connectivity between System z operating systems and IBM Communication Con troller for Linux CCL CCL allows you to keep your busi ness data and applications on the mainframe operating systems while moving NCP functions to Linux on System z CCL provides
94. ncept of n 2 on the hardware as well as the software The z10 BC participates in a Sysplex with System z10 EC System z9 z990 and z890 only and currently supports z OS 1 8 and higher and z VM 5 2 for a guest virtualization coupling facility test environment For detailed information on IBM s Parallel Sysplex technol ogy visit our Parallel Sysplex home page at http www 03 ibm com systems z pso Coupling Facility Control Code CFCC Level 16 CFCC Level 16 is being made available on the IBM System z10 BC Improved service time with Coupling Facility Duplex ing enhancements Prior to Coupling Facility Control Code CFCC Level 16 System Managed Coupling Facility CF Structure Duplexing required two duplexing protocol exchanges to occur synchronously during pro cessing of each duplexed structure request CFCC Level 16 allows one of these protocol exchanges to complete asynchronously This allows faster duplexed request ser vice time with more benefits when the Coupling Facilities are further apart such as in a multi site Parallel Sysplex environment List notification improvements Prior to CFCC Level 16 when a shared queue subsidiary list changed state from empty to non empty the CF would notify ALL active con nectors The first one to respond would process the new message but when the others tried to do the same they would find nothing incurring additional overhead CFCC Level 16 can help improve the efficiency of
95. nd performance for your critical workloads e Intelligent and optimized dispatching of workloads HiperDispatch can help provide increased scalability and performance of higher n way System z10 systems by improving the way workload is dispatched within the server e Low cost high availability disk solution The Basic HyperSwap capability enabled by TotalStorage Productivity Center for Replication Basic Edition for System z provides a low cost single site high availability disk solution which allows the configuration of disk replication services using an intuitive browser based graphical user interface GUI served from z OS e Improved total cost of ownership zIiP Assisted HiperSockets for Large Messages IBM Scalable Architecture for Financial Reporting enabled for zIIP a service offering of IBM Global Business Services zlIP Assisted z OS Global Mirror XRC and additional z OS XML System Services exploitation of zlIP and ZAAP help make these workloads more attractive on System z e Improved management of temporary processor capac ity Capacity Provisioning Manager which is available on z OS V1 10 and on z OS V1 9 with PTFs can monitor 2 OS systems on System 210 servers Activation and deactivation of temporary capacity can be suggested or performed automatically based on user defined sched ules and workload criteria RMF or equivalent function is required to use the Capacity Provisioning Manager e Improved net
96. nector to best meet your data center requirements Small form factor connec tors are available to help reduce the floor space required for patch panels CPL planning and layout is done prior to arrival of the server on site using the default CHannel Path IDdentifier CHPID placement report and documentation is provided showing the CHPID layout and how the direct attach har nesses are plugged FQC supports all of the ESCON channels and all of the FICON LX channels in the I O drawer of the server On an upgrade from a z890 or z9 BC ESCON channels that are NOT using FQC cannot be used on the z10 BC FQC feature 210 BC Physical Characteristics Physical Planning A System z10 BC feature may be ordered to allow use of the z10 BC in a non raised floor environment This capabil ity may help ease the cost of entry into the z10 BC a raised floor may not be necessary for some infrastructures The non raised floor z10 BC implementation is designed to meet all electromagnetic compatibility standards Feature 7998 must be ordered if the z10 BC is to be used in a non raised floor environment A Bolt down kit 7992 is also available for use with a non raised floor 210 BC providing frame stabilization and bolt down hardware to help secure a frame to a non raised floor Bolt down kit 7992 may be ordered for initial box or MES starting January 28 2009 The Installation Manual for Physical Planning GC28 6875 is available on Resource Link
97. nfiniBand is a trademark and service mark of the InfiniBand Trade Asso ciation Java and all Java based trademarks and logos are trademarks or regis tered trademarks of Sun Microsystems Inc in the United States or other countries Linux is a registered trademark of Linus Torvalds in the United States other countries or both UNIX is a registered trademark of The Open Group in the Unites States and other countries Microsoft Windows and Windows NT are registered trademarks of Micro soft Corporation In the United States other countries or both Intel is a trademark of the Intel Corporation in the United States and other countries Other trademarks and registered trademarks are the properties of their respective companies IBM hardware products are manufactured from new parts or new and used parts Regardless our warranty terms apply Performance is in Internal Throughput Rate ITR ratio based on measure ments and projections using standard IBM benchmarks in a controlled environment The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user s job stream the I O configuration the storage configuration and the workload processed Therefore no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here All performance information was determined in a controlled environment
98. ng applications running in one partition to communicate with applications running in another with out dependency on an external network Industry standard and openness are design objectives for I O in z9 BC 210 BC Model The z10 BC has one model the E10 Machine Type 2098 offering between 1 to 10 processor units PUs which can be configured to provide a highly scalable solution designed to meet the needs of both high transaction pro cessing applications and On Demand business The PUs can be characterized as either CPs IFLs ICFs zAAPs zlIPs or option SAPs An easy to enable ability to turn off CPs or IFLs is available on z10 BC allowing you to purchase capacity for future use with minimal or no impact on software billing An MES feature will enable the turned off CPs or IFLs for use where you require the increased capacity There are a wide range of upgrade options avail able in getting to and within the z10 BC The z10 BC hardware model number E10 on its own does not indicate the number of PUs which are being used as CPs For software billing purposes only there will be a Capacity Indicator associated with the number PUs that are characterized as CPs This number will be reported by the Store System Information STSI instruction for soft ware billing purposes only There is no affinity between the hardware model and the number of CPs 210 BC capacity identifiers nxx where n subcapacity engine size and xx
99. ntended to prevent unauthorized application programs subsystems and users from bypassing z OS security that is to prevent them from gaining access circumventing disabling altering or obtaining control of key z OS system processes and resources unless allowed by the installation Specifically z OS System Integrity is defined as the inability of any program not authorized by a mechanism under the installation s control to circumvent or disable store or fetch protection access a resource protected by the z OS Security Server RACF or obtain control in an authorized state that is in supervisor state with a protection key less than eight 8 or Authorized Program Facility APF authorized In the event that an IBM System Integrity problem is reported IBM will always take action to resolve it IBM s long term commitment to System Integrity is unique in the industry and forms the basis of the z OS industry leadership in system security z OS is designed to help you protect your system data transactions and applications from accidental or malicious modification This is one of the many reasons System z remains the industry s premier data server for mission critical workloads zNM z VM V5 4 is designed to extend its System z virtualization technology leadership by exploiting more capabilities of System z servers including e Greater flexibility with support for the new z VM mode logical partitions allowing all Sys
100. ntralized policy based networking e Expanded IBM Health Checker e Simplified RACF Administration e Hardware Decimal Floating Point e Parallel Sysplex support for InfiniBand Coupling Links e NTP Support for STP e HiperSockets Multiple Write Facility e OSA Express3 support e Advancements in ease of use for both new and existing IT professionals coming to z OS e Support for zllP assisted IPSec System Data Mover SDM offload to zllP_ and support for eligible portions of DB2 9 XML parsing workloads to be offloaded to zZAAP processors e Expanded options for AT TLS and System SSL network security e Improved creation and management of digital certifi cates with RACF SAF and z OS PKI Services e Additional centralized ICSF encryption key management functions for applications e Improved availability with Parallel Sysplex and Coupling Facility improvement e Enhanced application development and integration with new System REXX facility Metal C facility and z OS UNIX System Services commands e Enhanced Workload Manager in managing discretionary work and zlIP and ZAAP workloads Commitment to system integrity First issued in 1973 IBM s MVS System Integrity State ment and subsequent statements for OS 390 and z OS stand as a symbol of IBM s confidence and commitment to the z OS operating system Today IBM reaffirms its com mitment to z OS system integrity IBM s commitment includes designs and development practices i
101. o operate Linux on System z on IFLs to operate z VSE and z OS on CPs to offload z OS system software overhead such as DB2 workloads on zIIPs and to offer an economical Java exe cution environment under z OS on ZAAPs all in the same Z VM LPAR The New Face Of System z IBM s mainframe capabilities are legendary Customers deploy systems that remain available for years because they are expected to and continue to work above expec tations However these systems have seen significant innovative improvements for running new applications and consolidating workloads in the last few years and custom ers can see real gains in price performance by taking advantage of this new technology IBM provides affordable world class technology to help today s enterprises respond to business conditions quickly and with flexibility From automation to advanced virtualiza tion technologies to new applications supported by open industry standards such as SOA IBM servers teamed with IBM s Storage Systems Global Technology Services and IBM Global Financing help deliver competitive advantages for a Dynamic Infrastructure z Can Do IT The future runs on IBM System z and the future begins today 2 Architecture The 210 BC continues the line of upward compatible main frame processors and retains application compatibility since 1964 The z10 BC supports all z Architecture com pliant Operating Systems The heart of the processor unit is the I
102. ockets can help make highly secure, available, virtual HiperSockets networking a more attractive option. z/OS application workloads based on XML, HTTP, SOAP, Java, etc., as well as traditional file transfer, can benefit from zIIP enablement by helping to lower general purpose processor utilization for such TCP/IP traffic. Only outbound z/OS TCP/IP large messages which originate within a z/OS host are eligible for HiperSockets zIIP-Assisted processing. Other types of network traffic, such as IP forwarding, Sysplex Distributor, inbound processing, small messages, or other non-TCP/IP network protocols, are not eligible for zIIP-Assisted HiperSockets. When the workload is eligible, the TCP/IP HiperSockets device driver layer (write processing) is redirected to a zIIP, which will unblock the sending application. zIIP-Assisted HiperSockets for large messages is available with z/OS V1.10 with PTF and System z10 only. This feature is unsupported if z/OS is running as a guest in a z/VM environment and is supported for large outbound messages only. To estimate potential offload, use PROJECTCPU for current and existing workloads. This is accurate and very simple, but you have to be on z/OS 1.10 with the enabling PTFs, on a System z10 server, and you need to be performing HiperSockets Multiple Write workload already on z/OS.

Security
Today's world mandates that your systems are secure and available 24/7. The z10 BC employs some of the most adv
103. omains per drawer and four I O cards per domain I O cards are horizontal and may be added concurrently Concurrent replacement and or repair is available with systems containing more than one O drawer Drawers may be added concurrently should the need for more con nectivity arise ESCON FICON Express4 FICON Express2 FICON Express OSA Express3 OSA Express2 and Crypto Express2 features plug into the z10 BC I O drawer along with any ISC 3s and InfiniBand Multiplexer IFB MP cards All I O features and their support cards can be hot plugged in the I O drawer Each model ships with one I O drawer as standard in the A Frame the A Frame also contains the Central Processor Complex CPC where the I O drawers are installed Each IFB MP has a bandwidth up to 6 GigaBytes per second GB sec for I O domains and MBA fanout cards provide 2 0 GB sec for ICB 4s The z10 BC continues to support all of the features announced with the System z9 BC such as e Logical Channel Subsystems LCSSs and support for up to 30 logical partitions Increased number of Subchannels 63 75k Multiple Subchannel Sets MSS Redundant I O Interconnect Physical Channel IDs PCHIDs System Initiated CHPID Reconfiguration Logical Channel SubSystem LCSS Spanning 18 System 1 0 Configuration Analyzer Today the information needed to manage a system s I O configuration has to be obtained from many separate applications The System s I O Configu
104. on System z for a single view of actual energy usage across multiple heterogeneous IBM platforms within the infrastructure AEM for Linux on System z will allow tracking of trends for both the z10 BC as well as multiple server platforms With this trend analysis a data center administrator will have the data to help properly estimate power inputs and more accurately plan data center con solidation or modification projects On System z10 the HMC will now provide support for the Active Energy Manager AEM which will display power consumption air input temperature as well as exhaust temperature AEM will also provide some limited status configuration information which might assist in explaining changes to the power consumption AEM is exclusive to System z10 Parallel Sysplex Cluster Technology IBM System z servers stand alone against competition and have stood the test of time with our business resiliency solutions Our coupling solutions with Parallel Sysplex technology allow for greater scalability and availability Parallel Sysplex clustering is designed to bring the power of parallel processing to business critical System 210 System z9 z990 or z890 applications A Parallel Sysplex cluster consists of up to 32 z OS images coupled to one or more Coupling Facilities CFs or ICFs using high speed specialized links for communication The Coupling Facili ties at the heart of the Parallel Sysplex cluster enable high speed read write d
105. or your business. Whether you want to deploy new applications quickly, grow your business without growing IT costs, or consolidate your infrastructure for reduced complexity, look no further: z Can Do IT.

Think Big, Virtually Limitless
The Information Technology industry has recognized the business value of exploiting virtualization technologies on any and all server platforms. The leading edge virtualization capabilities of System z, backed by over 40 years of technology innovation, are the most advanced in the industry. With utilization rates of up to 100%, it's the perfect platform for workload consolidation, both traditional and new.
• Want to deploy dozens or hundreds of applications on a single server for lower total cost of ownership?
• Want a more simplified, responsive infrastructure?
• Want investment protection, where new generation technology typically allows application growth at no extra cost?
The virtualization technology found in z/VM with the System z platform may help clients achieve all of these operational goals while also helping to maximize the financial return on their System z investments. The z10 BC can have big advantages over traditional server farms. The z10 BC is designed to reduce energy usage and save floor space when used to consolidate x86 servers. With increased capacity, the z10 BC virtualization capabilities can help to support hundreds of virtual servers in a single 1.42 square meter footprint. When consolidating on System z you can create virtual servers on demand, achieve network savings through the HiperSockets internal LAN, improve systems management of virtual servers, and, most importantly, consolidate software from many distributed servers to a single consolidated server. So why run hundreds of standalone servers when the z10 BC could do the work more efficiently, in a smaller space, and at a lower cost, virtually? Less power. Less space. Less impact on the environment.

More Solutions, More Affordable
Today's businesses, with extensive investments in hardware assets and core applications, are demanding more from IT: more value, more transactions, more for the money. Above all, they are looking for business solutions that can help enable business growth while driving costs out of the business. System z has an ever growing set of solutions that are being enhanced to help you lower IT costs. From enterprise-wide applications such as SAP or Cognos BI to the consolidation of infrastructure workloads, the z10 BC has low cost solutions that also help you save more as your demand grows. So consider consolidating your IT workloads on the z10 BC server if you want the right solutions on a premier platform at a price you can afford. The convergence of Service Oriented Architecture (SOA) and mainframe technologies can also help liberate these core business assets by making it easier to enrich, modernize, ext
106. orary capacity through z OS Capacity Provisioning See z OS MVS Capacity Provisioning User s Guide SA33 8299 for more information On Off CoD Test On Off CoD allows for a no charge test No IBM charges are assessed for the test including IBM charges associated with temporary hardware capacity IBM software or IBM maintenance This test can be used to validate the processes to download stage install acti vate and deactivate On Off CoD capacity non disruptively Each On Off CoD enabled server is entitled to only one no charge test This test may last up to a maximum duration of 24 hours commencing upon the activation of any capac ity resources contained in the On Off CoD record Activa tion levels of capacity may change during the 24 hour test period The On Off CoD test automatically terminates at the end of the 24 hours period In addition to validating the On Off CoD function within your environment you may choose to use this test as a training session for your per sonnel who are authorized to activate On Off CoD SNMP API Simple Network Management Protocol Appli cation Programming Interface enhancements have also been made for the new Capacity On Demand features More information can be found in the System z10 Capacity On Demand User s Guide SC28 6871 44 Capacity on Demand Permanent Capacity Customer Initiated Upgrade CIU facility When your business needs additional capacity quickly Customer Initiated U
107. ormance for small block sizes. The Fibre Channel Protocol (FCP) Licensed Internal Code has been modified to help provide increased I/O operations per second for small block sizes. With FICON Express4 there may be up to 57,000 I/O operations per second (all reads, all writes, or a mix of reads and writes), an 80% increase compared to System z9. These results are achieved in a laboratory environment using one channel configured as CHPID type FCP, with no other processing occurring, and do not represent actual field measurements. A significant increase in I/O operations per second for small block sizes can also be expected with FICON Express2. This FCP performance improvement is transparent to operating systems that support FCP and applies to all the FICON Express4 and FICON Express2 features when configured as CHPID type FCP, communicating with SCSI devices.

SCSI IPL now a base function
The SCSI Initial Program Load (IPL) enablement feature, first introduced on z990 in October of 2003, is no longer required. The function is now delivered as a part of the server Licensed Internal Code. SCSI IPL allows an IPL of an operating system from an FCP-attached SCSI disk.

FCP Full fabric connectivity
FCP full fabric support means that any number of (single vendor) FCP directors/switches can be placed between the server and an FCP/SCSI device, thereby allowing many hops through a Storage Area Network (SAN) for I/O connectivity. FCP full fabr
108. ort options and Crypto Express2-1P has 1 coprocessor. Available only when carried forward on an upgrade from z890 or z9 BC. Limited availability for OSA-Express2 GbE features.

z10 BC Concurrent PU Conversions
• Must order (characterize) one PU as a CP, an ICF, or an IFL
• Concurrent model upgrade is supported
• Concurrent processor upgrade is supported if PUs are available: add CP, IFL, unassigned IFL, ICF, zAAP, zIIP, or optional SAP
• PU conversions: a standard SAP cannot be converted to other PU types. Any of the other PU types (CP, IFL, unassigned IFL, ICF, zAAP, zIIP, and optional SAP) can be converted to any other type in that list.
• Exceptions: disruptive if ALL current PUs are converted to different types; may require individual LPAR disruption if dedicated PUs are converted.

z10 BC Model Structure
The z10 BC is offered as model E10; the model structure table lists, per model, the PUs, the PUs available for customer use, the maximum available subcapacity CPs, the standard SAPs, and the standard spares. z10 BC system weight and IBF hold-up times: model E10 (single frame) weighs 1890 lbs without the IBF, up to 2100 lbs. Max is for ESCON channels. For each zAAP and/or zIIP installed there must be a corresponding CP; the CP may satisfy the requirement for both the zAAP and/or the zIIP. z10 BC IBF hold-up time: The
109. ow to adopt a multitude of innovations to keep the company competitive IBM has a vision that can help the Dynamic Infrastructure an evolutionary model that helps reset the economics of IT and can dramatically improve operational efficiency It also can help reduce and control rising costs and improve pro visioning speed and data center security and resiliency at any scale It will allow you to be highly responsive to any user need And it aligns technology and business giving you the freedom and the tools you need to innovate and be competitive IBM System z is an excellent choice as the foundation for a highly responsive infrastructure New world New business A whole new mainframe Meet the IBM System z10 Business Class z10 BC the tech nology that could change the way you think about Enter prise solutions The technology that delivers the scalability flexibility virtualization and breakthrough performance you need at the lower capacity entry point you want This is the technology that fights old myths and percep tions that s not just for banks and insurance companies This is the technology for any business that wants to ramp up innovation boost efficiencies and lower costs pretty much any enterprise any size any location This is a mainframe technology for a new kind of data center resil ient responsive energy efficient this is 210 BC And its about to rewrite the rules and deliver new freedoms f
110. p For most cases should a failure occur on the pri mary oscillator card the backup can detect it switch over and provide the clock signal to the system transparently with no system outage Previously in the event of a failure of the active oscillator a system outage would occur the subsequent system Power On Reset POR would select the backup and the system would resume operation Dynamic Oscillator Switchover is exclusive to System 210 and System z9 Transparent Sparing The z10 BC offers 12 PUs two are designated as System Assist Processors SAPs In the event of processor failure if there are spare processor units available undefined these PUs are used for transparent sparing Concurrent Memory Upgrade Memory can be upgraded concurrently using LIC CC if physical memory is available on the machine either through the Plan Ahead Memory feature or by having more physical memory installed in the machine that has not been activated Plan Ahead Memory Future memory upgrades can now be preplanned to be nondisruptive The preplanned memory feature will add the necessary physical memory required to support target memory sizes The granularity of physical memory in the System z10 design is more closely associated with the granularity of logical entitled memory leaving little room for growth If you anticipate an increase in memory require ments a target logical memory size can now be speci 47 fied in the config
111. pgrade (CIU) is designed to deliver it. CIU is designed to allow you to respond to sudden increased capacity requirements by requesting a System z10 BC PU and/or memory upgrade via the Web, using IBM Resource Link, and downloading and applying it to your System z10 BC server using your system's Remote Support connection. Further, with the Express option on CIU, an upgrade may be made available for installation as fast as within a few hours after order submission.

Permanent upgrades
Orders (MESs) of all PU types and memory for System z10 BC servers that can be delivered by Licensed Internal Code Control Code (LIC CC) are eligible for CIU delivery. CIU upgrades may be performed up to the maximum available processor and memory resources on the installed server, as configured. While capacity upgrades to the server itself are concurrent, your software may not be able to take advantage of the increased capacity without performing an Initial Programming Load (IPL).

System z9 versus System z10:
Resources: System z9 CP, zIIP, zAAP, IFL, ICF; System z10 CP, zIIP, zAAP, IFL, ICF, SAP.
System z9 requires access to IBM/RETAIN to activate; System z10 requires no password and no access to IBM/RETAIN to activate.
System z9 offers CBU and On/Off CoD, one offering at a time; System z10 offers CBU, On/Off CoD, and CPE, with multiple offerings active.
Permanent upgrades: System z9 requires de-provisioning of temporary capacity first; System z10 allows them concurrent with temporary offerings.
Replenishment: System z9, no; System z10, yes, with CBU and On/Off CoD.
CBU Tests CBU Expiration
112. processor migrations. The LSPR contains the Internal Throughput Rate Ratios (ITRRs) for the z10 BC and the previous-generation zSeries processor families, based upon measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated. For more detailed performance information, consult the Large Systems Performance Reference (LSPR), available at http://www.ibm.com/servers/eserver/zseries/lspr

CPU Measurement Facility
The CPU Measurement Facility is a hardware facility which consists of counters and samples. The facility provides a means to collect run-time data for software performance tuning. The detailed architecture information for this facility can be found in the System z10 Library in Resource Link.

z10 BC I/O Subsystem
A new host bus interface using InfiniBand, with a link data rate of 6 GBps, was introduced on the z10 BC. It provides enough throughput to support the full capacity and processing power of the CPC. The z10 BC contains an I/O subsystem infrastructure which uses up to four I/O drawers that provide eight I/O slots in each drawer. There are two I/O d
113. ration Analyzer SIOA tool is a SE HMC based tool that will allow the system hardware administrator access to the information from these many sources in one place This will make it much easier to manage I O configurations particularly across multiple CPCs The SIOA is a view only tool It does not offer any options other than viewing options First the SIOA tool analyzes the current active IOCDS on the SE It extracts information about the defined channel partitions link addresses and control units Next the SIOA tool asks the channels for their node ID information The FICON channels support remote node ID information so that is also collected from them The data is then formatted and displayed on five screens 1 PCHID Control Unit Screen Shows PCHIDs CSS CHPIDs and their control units 2 PCHID Partition Screen Shows PCHIDS CSS CHP Ds and what partitions they are in 3 Control Unit Screen Shows the control units their PCHIDs and their link addresses in each of the CSS s 4 Link Load Screen Shows the Link address and the PCHIDs that use it 5 Node ID Screen Shows the Node ID data under the PCHIDs The SIOA tool allows the user to sort on various columns and export the data to a USB flash drive for later viewing z10 BC Channels and I O Connectivity ESCON Channels The z10 BC supports up to 480 ESCON channels The high density ESCON feature has 16 ports 15 of which can be activated for cus
114. re Size increment increase from 512 MB to 1 MB
• Increasing the allowable tasks in the CF from 48 to 112
• Level 14: CFCC Dispatcher Enhancements
• Level 13: DB2 Castout Performance
• Level 12: z990 Compatibility; 64-bit CFCC Addressability; Message Time Ordering; DB2 Performance; SM Duplexing Support for zSeries
• Level 11: z990 Compatibility; SM Duplexing Support for 9672 G5/G6/R06
• Level 10: z900 GA2 Level
• Level 9: Intelligent Resource Director; IC3/ICB-3/ISC-3 Peer Mode; MQSeries Shared Queues; WLM Multi-System Enclaves
Note: zSeries 900, 800, and prior generation servers are not supported with System z10 for Coupling Facility or Parallel Sysplex levels.

Statement of Direction
IBM intends to support optional water cooling on future high-end System z servers. This cooling technology will tap into building chilled water that already exists within the datacenter for computer room air conditioning systems. External chillers or special water conditioning will not be required. Water cooling technology for high-end System z servers will be designed to deliver improved energy efficiencies.
IBM intends to support the ability to operate from High Voltage DC power on future System z servers. This will be in addition to the wide range of AC power already supported. A direct HV DC datacenter power design can improve data center energy efficiency by removing the need for an additional DC to AC inversion step. The System z10 will
115. rs footprint When consolidat ing on System z you can create virtual servers on demand achieve network savings through HiperSockets internal LAN improve systems management of virtual servers and most importantly consolidate software from many dis tributed servers to a single consolidated server So why run hundreds of standalone servers when z10 BC could do the work more efficiently in a smaller space at a lower cost virtually Less power Less space Less impact on the environment More Solutions More Affordable Today s businesses with extensive investments in hardware assets and core applications are demanding more from IT more value more transactions more for the money Above all they are looking for business solutions that can help enable business growth while driving costs out of the business System z has an ever growing set of solutions that are being enhanced to help you lower IT costs From enterprise wide applications such as SAP or Cognos BI to the consolidation of infrastructure workloads 210 BC has low cost solutions that also help you save more as your demand grows So consider consolidating your IT 4 workloads on the z10 BC server if you want the right solu tions on a premier platform at a price you can afford The convergence of Service Oriented Architecture SOA and mainframe technologies can also help liberate these core business assets by making it easier to enrich mod ernize ext
116. rt of 64 bit addressing The combined bal anced system design allows for increases in performance across a broad spectrum of work Large System Performance Reference IBM s Large Systems Performance Reference LSPR method is designed to provide comprehensive z Architecture processor capacity ratios for different con figurations of Central Processors CPs across a wide variety of system control programs and workload envi ronments For z10 BC z Architecture processor capacity identifier is defined with a AOx ZOx notation where x is the number of installed CPs from one to five There are a total of 26 subcapacity levels designated by the letters A through Z In addition to the general information provided for z OS V1 9 the LSPR also contains performance relationships for z VM and Linux operating environments Based on using an LSPR mixed workload the perfor mance of the z10 BC 2098 Z01 is expected to be 17 e up to 1 4 times that of the z9 BC 2096 Z01 Moving from a System z9 partition to an equivalently sized System z10 BC partition a z VM workload will experience an ITR ratio that is somewhat related to the workload s instruction mix MP factor and level of storage over com mitment Workloads with higher levels of storage over commitment or higher MP factors are likely to experience lower than average z10 BC to z9 ITR scaling ratios The range of likely ITR ratios is wider than the range has been for previous
117. s external InfiniBand coupling links are also valid to pass time synchronization signals for Server Time Protocol STP Therefore the same coupling links can be used to exchange timekeeping informa tion and Coupling Facility messages in a Parallel Sysplex environment The IBM System z10 BC also takes advantage of InfiniBand as a higher bandwidth replacement for the Self Timed Interconnect STI I O interface features found in prior System z servers InfiniBand coupling links are CHPID type CIB Coupling Connectivity for Parallel Sysplex Five coupling link options The 210 BC supports Internal Coupling channels ICs Integrated Cluster Bus 4 ICB 4 InterSystem Channel 3 ISC 3 peer mode and 12x and 1x InfiniBand IFB links for communication in a Parallel Sysplex environment 1 Internal Coupling Channels ICs can be used for inter nal communication between Coupling Facilities CFs defined in LPARs and z OS images on the same server 2 Integrated Cluster Bus 4 ICB 4 links are for short distances ICB 4 links use 10 meter 33 feet copper cables of which 3 meters 10 feet is used for internal routing and strain relief ICB 4 is used to connect z10 BC to z10 BC z10 EC z9 EC z9 BC z990 and z890 Note If connecting to a z9 BC or a z10 BC with ICB 4 those servers cannot be installed with the non raised floor feature Also if the z10 BC is ordered with the non raised floor feature CB 4 cannot be ordered 3
118. s IO may also benefit The FICON Express4 and FICON Express2 features will support both the existing FICON protocol and the ZHPF protocol concurrently in the server Licensed Internal Code High performance FICON is supported by z OS for DB2 VSAM PDSE and ZFS applications ZHPF applies to all FICON Express4 and FICON Express2 features CHPID type FC and is exclusive to System z10 Exploitation is required by the control unit IBM System Storage DS8000 Release 4 1 delivers new capabilities to support High Performance FICON for System z which can improve FICON I O throughput on a DS8000 port by up to 100 The DS8000 series Licensed Machine Code LMC level 5 4 2xx xx bundle version 64 2 xx xx or later is required Platform and name server registration in FICON channel The FICON channel now provides the same information to the fabric as is commonly provided by open systems registering with the name server in the attached FICON directors With this information your storage area network SAN can be more easily and efficiently managed enhancing your ability to perform problem determination and analysis Registration allows other nodes and or SAN managers to query the name server to determine what is connected to the fabric what protocols are supported FICON FCP and to gain information about the System z10 using the attributes that are registered The FICON channel is now designed to perform registration with the fibre channel
119. s enhancement if the PTS fails and the BTS takes over as CTS an API is now available on the HMC so you can automate the reassignment of the PTS BTS and Arbiter roles This can improve availability by avoiding a single point of failure after the BTS has taken over as the CTS Prior to this enhancement the PTS BTS and Arbiter roles had to be reassigned manually using the System Sysplex Time task on the HMC For additional details on the API please refer to System z Application Programming Interfaces SB10 7030 11 Additional information is available on the STP Web page http www ibm com systems z pso stp html The following Redbooks are available on the Redbooks Web site http Awww redbooks ibm com e Server Time Protocol Planning Guide SG24 7280 e Server Time Protocol Implementation Guide SG24 7281 Internal Battery Feature Recommendation Single data center e CTN with 2 servers install IBF on at least the PTS CTS Also recommend IBF on BTS to provide recovery pro tection when BTS is the CTS CTN with 3 or more servers IBF not required for STP recovery if Arbiter configured 56 Two data centers e CTN with 2 servers one in each data center install IBF on at least the PTS CTS Also recommend IBF on BTS to provide recovery pro tection when BTS is the CTS e CTN with 3 or more servers install IBF on at least the PTS CTS Also recommend IBF on BTS to provide recovery pro tection when BTS is the CTS
120. s in a trunk to prevent a single link from being overrun
• Link aggregation between a VSWITCH and the physical network switch
• Point-to-point connections
• Up to eight OSA-Express3 or OSA-Express2 ports in one aggregated link
• Ability to dynamically add/remove OSA ports for on-demand bandwidth
• Full-duplex mode (send and receive)
• Target links for aggregation must be of the same type (for example, Gigabit Ethernet to Gigabit Ethernet)
The Open Systems Adapter Support Facility (OSA/SF) will provide status information on an OSA port, including its shared or exclusive use state. OSA/SF is an integrated component of z/VM.
Link aggregation is exclusive to System z10 and System z9, is applicable to the OSA-Express3 and OSA-Express2 features in Layer 2 mode when configured as CHPID type OSD (QDIO), and is supported by z/VM 5.3 and later.

Layer 2 transport mode: when would it be used?
If you have an environment with an abundance of Linux images in a guest LAN environment, or you need to define router guests to provide the connection between these guest LANs and the OSA-Express3 features, then using the Layer 2 transport mode may be the solution. If you have Internetwork Packet Exchange (IPX), NetBIOS, and SNA protocols, in addition to Internet Protocol Version 4 (IPv4) and IPv6, use of Layer 2 could provide protocol independence. The OSA-Express3 features act like Layer 2 type devices, providing the capability of being
121. stricted the amount of power usage it is important to review the role of the server in bal ancing IT spending Power Monitoring The mainframe gas gauge feature introduced on the System z9 servers provides power and thermal informa tion via the System Activity Display SAD on the Hardware Management Console and will be available on the 210 BC giving a point in time reference of the information The current total power consumption in watts and BTU hour as well as the air input temperature will be displayed Power Estimation Tool To assist in energy planning Resource Link provides tools to estimate server energy requirements before a new server purchase A user will input the machine model memory and I O configuration and the tool will output an estimate of the system total heat load and utility input power A customized planning aid is also available on Resource Link which provides physical characteristics of the machine along with cooling recommendations environmental specifications system power rating power plugs receptacles line cord wire specifications and the machine configuration 48 IBM Systems Director Active Energy Manager IBM Systems Director Active Energy Manager AEM is a building block which enables customers to manage actual power consumption and resulting thermal loads IBM serv ers place in the data center The z10 BC provides support for IBM Systems Director Active Energy Manager AEM for Linux
122. system to an increased threat of attack These enhancements are exclusive to System z10 and System z9 and are supported by z OS and z VM for z OS guest exploitation On Demand Capabilities It may sound revolutionary but it s really quite simple In the highly unpredictable world of On Demand business you should get what you need when you need it And you should pay for only what you use Radical Not to IBM It s the basic principle underlying IBM capacity on demand for the IBM System 210 The z10 BC also introduces a architectural approach for temporary offerings that can change the thinking about on demand capacity One or more flexible configuration defi nitions can be used to solve multiple temporary situations and multiple capacity configurations can be active at once for example activation of just two CBUs out of a definition that has four CBUs is acceptable This means that On Off CoD can be active and up to seven other offerings can be active simultaneously Tokens can be purchased for On Off CoD so hardware activations can be prepaid All activations can be done without having to interact with IBM when it is determined that capacity is required no passwords or phone connections are necessary As long as the total z10 BC can support the maximums that are defined then they can be made available With the z10 BC it is now possible to add permanent capacity while a temporary capacity is currently activated without ha
123. t months to the expiration date at the time of order the expiration date can be extended by no more than two additional years One test activation is provided for each additional CBU year added to the CBU entitlement record CBU Tests The allocation of the default number of test activations changed Rather than a fixed default number of five test activations for each CBU entitlement record the number of test activations per instance of the CBU entitlement record will coincide with the number of CBU years the number of years assigned to the CBU record This equates to one test activation per year for each CBU entitlement purchased Additional test activations are now available in quantities of one and the number of test acti vations remains limited at 15 per CBU entitlement record These changes apply only to System z10 and to CBU entitlements purchased through the IBM sales channel or directly from Resource Link There are terms governing System z Capacity Back Up CBU now available which allow customers to execute production workload on a CBU Upgrade during a CBU Test 42 While all new CBU contract documents contain the new CBU Test terms existing CBU customers will need to exe cute a contract to expand their authorization for CBU Test upgrades if they want to have the right to execute produc tion workload on the CBU Upgrade during a CBU Test Amendment for CBU Tests The modification of CBU Test terms is available for
124. tem z processor types CPs IFLs zllPs ZAAPs and ICFs to be defined in the same 2 VM LPAR for use by various guest operating systems e Capability to install Linux on System z as well as zVM from the HMC on a System 210 that eliminates the need for any external network setup or a physical connection between an LPAR and the HMC e Enhanced physical connectivity by exploiting all OSA Express3 ports helping service the network and reduc ing the number of required resources e Dynamic memory upgrade support that allows real memory to be added to a running z VM system With z VM V5 4 memory can be added nondisruptively to individual guests that support the dynamic memory reconfiguration architecture Systems can now be configured to reduce the need to re IPL z VM Processors channels OSA adapters and now memory can be dynamically added to both the z VM system itself and to individual guests The operation and management of virtual machines has been enhanced with new systems management APIs improvements to the algorithm for distributing a guest s CPU share among virtual processors and usability enhancements for managing a virtual network Security capabilities of zZ VM V5 4 provide an upgraded LDAP server at the functional level of the z OS V1 10 IBM Tivoli Directory Server for z OS and enhancements to the RACF Security Server to create LDAP change log entries in response to updates to RACF group and user profiles including user
The quantity of temporary CP capacity ordered is limited by the quantity of purchased CP capacity (permanently active plus unassigned). The quantity of temporary IFLs ordered is limited by the quantity of purchased IFLs (permanently active plus unassigned). Temporary use of unassigned CP capacity or unassigned IFLs will not incur a hardware charge. The quantity of permanent zIIPs plus temporary zIIPs cannot exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zIIPs cannot exceed the quantity of permanent zIIPs. The quantity of permanent zAAPs plus temporary zAAPs cannot exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zAAPs cannot exceed the quantity of permanent zAAPs. The quantity of temporary ICFs ordered is limited by the quantity of permanent ICFs, as long as the sum of permanent and temporary ICFs is less than or equal to 16. The quantity of temporary SAPs ordered is limited by the quantity of permanent SAPs, as long as the sum of permanent and temporary SAPs is less than or equal to 32.

Although the System z10 BC will allow up to eight temporary records of any type to be installed, only one temporary On/Off CoD record may be active at any given time. An On/Off CoD record may be active while other temporary records are active. Management of temporary capacity through On/Off CoD is further enhanced through the introduction of resource tokens.
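The ordering limits above can be read as a set of simple inequalities. The sketch below restates them as boolean checks; the data structure and function names are illustrative only, and only the numeric rules come from the text.

```python
# Minimal sketch of the ordering limits described above, expressed as boolean checks.
# Field names are illustrative; only the numeric rules come from the text.

from dataclasses import dataclass

@dataclass
class Capacity:
    permanent: int
    unassigned: int = 0
    temporary: int = 0

def order_is_valid(cp: Capacity, ifl: Capacity, ziip: Capacity,
                   zaap: Capacity, icf: Capacity, sap: Capacity) -> bool:
    purchased_cp = cp.permanent + cp.unassigned
    return all([
        cp.temporary <= purchased_cp,                                    # temporary CPs limited by purchased CP capacity
        ifl.temporary <= ifl.permanent + ifl.unassigned,                 # temporary IFLs limited by purchased IFLs
        ziip.permanent + ziip.temporary <= purchased_cp + cp.temporary,  # zIIP total bounded by CPs
        ziip.temporary <= ziip.permanent,                                # temporary zIIPs <= permanent zIIPs
        zaap.permanent + zaap.temporary <= purchased_cp + cp.temporary,  # zAAP total bounded by CPs
        zaap.temporary <= zaap.permanent,                                # temporary zAAPs <= permanent zAAPs
        icf.temporary <= icf.permanent and icf.permanent + icf.temporary <= 16,
        sap.temporary <= sap.permanent and sap.permanent + sap.temporary <= 32,
    ])

# Example: 4 permanent CPs with 2 temporary CPs, plus 1 permanent and 1 temporary zIIP.
print(order_is_valid(Capacity(4, 0, 2), Capacity(2), Capacity(1, 0, 1),
                     Capacity(0), Capacity(1), Capacity(2)))   # True
```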
• …Generic Routing Encapsulation (GRE) tunnels
• Improve outbound routing
• Simplify configuration setup
• Allow WebSphere Application Server content-based routing to work with z/OS in an IPv6 network
• Allow z/OS to use a standard interface ID for IPv6 addresses
• Remove the need for the PRIROUTER/SECROUTER function in z/OS

OSA Layer 3 VMAC for z/OS is exclusive to System z and is applicable to the OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO).

QDIO Direct Memory Access (DMA)
OSA-Express3 and the operating systems share a common storage area for memory-to-memory communication, reducing system overhead and improving performance. There are no read or write channel programs for data exchange. For write processing, no I/O interrupts have to be handled; for read processing, the number of I/O interrupts is minimized.

Hardware data router
With OSA-Express3, much of what was previously done in firmware (packet construction, inspection, and routing) is now performed in hardware. This allows packets to flow directly from host memory to the LAN without firmware intervention. With the hardware data router, the store-and-forward technique is no longer used, which enables true direct memory access: a direct host-memory-to-LAN flow, returning CPU cycles for application use. This avoids a hop and is designed to reduce latency and increase throughput for standard frames (1492 byte) and jumbo frames (8992 byte). IBM Com…
…to maintain synchronization. The Sysplex Timer Model 2 provides the stepping signal that helps ensure that all TOD clocks in a multi-server environment increment in unison to permit full read or write data sharing with integrity. The Sysplex Timer Model 2 is a key component of an IBM Parallel Sysplex environment and a Geographically Dispersed Parallel Sysplex (GDPS) availability solution for On Demand Business.

The z10 BC server requires the External Time Reference (ETR) feature to attach to a Sysplex Timer. The ETR feature is standard on the z10 BC and supports attachment at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second. The distance from the Sysplex Timer to the server can be extended to 100 km using qualified Dense Wavelength Division Multiplexers (DWDMs); however, the maximum repeated distance between Sysplex Timers is limited to 40 km.

Server Time Protocol (STP)
STP messages
STP is a message-based protocol in which timekeeping information is transmitted between servers over externally defined coupling links. ICB-4, ISC-3, and InfiniBand coupling links can be used to transport STP messages.

Server Time Protocol enhancements
STP configuration and time information restoration after Power-on Resets (PORs) or power outage: this enhancement delivers system management improvements by restoring the STP configuration and time information after Power-on Resets (PORs) or power outages.
…customer use. One port is always reserved as a spare, which is activated in the event of a failure of one of the other ports. For high availability, the initial order of ESCON features will deliver two 16-port ESCON features, and the active ports will be distributed across those features.

Fibre Channel Connectivity
The on demand operating environment requires fast data access, continuous data availability, and improved flexibility, all with a lower cost of ownership. The four-port FICON Express4 and FICON Express2 features available on the z9 BC continue to be supported on the System z10 BC.

Choose the FICON Express4 features that best meet your business requirements. To meet the demands of your Storage Area Network (SAN), provide granularity, facilitate redundant paths, and satisfy your infrastructure requirements, there are five features from which to choose:

Feature                     FC    Infrastructure     Ports per feature
FICON Express4 10KM LX      3321  Single mode fiber  4
FICON Express4 4KM LX       3324  Single mode fiber  4
FICON Express4-2C 4KM LX    3323  Single mode fiber  2
FICON Express4 SX           3322  Multimode fiber    4
FICON Express4-2C SX        3318  Multimode fiber    2

Choose the features that best meet your granularity, fiber optic cabling, and unrepeated distance requirements.

FICON Express4 Channels
The z10 BC supports up to 128 FICON Express4 channels, each one operating at 1, 2, or 4 Gb/sec auto-negotiated.
…output. If STP is configured to use an NTP server without PPS as the External Time Source, it is designed to provide a time accuracy of 100 milliseconds to the ETS device. For this enhancement, the NTP output of the NTP server has to be connected to the Support Element (SE) LAN, and the PPS output of the same NTP server has to be connected to the PPS input provided on the External Time Reference (ETR) card of the System z10 or System z9 server.

Continuous Availability of NTP servers used as External Time Source
Improved External Time Source (ETS) availability can now be provided if you configure different NTP servers for the Preferred Time Server (PTS) and the Backup Time Server (BTS). Only the PTS or the BTS can be the Current Time Server (CTS) in an STP-only CTN. Prior to this enhancement, only the CTS calculated the time adjustments necessary to maintain time accuracy. With this enhancement, if the PTS/CTS cannot access the NTP server or the pulse per second (PPS) signal from the NTP server, the BTS, if configured to a different NTP server, may be able to calculate the adjustment required and propagate it to the PTS/CTS. The PTS/CTS, in turn, will perform the necessary time adjustment steering. This avoids a manual reconfiguration of the BTS to be the CTS if the PTS/CTS is not able to access its ETS. In an ETR network, when the primary Sysplex Timer is not able to access the ETS device, the secondary Sysplex Timer takes over the role of the primary, a recovery action not always accepted…
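A minimal sketch of the fallback decision described above, assuming a simple reachable/unreachable view of each server's ETS. The roles, names, and the behavior of the final branch (no ETS reachable) are illustrative assumptions, not STP internals.

```python
# Minimal sketch of the ETS fallback behavior described above. Server roles and
# function names are illustrative; this is not STP code, only the decision logic.

from dataclasses import dataclass

@dataclass
class TimeServer:
    role: str                  # "PTS/CTS" or "BTS"
    ets_reachable: bool        # can this server access its own NTP server / PPS signal?

def who_computes_adjustment(pts_cts: TimeServer, bts: TimeServer) -> str:
    """Decide which server calculates the time adjustment for the CTN."""
    if pts_cts.ets_reachable:
        return "PTS/CTS calculates the adjustment (normal case)"
    if bts.ets_reachable:
        # BTS is configured to a different NTP server, so it can still reach an ETS;
        # it calculates the adjustment and propagates it to the PTS/CTS for steering.
        return "BTS calculates the adjustment and propagates it to the PTS/CTS"
    return "no ETS reachable (behavior outside the scope of this sketch)"

print(who_computes_adjustment(TimeServer("PTS/CTS", False), TimeServer("BTS", True)))
```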
For CP capacity, a resource token represents an amount of processing capacity that will result in one MSU of software cost for one day (an MSU-day). For specialty engines, a resource token represents activation of one engine of that type for one day (an IFL-day, a zIIP-day, or a zAAP-day). The different resource tokens are contained in separate pools within the On/Off CoD record. The customer, via the Resource Link ordering process, determines how many tokens go into each pool. Once On/Off CoD resources are activated, tokens are decremented from their pools every 24 hours; the amount decremented is based on the highest activation level for that engine type during the previous 24 hours.

Resource tokens are intended to help customers bound the hardware costs associated with using On/Off CoD. The use of resource tokens is optional, and they are available on either a prepaid or post-paid basis. When prepaid, the customer is billed for the total amount of resource tokens contained within the On/Off CoD record. When post-paid, the total billing against the On/Off CoD record is limited by the total amount of resource tokens contained within the record. Resource Link will provide the customer an ordering wizard to help determine how many tokens to purchase for different activation scenarios. Resource tokens within an On/Off CoD record may also be replenished. Resource Link offers an ordering wizard to help…
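The daily decrement can be pictured as follows. Pool names are illustrative, and flooring a pool at zero is an assumption of this sketch rather than a statement about billing.

```python
# Minimal sketch of the daily token decrement described above: every 24 hours each pool is
# reduced by the highest activation level for that engine type during the previous 24 hours.
# Pool names and structure are illustrative only.

token_pools = {"CP_MSU_days": 100, "IFL_days": 30, "zIIP_days": 10}

# Highest activation level observed for each engine type in the last 24 hours.
peak_activation_last_24h = {"CP_MSU_days": 12, "IFL_days": 2, "zIIP_days": 0}

def decrement_daily(pools: dict, peaks: dict) -> None:
    """Apply one 24-hour decrement to each resource-token pool (floored at zero here)."""
    for engine_type, peak in peaks.items():
        pools[engine_type] = max(pools[engine_type] - peak, 0)

decrement_daily(token_pools, peak_activation_last_24h)
print(token_pools)   # {'CP_MSU_days': 88, 'IFL_days': 28, 'zIIP_days': 10}
```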
…configuration capabilities can be exploited. Also introduced is the ability to seamlessly include such events as creation of LPARs, inclusion of logical subsystems, changing logical processor definitions in an LPAR, and the introduction of cryptography into an LPAR. Features that carry forward from previous-generation processors include the ability to dynamically enable I/O and the dynamic swapping of processor types.

Hardware System Area (HSA)
A fixed HSA of 8 GB is provided as standard with the z10 BC. The fixed HSA is designed to eliminate planning for HSA and makes all the memory purchased by customers available for customer use. Preplanning for HSA expansion for configurations is eliminated, as HCD/IOCP will, via the IOCDS process, always reserve:
• 2 Logical Channel Subsystems (LCSSs), pre-defined
• 30 Logical Partitions (LPARs), pre-defined
• Subchannel set 0 with 63.75K devices
• Subchannel set 1 with 64K-1 devices
• Dynamic I/O Reconfiguration, always enabled by default
• Concurrent Patch, always enabled by default
• Add/Change the number of logical CP, IFL, ICF, zAAP, zIIP processors per partition, and add SAPs to the configuration
• Dynamic LPAR PU assignment optimization (CPs, ICFs, IFLs, zAAPs, zIIPs, SAPs)
• Dynamically Add/Remove Crypto (no LPAR deactivation required)
…maximum number of buffer credits supported by the FICON director or control unit, as well as application and workload characteristics. High bandwidth at extended distances is achievable only if enough buffer credits exist to support the link data rate (a rough estimate is sketched below).

FICON Express enhancements for Storage Area Networks

N_Port ID Virtualization
N_Port ID Virtualization is designed to allow the sharing of a single physical FCP channel among multiple operating system images. This virtualization function is currently available for ESCON and FICON channels and is now available for FCP channels. It offers improved FCP channel utilization due to fewer hardware requirements and can reduce the complexity of physical FCP I/O connectivity.

Program Directed re-IPL
Program Directed re-IPL is designed to enable an operating system to determine how and from where it had been loaded. Further, Program Directed re-IPL may then request that it be reloaded again from the same load device using the same load parameters. In this way, Program Directed re-IPL allows a program running natively in a partition to trigger a re-IPL. This re-IPL is supported for both SCSI and ECKD devices. z/VM 5.3 provides support for guest exploitation.

FICON Link Incident Reporting
FICON Link Incident Reporting is designed to allow an operating system image, without operator intervention, to register for link incident reports, which can improve the ability to capture data for link error analysis.
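The dependence of sustained bandwidth on buffer credits, distance, and link rate can be estimated with a common Fibre Channel rule of thumb. The formula, frame size, and propagation delay below are general approximations and assumptions, not values taken from this document.

```python
# Back-of-the-envelope estimate of the buffer credits needed to keep a FICON/FCP link
# streaming at extended distance. This is a common Fibre Channel approximation, not a
# figure from this document; frame size and fiber propagation delay are assumptions.

import math

def buffer_credits_needed(distance_km: float, link_gbps: float,
                          frame_bytes: int = 2148, fiber_us_per_km: float = 5.0) -> int:
    """Credits must cover the round-trip time divided by one frame's serialization time."""
    frame_time_us = frame_bytes * 8 / (link_gbps * 1000)   # microseconds per full frame
    round_trip_us = 2 * distance_km * fiber_us_per_km
    return max(1, math.ceil(round_trip_us / frame_time_us))

# Example: a 4 Gb/sec link over 10 km of fiber needs on the order of two dozen credits.
print(buffer_credits_needed(distance_km=10, link_gbps=4))   # 24
```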
…configuration tool, along with a starting logical memory size. The configuration tool will then calculate the physical memory required to satisfy the target memory. Should additional physical memory be required, it will be fulfilled with the preplanned memory features.

The preplanned memory feature is offered in 4 gigabyte (GB) increments. The quantity assigned by the configuration tool is the number of 4 GB blocks necessary to increase the physical memory from that required for the starting logical memory to the physical memory required for the target logical configuration (see the sizing sketch below). Activation of any preplanned memory requires the purchase of preplanned memory activation features; one preplanned memory activation feature is required for each preplanned memory feature. You now have the flexibility to activate memory to any logical size offered between the starting and target size.

Service Enhancements
z10 BC service enhancements designed to avoid scheduled outages include:
• Concurrent firmware fixes
• Concurrent driver upgrades
• Concurrent parts replacement
• Concurrent hardware upgrades
• DIMM FRU indicators
• Single processor core checkstop
• Single processor core sparing
• Rebalance PSIFB and I/O fanouts
• Redundant 100 Mb Ethernet service network with VLAN

Environmental Enhancements
Power and cooling discussions have entered the budget planning of every IT environment. As energy prices have risen and utilities have re…
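A minimal sketch of the preplanned-memory sizing arithmetic referenced earlier in this section; the input values are illustrative and would normally come from the configuration tool.

```python
# Minimal sketch of the preplanned-memory sizing described above: the number of 4 GB
# preplanned memory features is the number of 4 GB blocks needed to grow from the physical
# memory required for the starting logical size to that required for the target logical size.
# The physical-memory inputs are illustrative placeholders.

import math

def preplanned_features(starting_physical_gb: int, target_physical_gb: int,
                        block_gb: int = 4) -> int:
    """Each preplanned memory feature (and its activation feature) covers one 4 GB block."""
    delta = max(0, target_physical_gb - starting_physical_gb)
    return math.ceil(delta / block_gb)

print(preplanned_features(starting_physical_gb=40, target_physical_gb=64))   # 6 features
```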
Redundant I/O Interconnect
In the event of a failure or a customer-initiated action, such as the replacement of an HCA/STI fanout card, the z10 BC is designed to provide access to your I/O devices through another HCA/STI to the affected I/O domains. This is exclusive to System z10 and System z9.

Enhanced Driver Maintenance
One of the greatest contributors to downtime during planned outages is Licensed Internal Code (LIC) updates. When properly configured, the z10 BC is designed to permit select planned LIC updates. A new query function has been added to validate LIC EDM requirements in advance, and enhanced programmatic internal controls have been added to help eliminate manual analysis by the service team of certain exception conditions. With the z10 BC, PR/SM code has been enhanced to allow multiple EDM "From" sync points. Automatic apply of EDM licensed internal change requirements is now limited to EDM and the licensed internal code changes update process.

Several reliability, availability, and serviceability (RAS) enhancements have been made to the HMC/SE based on feedback from the System z9 Enhanced Driver Maintenance field experience:
• Changes to better handle intermittent customer network issues
• EDM performance improvements
• New EDM user interface features to allow customers and service personnel to better plan for the EDM
• A new option to check all licensed internal code, which can be executed in advance of the EDM preload or activate

Dynamic Oscillator Switchover
The z10 BC has two oscillator cards: a primary and a backup.
Capacity on Demand: Temporary Capacity
The set of contract documents that support the various Capacity on Demand offerings available for the z10 BC has been completely refreshed. While customers with existing contracts for Capacity Back Up (CBU) and Customer Initiated Upgrade (CIU) On/Off Capacity on Demand (On/Off CoD) may carry those contracts forward to z10 BC machines, new CoD capability and offerings for the z10 BC are supported only by this new contract set. The new contract set is structured in a modular, hierarchical approach. This approach eliminates redundant terms between contract documents, simplifying the contracts for our customers and IBM.

Just-in-time deployment of System z10 BC Capacity on Demand (CoD) is a radical departure from previous System z and zSeries servers. This new architecture allows:
• Up to eight temporary records to be installed on the CPC and active at any given time
• Up to 200 temporary records to be staged on the SE
• Variability in the amount of resources that can be activated per record
• The ability to control and update records independent of each other
• Improved query functions to monitor the state of each record
• The ability to add capabilities to individual records concurrently, eliminating the need for constant ordering of new temporary records for different user scenarios
• Permanent LICCC upgrades to be performed while temporary…
• …network security: z/OS Communications Server introduces new defensive filtering capability. Defensive filters are evaluated ahead of configured IP filters and can be created dynamically, which can provide added protection and minimal disruption of services in the event of an attack (the evaluation order is sketched below).
• z/OS V1.10 also supports RSA key, ISO Format 3 PIN block, 13-digit through 19-digit PAN data, secure key AES, and SHA algorithms.
• Improved productivity: z/OS V1.10 provides improvements in, or new capabilities for, simplifying diagnosis and problem determination, expanded Health Check Services, network and security management, automatic dump and re-IPL capability, as well as overall z/OS, I/O configuration, sysplex, and storage operations.

With z/OS 1.9, IBM delivers functionality that continues to solidify System z leadership as the premier data server. z/OS 1.9 offers enhancements in the areas of security, networking, scalability, availability, application development, integration, and improved economics with more exploitation for specialty engines. A foundational element of the platform is the z/OS tight interaction with the System z hardware and its high level of system integrity.

With z/OS 1.9, IBM introduces:
• A revised and expanded Statement of z/OS System Integrity
• Large Page Support (1 MB)
• Capacity Provisioning
• Support for up to 64 engines in a single image (on the IBM System z10 Enterprise Class, z10 EC model only)
• Simplified and ce…
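To illustrate the evaluation order of defensive filters relative to configured IP filters, here is a small, purely illustrative sketch; the rule format and the default action are assumptions, not the z/OS policy syntax.

```python
# Minimal sketch of the filter-evaluation order described above: dynamically created
# defensive filters are checked before the statically configured IP filters.
# Filter representation and rule syntax are illustrative, not the z/OS policy format.

def evaluate(packet: dict, defensive_filters: list, configured_filters: list) -> str:
    for rule in defensive_filters + configured_filters:   # defensive filters take precedence
        if rule["match"](packet):
            return rule["action"]
    return "permit"   # assumed default for this sketch only

block_attacker = {"match": lambda p: p["src"] == "192.0.2.10", "action": "deny"}
allow_web      = {"match": lambda p: p["dport"] == 443,        "action": "permit"}

print(evaluate({"src": "192.0.2.10", "dport": 443}, [block_attacker], [allow_web]))   # deny
```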
Network Traffic Analyzer
With the large volume and complexity of today's network traffic, the z10 BC offers systems programmers and network administrators the ability to more easily solve network problems. With the introduction of the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on System z, available on the z10 BC, customers have the ability to capture trace/trap data and forward it to z/OS 1.8 tools for easier problem determination and resolution. This function is designed to allow the operating system to control the sniffer trace for the LAN and capture the records into host memory and storage (file systems), using existing host operating system tools to format, edit, and process the sniffer records. OSA-Express Network Traffic Analyzer is exclusive to the z10 BC, z9 BC, z10 EC, and z9 EC; it is applicable to the OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO) and is supported by z/OS.

Dynamic LAN idle for z/OS
Dynamic LAN idle is designed to reduce latency and improve network performance by dynamically adjusting the inbound blocking algorithm. When enabled, the z/OS TCP/IP stack is designed to adjust the inbound blocking algorithm to best match the application requirements. For latency-sensitive applications, the blocking algorithm is modified to be latency sensitive. For streaming (throughput-sensitive) applications, the blocking algorithm…
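The idea of dynamically adjusting the inbound blocking algorithm can be sketched as follows; the thresholds and the workload classification are assumptions for illustration and do not reflect the actual z/OS Communications Server heuristics.

```python
# Minimal, illustrative sketch of the idea behind dynamic inbound blocking: wait a short
# time to coalesce inbound packets for streaming workloads, and interrupt almost immediately
# for latency-sensitive workloads. All values here are invented for the example.

def inbound_blocking_time_us(avg_interarrival_us: float, bulk_transfer: bool) -> int:
    if not bulk_transfer or avg_interarrival_us > 500:
        return 0          # latency-sensitive: present packets to the stack right away
    return 250            # throughput-sensitive: block briefly to batch packets per interrupt

print(inbound_blocking_time_us(avg_interarrival_us=40, bulk_transfer=True))    # 250
print(inbound_blocking_time_us(avg_interarrival_us=40, bulk_transfer=False))   # 0
```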