
Dell PowerEdge M I/O Aggregator Configuration Manual


Contents

1. Dell# show vlt brief
   VLT Domain Brief
     Domain ID:                        1
     Role:                             Secondary
     Role Priority:                    32768
     ICL Link Status:                  Up
     HeartBeat Status:                 Up
     VLT Peer Status:                  Up
     Local Unit Id:                    1
     Version:                          6(2)
     Local System MAC address:         8 5b1 56 0e b1 7
     Remote System MAC address:        00:01:e8:e1:e1:c3
     Configured System MAC address:    00:01:09:06:06:06
     Remote system version:            6(2)
     Delay-Restore timer:              90 seconds
     Peer-Routing:                     Disabled
     Peer-Routing-Timeout timer:       0 seconds
     Multicast peer-routing timeout:   150 seconds
   Dell#
   Dell# show vlt detail
   Local LAG Id   Peer LAG Id   Local Status   Peer Status   Active VLANs
   128            128           UP             UP            1
   Dell#
6. Show the running configuration on this port-channel:
   Dell(conf-if-po-128)# show config
   !
   interface Port-channel 128
    portmode hybrid
    switchport
    vlan tagged 10,15
    vlan untagged 20
    shutdown
   Dell(conf-if-po-128)# end
   Dell#
7. Show the VLAN configurations:
   Dell# show vlan
   Codes: * - Default VLAN, G - GVRP VLANs, R - Remote Port Mirroring VLANs, P - Primary, C - Community, I - Isolated, O - Openflow
   Q: U - Untagged, T - Tagged
      x - Dot1x untagged, X - Dot1x tagged
      o - OpenFlow untagged, O - OpenFlow tagged
      G - GVRP tagged, M - Vlan-stack, H - VSN tagged
      i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

       NUM    Status    Description    Q Ports
   *   1      Active                   U Te 0/33
       10     Active                   T Po128(Te 0/41,42)
                                       T Te 0/1
2.     11     Active                   T Po128(Te 0/41,42)
                                       T Te 0/1
       12     Active                   T Po128(Te 0/41,42)
                                       T Te 0/1
       13     Active                   T Po128(Te 0/41,42)
                                       T Te 0/1
       14     Active                   T Po128(Te 0/41,42)
                                       T Te 0/1
       15     Active                   T Po128(Te 0/41,42)
                                       T Te 0/1
       20     Active                   U Po128(Te 0/41,42)
                                       U Te 0/1
   Dell#
You can remove the inactive VLANs that have no member ports using the following command:
   Dell# configure
   Dell(conf)# no interface vlan <vlan-id>
   <vlan-id> — Inactive VLAN with no member ports
You can remove the tagged VLANs using the no vlan tagged <VLAN-RANGE> command. You can remove the untagged VLANs using the no vlan untagged command in the physical port or port-channel, as shown in the sketch after this item.

Stacking in PMUX Mode
PMUX stacking allows the stacking of two or more IOA units, grouping multiple units for high availability. IOA supports a maximum of six stacking units.
NOTE: Prior to configuring the stack-group, ensure the stacking ports are connected and in 40G native mode.
Configure stack groups on all stack units:
   Dell# configure
   Dell(conf)# stack-unit 0 stack-group 0
   Dell(conf)# 00:37:46: %STKUNIT0-M:CP %IFMGR-6-STACK_PORTS_ADDED: Ports Fo 0/33 have been configured as stacking ports. Please save and reset stack-unit 0 for config to take effect
   Dell(conf)# stack-unit 0 stack-group 1
   Dell(conf)# 00:37:57: %STKUNIT0-M:CP %IFMGR-6-STACK_PORTS_ADDED: Ports Fo 0/37 have been configured as stacking ports. Please save and reset stack-unit 0 for config to take effect
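As a minimal sketch of the VLAN removal commands named above, assuming port-channel 128 still carries tagged VLANs 10-15 and untagged VLAN 20 as in the earlier example (the VLAN numbers are illustrative):

   Dell(conf)# interface port-channel 128
   Dell(conf-if-po-128)# no vlan tagged 10-15
   Dell(conf-if-po-128)# no vlan untagged
   Dell(conf-if-po-128)# end
   Dell# configure
   Dell(conf)# no interface vlan 15

Once the tagged and untagged memberships are removed, any VLAN left with no member ports becomes inactive and can be deleted with no interface vlan, as shown on the last line.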
3. (SNMPv2 trap output showing %IFMGR OSTATE_DN / OSTATE_UP interface state changes)
The Entity MIB provides a mechanism for presenting hierarchies of physical entities using SNMP tables. The Entity MIB contains the following groups, which describe the physical elements and logical elements of a managed system. The following tables are implemented for the Aggregator.
- Physical Entity: A physical entity or physical component represents an identifiable physical resource within a managed system. Zero or more logical entities may utilize a physical resource at any given time. Determining which physical components are represented by an agent in the EntPhysicalTable is an implementation-specific matter. Typically, physical resources (for example, communications ports, backplanes, sensors, daughter cards, power supplies, and the overall chassis), which you can manage via functions associated with one or more logical entities, are included in the MIB.
- Containment Tree: Each physical component may be modeled as contained within another physical component. A containment tree is the conceptual sequence of entPhysicalIndex values that uniquely specifies the exact physical location of a physical component within the managed system. It is generated by following and recording each entPhysicalContainedIn instance up the tree toward the root.
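For example, a standard net-snmp walk of the entPhysicalDescr column lists the chassis, modules, and ports that the Aggregator reports in the Entity MIB. This is only a sketch: the management address 10.11.12.13 and the community string public are placeholders, not defaults from this manual.

   # Walk the Entity MIB physical-description column on the Aggregator
   snmpwalk -v 2c -c public 10.11.12.13 ENTITY-MIB::entPhysicalDescr

Following the entPhysicalContainedIn values returned for each index reproduces the containment tree described above.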
4. show interfaces port channel 1 Command Example Dell show interfaces port channel 1 Port channel 1 is up line protocol is up Created by LACP protocol Hardware address is 00 01 e8 el el c1 Current address is 00 01 e8 el el cl Interface index is 1107755009 nimum number of links to bring Port channel up is 1 ternet address is not set de of IP Address Assignment NONE n Y In O DHCP Client ID lagl000le8elelcl T m e U 12000 bytes IP MTU 11982 bytes LineSpeed 10000 Mbit mbers in this channel Te 0 12 U ARP type ARPA ARP Timeout 04 00 00 Last clearing of show interface counters 00 12 41 Queueing strategy fifo Input Statistics 112 packets 18161 bytes 0 64 byte pkts 46 over 64 byte pkts 37 over 127 byte pkts 29 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 59 Multicasts 53 Broadcasts 0 runts 0 giants 0 throttles O CRC O overrun 0 discarded Output Statistics Link Aggregation 135 135 packets 19315 bytes 0 underruns O 64 byte pkts 79 over 64 byte pkts 32 over 127 byte pkts 24 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 93 Multicasts 42 Broadcasts 0 Unicasts 0 throttles 0 discarded 0 collisions 0 wreddrops Rate info interval 299 seconds Input 00 00 Mbits sec 0 packets sec 0 00 of line rate Output 00 00 Mbits sec 0 packets sec 0 00 of line rate Time since last interface status change 00
5. Dell show ip igmp groups Total Number of Groups 2 IGMP Connected Group Membership Group Address Interface Mode Uptime Expires Last Reporter 226 0 0 1 Vlan 1500 INCLUDE 00 00 19 Never 1 1 1 2 226 0 0 1 Vlan 1600 INCLUDE 00 00 02 Never 1 1 1 2 Dell show ip igmp groups detail Interface Vlan 1500 Group 226 0 0 1 Uptime 00 00 21 Expires Never Router mode INCLUDE Last reporter a ees ee ara Last reporter mode INCLUDE Last report received IS INCL Group source list 90 Internet Group Management Protocol IGMP Source address Pale M2 Uptime Expires 00 00 21 00 01 48 Member Ports Po 1 Interface Vlan 1600 Group 22 6 3070 1 Uptime 00 00 04 Expires Never Router mode INCLUDE Last reporter Lo11 2 Last reporter mode INCLUDE Last report received IS INCL Source address 1 14122 Member Ports Dell Group source list Uptime Expires 00 00 04 00 02 05 Po 1 show ip igmp interface Command Example Dell show ip igmp interface Vlan 2 is up line protocol is down Inbound IGMP access group is not set Interface IGMP group join rate limit is not set IGMP snooping IGMP Snooping IGMP Snooping IGMP Snooping IGMP snooping IGMP snooping Vlan 3 is enabled on interface query interval is 60 seconds querier timeout is 125 seconds last member query response interval is 1000 ms fast leave is disabled on this interface querier is disabled on this interface is
6. Port Monitoring
In the following example, the host and server are exchanging traffic which passes through the uplink interface 1/1. Port 1/1 is the monitored port and port 1/42 is the destination port, which is configured to only monitor traffic received on tengigabitethernet 1/1 (host-originated traffic).

Figure 24. Port Monitoring Example (host, server, and sniffer attached to the monitored and monitoring ports)

Important Points to Remember
- Port monitoring is supported on physical ports only; virtual local area network (VLAN) and port-channel interfaces do not support port monitoring.
- The monitored (source, MD) and monitoring (destination, MG) ports must be on the same switch.
- The monitored (source) interface must be a server-facing interface in the format slot/port, where the valid slot number is 0 and server-facing port numbers are from 1 to 8.
- The destination interface must be an uplink port (ports 9 to 12).
- In general, a monitoring port should have no ip address and no shutdown as the only configuration; the Dell Networking OS permits a limited set of commands for monitoring ports. You can display these commands using the CLI help.
- A monitoring port may not be a member of a VLAN.
- There may only be one destination port in a monitoring session.
- A source port (MD) can only be monitored by one destination port (MG). If you try to assign a monitored port to more than one monitoring port, the configuration is rejected. (An example session is sketched below.)
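A minimal sketch of a port-monitoring session that follows these rules, mirroring traffic received on server-facing port 0/1 to uplink port 0/9 (the port numbers are illustrative, taken from the session examples elsewhere in this guide):

   monitor session 0
    source TenGigabitEthernet 0/1 destination TenGigabitEthernet 0/9 direction rx

   Dell# show monitor session

The show monitor session output should then list the MD/MG pairing with direction rx, as in the session table shown later in this manual.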
7. Dell(conf)# interface tengigabitEthernet 0/0
   Dell(conf-if-te-0/0)# fcoe-map SAN_FABRIC_A
   Dell(conf)# interface port-channel 3
   Dell(conf-if-po-3)# fcoe-map SAN_FABRIC_A
   Dell(conf)# interface fortygigabitEthernet 0/48
   Dell(conf-if-fo-0/48)# fcoe-map SAN_FABRIC_A
3. Enable the port for FCoE transmission using the map settings: no shutdown (INTERFACE mode).

Applying an FCoE Map on Fabric-facing FC Ports
The FC ports of the MXL 10/40GbE Switch and the M I/O Aggregator with the FC Flex IO module are configured by default to operate in N_Port mode to connect to an F_Port on an FC switch in a fabric. You can apply only one FCoE map on an FC port. When you apply an FCoE map on a fabric-facing FC port, the FC port becomes part of the FCoE fabric whose settings in the FCoE map are configured on the port and exported to downstream server CNA ports.
Each FC port on the MXL 10/40GbE Switch or M I/O Aggregator with the FC Flex IO module is associated with an Ethernet MAC address (FCF MAC address). When you enable a fabric-facing FC port, the FCoE map applied to the port starts sending FIP multicast advertisements, using the parameters in the FCoE map, over server-facing Ethernet ports. A server sees the FC port, with its applied FCoE map, as an FCF port.

Step  Task                                                          Command                              Command Mode
1     Configure a fabric-facing FC port.                            interface fibrechannel slot/port     CONFIGURATION
2     Apply the FCoE and FC fabric configurations in an FCoE map    fabric map-name                      INTERFACE
      to the port. Repeat this step to apply the FCoE map to more than one FC port.
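Putting these steps together, a sketch for one fabric-facing FC port, assuming an FC Flex IO port 0/9 and the FCoE map SAN_FABRIC_A defined elsewhere in this chapter (the port number is illustrative):

   interface fibrechannel 0/9
    fabric SAN_FABRIC_A
    no shutdown

After no shutdown, the port begins sending FIP multicast advertisements toward server-facing ports using the parameters of SAN_FABRIC_A, as described above.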
8. interface TenGigabitEthernet 0 1 no ip address speed 1000 duplex full no shutdown Interfaces 111 Setting Auto Negotiation Options The negotiation auto command provides a mode option for configuring an individual port to forced master forced slave after you enable auto negotiation A CAUTION Ensure that only one end of the node is configured as forced master and the other is configured as forced slave If both are configured the same that is both as forced master or both as forced slave the show interface command flaps between an auto neg error and forced master slave states Auto Negotiation Speed and Duplex Settings on Different Optics Command speed 100 speed auto speed 1000 speed 10000 negotiation auto 112 Mode interface config mode interface config mode interface config mode interface config mode interface config mode 10GbaseT 10GSFP 1G SFP optics module Supported Supported Supported Supported Supported optics Not supported Error message is thrown Error Speed 100 not supported on this interface config ignored Te 0 49 Not supported Supported Supported Not supported Should some error message Not supported Error message is thrown Error Speed 100 not supported on this interface config ignored Te 0 49 Not supported Supported Not Supported Not supported Copper SFP Comments 10
9. 0 0 SEPT SFP AUTO Good 0 1 OSFP QSFP AUTO Good Mismatch Example of the show system stack unit stack group configured Command Dell show system stack unit 1 stack group configured Configured stack groups in stack unit 1 Example of the show system stack unit stack group Command Dell show system stack unit 1 stack group Stack group Ports 200 Stacking 5 0 53 Dell Example of the show system stack ports ring Command Dell show system stack ports Topology Ring Interface Connection Link Speed Admin Link Trunk Gb s Status Status Group 0 33 1 33 40 up up 0 37 1 37 40 up up 1 33 0 33 40 up up 1131 0 37 40 up up Example of the show system stack ports daisy chain Command Dell show system stack ports Topology Daisy Chain Interface Connection Link Speed Admin Link Trunk Gb s Status Status Group 0 33 40 up down 0 37 1 37 40 up up 1 33 40 up down 1 37 0 37 40 up up Troubleshooting a Switch Stack To perform troubleshooting operations on a switch stack use the following commands on the master switch 1 Displays the status of stacked ports on stack units show system stack ports 2 Displays the master standby unit status failover configuration and result of the last master standby synchronization allows you to verify the readiness for a stack failover show redundancy 3 Displays input and output flow statistics on a stacked port show hardware stack unit unit number stack port port
10. ipc — enables debugging only for IPC events.
    rx — enables debugging only for incoming packet traffic.
Command: debug fip-snooping [all | acl | error | ifm | info | ipc | rx]    Command Mode: EXEC PRIVILEGE
To turn off debugging event messages, enter the no debug fip-snooping command.

7 Internet Group Management Protocol (IGMP)
On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command.
Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The Internet Group Management Protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group. Multicast routing protocols (such as Protocol Independent Multicast, PIM) use the information in IGMP messages to discover which groups are active and to populate the multicast routing table.
This chapter contains the following sections:
- IGMP Overview
- IGMP Snooping

IGMP Overview
IGMP has three versions. Version 3 obsoletes and is backwards-compatible with version 2; version 2 obsoletes version 1.

IGMP Version 2
IGMP version 2 improves upon version 1 by specifying IGMP Leave messages, which allow hosts to notify routers that they no longer care about traffic for a particular group. Leave messages reduce the amount of time that the router takes to stop forwarding traffic for a group to a subnet.
11. Application Priority TLV Statistics: Output Appln Priority TLV pkts; Application Priority TLV Statistics: Error Appln Priority TLV pkts.
Descriptions (continued from the DCBx statistics table):
- Acknowledgement number transmitted in Control TLVs
- Current operational state of the DCBx protocol (ACK or IN-SYNC)
- DCBx version advertised in Control TLVs received from the peer device
- Highest DCBx version supported in Control TLVs received from the peer device
- Sequence number transmitted in Control TLVs received from the peer device
- Acknowledgement number transmitted in Control TLVs received from the peer device
- Number of DCBx frames sent from the local port
- Number of DCBx frames received from the remote peer port
- Number of DCBx frames with errors received
- Number of unrecognizable DCBx frames received
- Number of PFC TLVs received
- Number of PFC TLVs transmitted
- Number of PFC error packets received
- Number of PFC pause frames transmitted
- Number of PFC pause frames received
- Number of PG TLVs received
- Number of PG TLVs transmitted
- Number of PG error packets received
- Number of Application TLVs received
- Number of Application TLVs transmitted
- Number of Application TLV error packets received

Hierarchical Scheduling in ETS Output Policies
ETS supports up to three levels of hierarchical scheduling. For example, you can apply ETS output policies with the following configurations:
- Priority group 1
- Priority group 2
- Priority group 3
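A sketch of a DCB map that defines multiple priority groups of the kind referenced above, shown in running-config style. The group numbers, bandwidth percentages, and dot1p-to-group mapping are illustrative, not values mandated by this manual:

   dcb-map TRIPLE_PG
    priority-group 0 bandwidth 60 pfc off
    priority-group 1 bandwidth 30 pfc on
    priority-group 2 bandwidth 10 pfc off
    priority-pgid 0 0 0 1 2 2 0 0

The priority-pgid line assigns dot1p 3 to the lossless group 1 and dot1p 4-5 to group 2; the bandwidth percentages of all groups must total 100.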
12. Dell Networking systems support a maximum of 8,000 total neighbors per system. If the number of interfaces multiplied by eight exceeds the maximum, the system does not configure more than 8,000.
- INTERFACE-level configurations override all CONFIGURATION-level configurations.
- LLDP is not hitless.

LLDP Compatibility
- Spanning tree and Force10 ring protocol "blocked" ports allow LLDPDUs.
- 802.1X controlled ports do not allow LLDPDUs until the connected device is authenticated.

CONFIGURATION versus INTERFACE Configurations
All LLDP configuration commands are available in PROTOCOL LLDP mode, which is a sub-mode of CONFIGURATION mode and INTERFACE mode.
- Configurations made at the CONFIGURATION level are global; that is, they affect all interfaces on the system.
- Configurations made at the INTERFACE level affect only the specific interface; they override CONFIGURATION-level configurations.

Example of the protocol lldp Command (CONFIGURATION Level)
R1(conf)# protocol lldp
R1(conf-lldp)# ?
advertise     Advertise TLVs
disable       Disable LLDP protocol globally
end           Exit from configuration mode
exit          Exit from LLDP configuration mode
hello         LLDP hello configuration
mode          LLDP mode configuration (default = rx and tx)
multiplier    LLDP multiplier configuration
no            Negate a command or set its defaults
show          Show LLDP configuration
Dell(conf-lldp)# exit
Dell(conf)# interface ...
13. N_Port identifier virtualization (NPIV).

I/O Aggregator (IOA) Programmable MUX (PMUX) Mode
IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth.
By default, the IOA starts up in IOA Standalone mode. You can change to PMUX mode by executing the following commands and then reloading the IOA. After the IOA reboots, it operates in PMUX mode. PMUX mode supports both stacking and VLT operations.

Configuring and Changing to PMUX Mode
After the IOA is operational in the default Standalone mode:
1. Connect the terminal to the console port on the IOA to access the CLI, and enter the following commands:
   Login: username
   Password: *****
   Dell> enable
   Dell# show system stack-unit 0 iom-mode
   Unit    Boot-Mode      Next-Boot
   ---------------------------------
   0       standalone     standalone
   Dell#
2. Change the IOA mode to PMUX mode:
   Dell(conf)# stack-unit 0 iom-mode programmable-mux
   where stack-unit 0 defines the default stack-unit number.
3. Delete the startup configuration file:
   Dell# delete startup-config
4. Reboot the IOA by entering the reload command:
   Dell# reload
Repeat the above steps for each member of the IOA in PMUX mode. After the system is up, you can see the PMUX mode status:
   Dell# show system stack-unit 0 iom-mode
   Unit    Boot-Mode            Next-Boot
   ----------------------------------------
   0       programmable-mux
14. - The VLT Unit Id configured on both the VLT peers is identical.
- The VLT System MAC or Unit Id is configured on only one of the VLT peers.
- The VLT domain ID is not the same on both peers.
If the VLT peering is already established, changing the System MAC or Unit Id does not cause the VLT peering to go down. Also, if the VLT peering is already established and the VLT Unit Id or System MAC are configured on both peers, then changing the CLI configuration of the VLT Unit Id or System MAC is rejected if either of the following becomes true:
- After making the CLI configuration change, the VLT Unit Id becomes identical on both peers.
- After making the CLI configuration change, the VLT System MAC does not match on both peers.
When the VLT peering is already established, you can remove the VLT Unit Id or System MAC configuration from either or both peers. However, removing configuration settings can cause the VLT ports to go down if you configure the Unit Id or System MAC on only one of the VLT peers. (A sketch of where these values are configured appears below.)

Overview
VLT allows physical links between two chassis to appear as a single virtual link to the network core or other switches such as Edge, Access, or top-of-rack (ToR). VLT reduces the role of spanning tree protocols (STPs) by allowing link aggregation group (LAG) terminations on two separate distribution or core switches and by supporting a loop-free topology. To prevent the initial loop that may occur prior to VLT being established, ...
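For reference, a minimal sketch of where the Unit Id and System MAC live in the VLT domain configuration, shown in running-config style. The domain number, MAC address, and unit IDs are placeholders, not defaults from this manual; the System MAC must match on both peers, while each peer uses a different unit-id:

   vlt domain 1
    peer-link port-channel 127
    system-mac mac-address 00:01:02:03:04:05
    unit-id 0

The second peer would carry the same domain number and system-mac but unit-id 1, consistent with the rules listed above.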
15. To configure the uplink speed of the member interfaces in a LAG bundle to be 40 Gigabit Ethernet for an Aggregator that operates in standalone, stacking, or VLT mode, perform the following steps.
Specify the uplink speed as 40 GbE. By default, the uplink speed of the LAG bundle is set as 10 GbE. You cannot configure the uplink speed if the Aggregator operates in programmable MUX (PMUX) mode. The stack-unit unit-number iom-mode {stack | standalone | vlt} 40G command is available in the CMC interface and the CLI interface.
CONFIGURATION mode: stack-unit unit-number iom-mode {stack | standalone | vlt} 40G
You can use the show system stack-unit unit-number iom-uplink-speed command to view the uplink speed of the LAG bundles configured on the Flex IO modules installed on the Aggregator. The value under the Boot-speed field in the output of the show system stack-unit command indicates the uplink speed that is currently effective on the LAG bundles, whereas the value under the Next-Boot field indicates the uplink speed that applies to the LAG bundle after the next reboot of the switch. Depending on the uplink speed configured, the fan-out setting is applied accordingly during the booting of the switch. The following example displays the output of the show system stack-unit unit-number iom-uplink-speed command with the Boot-speed field contained in it. (The full configure-save-reload sequence is sketched below.)
Dell# show system stack-unit 0 iom-uplink-speed
Unit    Boot-speed    Next-Boot
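A sketch of the sequence on stack-unit 0 in standalone mode; the save and reload steps are shown because the new fan-out setting only takes effect at the next boot, as noted above:

   Dell(conf)# stack-unit 0 iom-mode standalone 40G
   Dell(conf)# end
   Dell# copy running-config startup-config
   Dell# reload
   ! after the switch reboots
   Dell# show system stack-unit 0 iom-uplink-speed

Until the reload, the Boot-speed column continues to show the currently effective speed and Next-Boot shows 40G.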
16. Dell# show io-aggregator isolated-networks
    Isolated Network Enabled VLANs: 5, 10

11 Link Aggregation
Unlike the IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs; you can provision multiple uplink LAGs. The I/O Aggregator auto-configures link aggregation groups (LAGs) as follows:
- All uplink ports are automatically configured in a single port channel (LAG 128).
- Server-facing LAGs are automatically configured if you configure the server for link aggregation control protocol (LACP)-based NIC teaming (Network Interface Controller (NIC) Teaming).
No manual configuration is required to configure Aggregator ports in the uplink or a server-facing LAG.
NOTE: Static LAGs are not supported on the SMUX Aggregator.
NOTE: In order to avoid loops, only disjoint VLANs are allowed between the uplink ports/uplink LAGs, and uplink-to-uplink switching is disabled.
Supported Modes: Standalone, VLT, PMUX, Stacking

How the LACP is Implemented on an Aggregator
LACP provides a means for two systems (also called partner systems) to exchange information through dynamic negotiation to aggregate two or more ports with common physical characteristics to form a link aggregation group. (A PMUX uplink-LAG sketch follows this item.)
NOTE: A link aggregation group is referred to as a port channel by the Dell Networking OS.
A LAG provides both load sharing and port redundancy across stack units.
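A sketch of provisioning one additional uplink LAG in PMUX mode, shown in running-config style. The uplink port, LAG number, and VLAN are illustrative; the portmode hybrid and switchport settings mirror the port-channel configuration shown earlier in this guide:

   interface TenGigabitEthernet 0/41
    port-channel-protocol LACP
     port-channel 2 mode active
    no shutdown
   !
   interface Port-channel 2
    portmode hybrid
    switchport
    vlan tagged 100
    no shutdown

Because uplink-to-uplink switching is disabled, keep the VLANs on this LAG disjoint from those carried on LAG 128, per the NOTE above.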
17. FC Flex IO Modules
Command: interface {tengigabitEthernet slot/port | fortygigabitEthernet slot/port} (CONFIGURATION mode); dcb-map name (INTERFACE mode)

Step  Task                              Command / Example
      Apply a DCB map to a port.        Dell(conf)# interface tengigabitEthernet 0/0
                                        Dell(conf-if-te-0/0)# dcb-map SAN_DCB1
      Repeat this step to apply a DCB map to more than one port or port channel.

Creating an FCoE VLAN
Create a dedicated VLAN to send and receive Fibre Channel traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards decapsulated FC frames over FC links to SAN switches in a specified fabric.

Step  Task                                          Command                   Command Mode
1     Create the dedicated VLAN for FCoE traffic.   interface vlan vlan-id    CONFIGURATION
      Range: 2-4094. VLAN 1002 is commonly used to transmit FCoE traffic.

When you apply an FCoE map to an Ethernet port (Applying an FCoE Map on Server-facing Ethernet Ports), the port is automatically configured as a tagged member of the FCoE VLAN.

Creating an FCoE Map
An FCoE map consists of:
- An association between the dedicated VLAN used to carry FCoE traffic and the SAN fabric where the storage arrays are installed. Use a separate FCoE VLAN for each fabric to which the FCoE traffic is forwarded. Any non-FCoE traffic sent on a dedicated FCoE VLAN is dropped.
- The FC-MAP value used to generate the fabric-provided MAC address (FPMA). The FPMA is used by servers to transmit FCoE traffic to the fabric.
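A sketch of an FCoE map tying the dedicated VLAN 1002 to a fabric, shown in running-config style. The map name and fabric-id are illustrative; the fc-map value 0x0EFC00 is the one shown in the FIP snooping examples elsewhere in this guide:

   interface vlan 1002
   !
   fcoe-map SAN_FABRIC_A
    fabric-id 1002 vlan 1002
    fc-map 0efc00

Applying SAN_FABRIC_A to a server-facing Ethernet port then makes that port a tagged member of VLAN 1002 automatically, as described above.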
18. ISID 400001370000 Session 1 iSCSI Optimization 119 Target iqn 2001 05 com equallogic 0 8a2a0906 0f60c2002 0360018428d448c94 iom011 Initiator iqn 1991 05 com microsoft win x918v27yajg ISID 400001370000 show iscsi sessions detailed Command Example Dell show iscsi sessions detailed Session 0 arget ign 2010 11 com ixia ixload iscsi TGl Initiator iqn 2010 11 com ixia ixload initiator iscsi 2c Up Time 00 00 01 28 DD HH MM SS Time for aging out 00 00 09 34 DD HH MM SS ISID 806978696102 Initiator Initiator Target Target Connection IP Address TCP Port IP Address TCPPort ID 10 10 0 44 33345 10 10 0 101 3260 0 Session 1 arget ign 2010 11 com ixia ixload iscsi TGl Initiator iqn 2010 11 com ixia ixload initiator iscsi 35 Up Time 00 00 01 22 DD HH MM SS Time for aging out 00 00 09 31 DD HH MM SS ISID 806978696102 Initiator Initiator Target Target Connection ID IP Address TCP Port IP Address TCPPort 10 10 0 53 33432 10 210720 LOT 3260 0 120 iSCSI Optimization 10 Isolated Networks for Aggregators An Isolated Network is an environment in which servers can only communicate with the uplink interfaces and not with each other even though they are part of same VLAN If the servers in the same chassis need to communicate with each other it requires a non isolated network connectivity between them or it needs to be routed in the TOR Isolated Networks can be enabled on per VLAN basis If a VLAN is s
19. Unit UnitType Status ReqTyp CurTyp Version Ports 0 anagement online I O Aggregator I O Aggregator 8 3 17 46 56 1 Standby online I O Aggregator I O Aggregator 8 3 17 46 56 2 ember not present 3 ember not present 4 ember not present 5 ember not present Example of the show system Command Dell show system Stack MAC 00 le c9 1 00 9b Reload Type normal reload Next boot normal reload Unit 0 Unit Type Management Unit Status online Next Boot online Required Type I O Aggregator 34 port GE TE XL Stacking 199 Current Type I O Aggregator 34 port GE TE XL aster priority 0 Hardware Rev Num Ports 56 Up Time 2 hr 41 min FTOS Version 8 3 17 46 Jumbo Capable yes POE Capable no Burned In MAC 00 le c9 f1 00 9b No Of MACs 3 Unit 1 Unit Type Standby Unit Status online Next Boot online Required Type I O Aggregator 34 port GE TE XL Current Type I O Aggregator 34 port GE TE XL aster priority 0 Hardware Rev Num Ports 56 Up Time 2 hr 27 min FTOS Version 8 3 17 46 Jumbo Capable yes POE Capable no Burned In MAC 00 1le c9 f1 04 82 No Of MACs 3 Unit 2 Unit Type Member Unit Status not present Required Type Example of the show inventory optional module Command Dell show inventory optional module Unit Slot Expected Inserted Next Boot Power
20. clock
Dell(conf)# cl
Entering a space after a keyword lists all of the keywords that can follow the specified keyword:
Dell(conf)# clock ?
summer-time    Configure summer (daylight savings) time
timezone       Configure time zone
Dell(conf)# clock

Entering and Editing Commands
Notes for entering commands:
- The CLI is not case-sensitive.
- You can enter partial CLI keywords. Enter the minimum number of letters to uniquely identify a command. For example, you cannot enter cl as a partial keyword because both the clock and class-map commands begin with the letters "cl". You can enter clo, however, as a partial keyword, because only one command begins with those three letters.
- The TAB key auto-completes keywords in commands. Enter the minimum number of letters to uniquely identify a command.
- The UP and DOWN arrow keys display previously entered commands (refer to Command History).
- The BACKSPACE and DELETE keys erase the previous letter.
- Key combinations are available to move quickly across the command line. The following table describes these shortcut key combinations.

Short-Cut Key Combination    Action
CNTL A                       Moves the cursor to the beginning of the command line.
CNTL B                       Moves the cursor back one character.
CNTL D                       Deletes the character at the cursor.
CNTL E                       Moves the cursor to the end of the line.
CNTL F                       Moves the cursor forward one character.
CNTL I                       Completes a keyword.
CNTL K                       Deletes all characters from the cursor to the end of the command line.
21. m Remote Chassis ID Subtype Mac address 4 Remote Chassis ID 00 01 e8 06 95 3e Remote Port Subtype Interface name 5 Remote Port ID GigabitEthernet 2 11 Local Port ID GigabitEthernet 1 21 Locally assigned remote Neighbor Index 4 Remote TTL 120 Information valid for next 120 seconds ime since last information change of this neighbor 01 50 16 Remote MTU 1554 Remote System Desc Dell Forcel0 Networks Real Time Operating System Software Dell Forcel0 Operating System Version 1 0 Dell Forcel0 App lication Software Version 7 5 1 0 Copyright c 19 99 Build Time Thu Aug 9 01 05 51 PDT 2007 Existing System Capabilities Repeater Bridge Router Enabled System Capabilities Repeater Bridge Router Remote Port Vlan ID 1 Port and Protocol Vlan ID 1 Capability Supported Status Enabled 242 PMUX Mode of the IO Aggregator Configuring LLDPDU Intervals LLDPDUs are transmitted periodically the default interval is 30 seconds To configure LLDPDU intervals use the following command Configure a non default transmit interval CONFIGURATION mode or INTERFACE mode hello Example of Viewing LLDPDU Intervals Dell conf Dell conf protocol lldp Dell conf lldp show config l protocol lldp Dell conf lldp hello lt 5 180 gt Hello interval in seconds default 30 Dell conf 11dp hello 10 Dell conf lldp show config protocol lldp hello 10 Dell conf 11d Del
22. 0 5 cpu data plane statistics This view provides insight into the packet types entering the CPU to see whether CPU bound traffic is internal IPC traffic or network control traffic which the CPU must process View the modular packet buffers details per stack unit and the mode of allocation EXEC Privilege mode show hardware stack unit 0 5 buffer total buffer e View the modular packet buffers details per unit and the mode of allocation EXEC Privilege mode show hardware stack unit 0 5 buffer unit 0 1 total buffer e View the forwarding plane statistics containing the packet buffer usage per port per stack unit EXEC Privilege mode show hardware stack unit 0 5 buffer unit 0 1 port 1 64 all buffer info e View the forwarding plane statistics containing the packet buffer statistics per COS per port EXEC Privilege mode show hardware stack unit 0 5 buffer unit 0 1 port 1 64 queue 0 14 all buffer info e View input and output statistics on the party bus which carries inter process communication traffic between CPUs EXEC Privilege mode show hardware stack unit 0 5 cpu party bus statistics View the ingress and egress internal packet drop counters MAC counters drop and FP packet drops for the stack unit on per port basis EXEC Privilege mode show hardware stack unit 0 5 drops unit 0 0 port 33 56 Debugging and Diagnostics 293 This view helps identifying the stack unit port pipe port that m
23. Configuring Priority-Based Flow Control
    Enhanced Transmission Selection
    Configuring Enhanced Transmission Selection
    Configuring DCB Maps and its Attributes
    DCB Map: Configuration Procedure
    Important Points to Remember
    Applying a DCB Map on a Port
    Configuring PFC without a DCB Map
    Configuring Lossless Queues
    Data Center Bridging Exchange Protocol (DCBx)
    Data Center Bridging in a Traffic Flow
    Enabling Data Center Bridging
    Data Center Bridging: Auto-DCB-Enable Mode
    QoS dot1p Traffic Classification and Queue Assignment
    How Priority-Based Flow Control is Implemented
    How Enhanced Transmission Selection is Implemented
    ETS Operation with DCBx
    Bandwidth Allocation for DCBx CIN
    DCBx Operation
    DCBx Port Roles
    DCB Configuration Exchange
24. On DCB (ETS-enabled) interfaces, traffic destined to a queue that is not mapped to any dot1p priority is dropped.

dot1p Value in the Incoming Frame    Egress Queue Assignment
0                                    0
1                                    0
2                                    0
3                                    1
4                                    2
5                                    3
6                                    3
7                                    3

How Priority-Based Flow Control is Implemented
Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface, and is enabled by default. As an enhancement to the existing Ethernet pause mechanism, PFC stops traffic transmission for specified priorities (CoS values) without impacting other priority classes. Different traffic types are assigned to different priority classes. When traffic congestion occurs, PFC sends a pause frame to a peer device with the CoS priority values of the traffic that needs to be stopped. DCBx provides the link-level exchange of PFC parameters between peer devices. PFC creates zero-loss links for SAN traffic that requires no-drop service, while at the same time retaining packet-drop congestion management for LAN traffic.
PFC is implemented on an Aggregator as follows:
- If DCB is enabled, as soon as a DCB policy with PFC is applied on an interface, DCBx starts exchanging information with PFC-enabled peers. The IEEE 802.1Qbb, CEE, and CIN versions of the PFC TLV are supported. DCBx also validates PFC configurations received in TLVs from peer devices.
25. Dell Networking OS supports a maximum of 8,000 total neighbors per system. If the number of interfaces multiplied by eight exceeds the maximum, the system does not configure more than 8,000.
- LLDP is not hitless.

Viewing the LLDP Configuration
To view the LLDP configuration, use the following command.
- Display the LLDP configuration. CONFIGURATION or INTERFACE mode: show config

Example of Viewing LLDP Global Configurations
R1(conf)# protocol lldp
R1(conf-lldp)# show config
!
protocol lldp
 advertise dot3-tlv max-frame-size
 advertise management-tlv system-capabilities system-description
 hello 10
 no disable
R1(conf-lldp)#

Example of Viewing LLDP Interface Configurations
R1(conf-lldp)# exit
R1(conf)# interface tengigabitethernet 0/3
R1(conf-if-te-0/3)# show config
!
interface TenGigabitEthernet 0/3
 no ip address
 switchport
 no shutdown
R1(conf-if-te-0/3)# protocol lldp
R1(conf-if-te-0/3-lldp)# show config
!
protocol lldp
R1(conf-if-te-0/3-lldp)#

Viewing Information Advertised by Adjacent LLDP Agents
To view brief information about adjacent devices, or to view all the information that neighbors are advertising, use the following commands:
- Display brief information about adjacent devices: show lldp neighbors
- Display all of the information that neighbors are advertising: show lldp neighbors detail

Example of Viewing Brief Information Advertised by Neighbors
26. Dis line protocol is down address is 00 01 e8 32 7a 47 Tengig 0 47 Up Te 13 3 Up Te 13 5 Up Te Tengig 0 5 Dwn Te 13 11 Dis Te 13 12 Dis Uplink State Group 6 Status Enabled Up Upstream Interfaces Downstream Interfaces Uplink State Group 7 Status Enabled Up Upstream Interfaces Downstream Interfaces Uplink State Group 16 Status Disabled Up Upstream Interfaces Tengig 0 41 Dwn Po 8 Dwn Downstream Interfaces Tengig 0 40 Dwn error disabled UFD 127 byte pkts 0 over 1023 byte pkts ARP type ARPA ARP Timeout 04 00 00 Last clearing of show interface counters 00 25 46 Queueing strategy fifo Input Statistics 0 packets 0 bytes 0 64 byte pkts 0 over 64 byte pkts 0 over 0 over 255 byte pkts 0 over 511 byte pkts O Multicasts 0 Broadcasts 0 0 CRC 0 overrun O discarded Output Statistics 0 packets 0 bytes 0 underruns 0 64 byte pkts 0 over 64 byte pkts 0 over 127 byte pkts 0 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts O Multicasts 0 Broadcasts 0 Unicasts 0 throttles 0 discarded 0 collisions Rate info interval 299 seconds Input 00 00 Mbits sec 0 packets sec 0 00 of line rate Output 00 00 Mbits sec 0 packets sec 0 00 of line rate Time since last interface status change 00 01 23 De11 show running config uplink state group uplink state group 1 no enable downstream TenGigabitEthernet 0 0 upstream TenGigabitEthernet 0 1 Dell Upl
27. Link Delay 65535 pause quantum
Dell(conf)#
The find keyword displays the output of the show command beginning from the first occurrence of the specified text. The following example shows this command used in combination with the show linecard all command.

Example of the find Keyword
Dell(conf)# do show stack-unit all stack-ports all pfc details | find 0
stack unit 0 stack-port all
Admin mode is On
Admin is enabled
Local is enabled
Link Delay 65535 pause quantum
0 Pause Tx pkts, 0 Pause Rx pkts
Dell(conf)#

The no-more command displays the output all at once, rather than one screen at a time. This is similar to the terminal length command, except that the no-more option affects the output of the specified command only.
The save command copies the output to a file for future reference.
NOTE: You can filter a single command output multiple times. The save option must be the last option entered. For example:
Dell# command | grep regular-expression | except regular-expression | grep other-regular-expression | find regular-expression | save
(A usage example is sketched below.)

Multiple Users in Configuration Mode
Dell Networking OS notifies all users when there are multiple users logged in to CONFIGURATION mode. A warning message indicates the username, type of connection (console or VTY), and, in the case of a VTY connection, the IP address of the terminal on which the connection was established. For example:
- On the system that telnets into the switch, this message appears:
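For instance, a filtered-and-saved output might look like the following sketch; the file name is illustrative and the flash:// destination is an assumption about where you want to keep the capture:

   ! Show only the VLAN-related lines of the running configuration,
   ! then save the filtered output to a file on flash for later reference.
   Dell# show running-config | grep vlan | save flash://vlan-lines.txt

Because save must be last, any further grep, except, or find filters have to appear before it in the pipeline.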
28. Number of F nicast discovery advertisements received P FLOGI accept frames received on the interface P FLOGI reject frames received on the interface P FDISC accept frames received on the interface P FDISC reject frames received on the interface P FLOGO accept frames received on the interface P FLOGO reject frames received on the interface Pc ear virtual link frames received on the interface Number of FCF discovery timeouts that occurred on the interface 81 Number of VN Port Session Number of VN port session timeouts that occurred on the interface Timeouts Number of Session failures due Number of session failures due to hardware configuration that to Hardware Config occurred on the interface show fip snooping system Command Example Dell show fip snooping system Global Mode Enabled FCOE VLAN List Operational 1 100 FCFs ser Enodes 909 Sessions sz K NOTE NPIV sessions are included in the number of FIP snooped sessions displayed show fip snooping vlan Command Example Dell show fip snooping vlan Default VLAN VLAN FC MAP FCFs Enodes Sessions 1 fe T ren Es 100 OXOEFCOO 1 2 d K NOTE NPIV sessions are included in the number of FIP snooped sessions displayed 82 FIP Snooping FIP Snooping Example The below illustration shows an Aggregator used as a FIP snooping bridge for FCOE traffic between an ENode server blade and an FCF ToR switch The ToR swit
29. R - Remote Port Mirroring VLANs, P - Primary, C - Community, I - Isolated
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   G - GVRP tagged, M - Vlan-stack, H - VSN tagged
   i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged, C - CMC tagged

    NUM    Status    Description    Q Ports
    4      Active                   U Po1(Te 0/16)
                                    T Po128(Te 0/33-39,51-56)
                                    T Te 0/1-15,17-32
Dell#

Port Channel Interfaces
On an Aggregator, port channels are auto-configured as follows:
- All 10GbE uplink interfaces (ports 33 to 56) are auto-configured to belong to the same 10GbE port channel (LAG 128).
- Server-facing interfaces (ports 1 to 32) auto-configure in LAGs (1 to 127) according to the NIC teaming configuration on the connected servers.
Port channel interfaces support link aggregation, as described in IEEE Standard 802.3ad.
NOTE: A port channel may also be referred to as a link aggregation group (LAG).

Port Channel Definitions and Standards
Link aggregation is defined by IEEE 802.3ad as a method of grouping multiple physical interfaces into a single logical interface (a link aggregation group, LAG, or port channel). A LAG is "a group of links that appear to a MAC client as if they were a single link", according to IEEE 802.3ad. In Dell Networking OS, a LAG is referred to as a port channel interface.
A port channel provides redundancy by aggregating physical interfaces into one logical interface. If one physical interface goes down, the port channel remains up using the remaining interfaces.
30. TLVs PFC TLV Statistics Input TLV pkts umber of PFC TLVs received PFC TLV Statistics Output TLV pkts umber of PFC TLVs transmitted PFC TLV Statistics Error pkts umber of PFC error packets received PFC TLV Statistics Pause Tx pkts umber of PFC pause frames transmitted PFC TLV Statistics Pause Rx pkts umber of PFC pause frames received Input Appln Priority TLV pkts umber of Application Priority TLVs received Output Appln Priority TLV pkts umber of Application Priority TLVs transmitted Error Appln Priority TLV pkts umber of Application Priority error packets received Example of the show interface ets summary Command Dell show interfaces te 0 0 ets summary Interface TenGigabitEthernet 0 0 Max Supported TC Groups is 4 Number of Traffic Classes is 8 Admin mode is on Admin Parameters Admin is enabled TC grp Priority Bandwidth TSA 0 0 1 2 3 4 5 6 7 100 ETS dh ETS 2 0 ETS 3 ETS 4 ETS 5 0 ETS 6 0 ETS 7 0 ETS Priority Bandwidth TSA oe oe 3 i ETS ETS ETS ETS ETS ETS oe oo oo NNNN WW CO CO oe Ed Bd DH Ed EH DH EH Ed oe oe TD 1oU0i0hNHO OGO mote Parameters Remote is disabled Data Center Bridging DCB 53 Local Parameters Local is enabled TC grp Priority Bandwidth TSA 0 071 2 3 Anor Orel 100 ETS d 0 ETS 2 ETS 3 0 ETS 4 0 ETS 5 ETS 6 0 ETS 7 0 ETS Priority Bandwidth TSA 0 13 ETS 1 13 ETS 2 ETS 3 13 ETS 4 12 ET
31. Te 0/6 Dwn, Te 0/7 Up, Te 0/8 Dwn, Te 0/9 Dwn, Te 0/10 Dwn, Te 0/11 Dwn, Te 0/12 Up, Te 0/13 Up, Te 0/14 Dwn, Te 0/15 Up, Te 0/16 through Te 0/32 Dwn
Dell#
NOTE: In this example, if port-channel 128 goes down, the downstream interfaces are brought operationally down (set to UFD error-disabled). Similarly, if the upstream port channel goes up, the downstream ports are brought up (cleared from UFD error-disabled).

Virtual Link Trunking (VLT) in PMUX Mode
VLT allows the physical links between two devices (known as VLT nodes or peers) within a VLT domain to be considered a single logical link to connected external devices. For VLT operations, use the following configurations on both the primary and secondary VLT peers. Ensure the VLTi links are connected and administratively up; the VLTi connects the VLT peers for VLT data exchange.
1. Configure the VLTi:
   Dell# configure
   Dell(conf)# int port-channel 127
   Dell(conf-if-po-127)# channel-member fortygige 0/33,37
   Dell(conf-if-po-127)# no shutdown
   Dell(conf-if-po-127)# end
2. Configure the VLT domain:
   Dell# configure
   Dell(conf)# vlt domain 1
   Dell(conf-vlt-domain)# peer-link port-channel 127    ! VLT peer destination
   Dell(conf-vlt-domain)# ...
32. The Aggregator supports the management Ethernet interface as well as the standard interface on any front-end port. You can use either method to connect to the system.

Configuring a Management Interface
On the Aggregator, the dedicated management interface provides management access to the system. You can configure this interface with Dell Networking OS, but the configuration options on this interface are limited. You cannot configure gateway addresses and IP addresses if it appears in the main routing table of Dell Networking OS. In addition, proxy address resolution protocol (ARP) is not supported on this interface.
For additional management access, the IOM supports the default VLAN (VLAN 1) L3 interface in addition to the public fabric D management interface. You can assign the IP address for the VLAN 1 default management interface using the setup wizard or through the CLI. If you do not configure the default VLAN 1 in the startup configuration (using the wizard or CLI), by default the VLAN 1 management interface gets its IP address using DHCP.
To configure a management interface, use the following command in CONFIGURATION mode:

Command Syntax                               Command Mode     Purpose
interface ManagementEthernet interface       CONFIGURATION    Enter the slot and the port. Slot range: 0-0.

To configure an IP address on a management interface, use either of the following commands in MANAGEMENT INTERFACE mode:

Command Syntax    Command Mode    Purpose
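As a sketch, assigning a static address to the dedicated management port might look like the following; the address and prefix length are placeholders, and ip address is one of the commands the table above introduces:

   Dell(conf)# interface managementethernet 0/0
   Dell(conf-if-ma-0/0)# ip address 10.16.130.10/24
   Dell(conf-if-ma-0/0)# no shutdown

If no static address is configured here or on VLAN 1, the management interface falls back to DHCP, as noted above.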
33. ... dot1p1_group_num dot1p2_group_num dot1p3_group_num dot1p4_group_num dot1p5_group_num dot1p6_group_num dot1p7_group_num
- If you remove a dot1p priority-to-priority-group mapping from a DCB map (no priority-pgid command), the PFC and ETS parameters revert to their default values on the interfaces on which the DCB map is applied. By default, PFC is not applied on specific 802.1p priorities and ETS assigns equal bandwidth to each 802.1p priority. As a result, PFC and lossless port queues are disabled on 802.1p priorities, and all priorities are mapped to the same priority queue and equally share the port bandwidth.
- To change the ETS bandwidth allocation configured for a priority group in a DCB map, do not modify the existing DCB map configuration. Instead, first create a new DCB map with the desired PFC and ETS settings, and apply the new map to the interfaces to override the previous DCB map settings. Then, delete the original dot1p priority-to-priority-group mapping. If you delete the dot1p priority-to-priority-group mapping (no priority-pgid command) before you apply the new DCB map, the default PFC and ETS parameters are applied on the interfaces. This change may create a DCB mismatch with peer DCB devices and interrupt network operation.

Applying a DCB Map on a Port
When you apply a DCB map with PFC enabled on an S6000 interface, a memory buffer for PFC-enabled priority traffic is automatically allocated.
34. https://www.force10networks.com/csportal20/KnowledgeBase/Documentation.aspx
You can also obtain a list of selected MIBs and their OIDs at the following URL:
https://www.force10networks.com/csportal20/MIBs/MIB_OIDs.aspx
Some pages of iSupport require a login. To request an iSupport account, go to:
https://www.force10networks.com/CSPortal20/Support/AccountRequest.aspx
If you have forgotten or lost your account information, contact Dell TAC for assistance.
35. in bytes.
- Value: Indicates the data for this part of the message.

Figure 20. Type, Length, Value (TLV) Segment

TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements.
There are five types of TLVs, as shown in the table below. All types are mandatory in the construction of an LLDPDU except Optional TLVs. You can configure the inclusion of individual Optional TLVs.

Type, Length, Value (TLV) Types
Type    TLV              Description
0       End of LLDPDU    Marks the end of an LLDPDU.
1       Chassis ID       The Chassis ID TLV is a mandatory TLV that identifies the chassis containing the IEEE 802 LAN station associated with the transmitting LLDP agent.
2       Port ID          The Port ID TLV is a mandatory TLV that identifies the port component of the MSAP identifier associated with the transmitting LLDP agent.
3       Time to Live     The Time to Live TLV indicates the number of seconds that the recipient LLDP agent considers the information associated with this MSAP identifier to be valid.
-       Optional         Includes sub-types of TLVs that advertise specific configuration information. These sub-types are Management TLVs, IEEE 802.1, IEEE 802.3, and TIA organizationally specific TLVs.
36. which are members of the dedicated FCoE VLAN that carries storage traffic to the specified fabric show qos dcb map Command Examples Dell show qos dcb map dcbmap2 State Complete PfcMode ON PG 0 TSA ETS BW 50 PFC OFF Priorities 0 124567 PG 1 TSA ETS BW 50 PFC ON Priorities 3 282 FC Flex IO Modules Table 21 show qos dcb map Field Descriptions Field State PFC Mode PG TSA BW PFC Priorities Description Complete All mandatory DCB parameters are correctly configured In progress The DCB map configuration is not complete Some mandatory parameters are not configured PFC configuration in the DCB map On enabled or Off Priority group configured in the DCB map Transmission scheduling algorithm used in the DCB map Enhanced Transmission Selection ETS Percentage of bandwidth allocated to the priority group PFC setting for the priority group On enabled or Off 802 1p priorities configured in the priority group show npiv devices brief Command Example Dell show npiv devices brief Total NPIV Devices 2 ENode Intf LoginMethod fid_1003 Te 0 12 20 Te 0 13 10 ENode WWPN FCoE Vlan Fabric Intf Fabric Map Status 01 00 10 18 1 94 20 1003 Fc 0 5 FLOGI OGGED IN 00 200 00 c9 d9 9c cb 1003 Fc 0 0 FDISC OGGED IN fid 1003 Table 22 show npiv devices brief Field Descriptions Field Total NPIV Devices ENode Intf ENode WWPN FCoE V
37. 0 SSTKUNITO M CP DHCLIENT 5 DHCLIENT IENT 5 DHCLIENT LOG IENT 5 DHCLIENT LOG El DHCLIENT DBG EVT Interface El VT Interface DHCLIENT DBG ial lt d 0 0 DHCP in state BOUND UNITO M CP SDHC in Interface itioned PED UNITO M CP SDHC IP TOS in state STOPPED IENT 5 DHCLIENT LOG 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG IENT 5 DHCLIENT LOG OG DHCLIENT DBG DHCLIENT DBG PKT DHCP DHCLIENT DBG EVT Interface DHCLIENT DBG EVT Interface iral lt d OG DHCLIENT DBG 0 0 DHCP El J in state STOPP UNITO M itioned CTING 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG in Interface UNITO M CP ket in oo CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT 5 DHCLIENT LOG DHCLIENT DBG El VT Interface DHCLIENT DBG PKT DHCP DHCLIENT DBG PKT Received 0 0 with Lease Ip 10 16 134 250 Mask 255 255 0 0 Server Id The following example shows the packet and event level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface when you release and renew a DHCP client DHCP Client Debug Messages Logged during DHCP Client Release Renew Dell release ay 27 152553 Int
38. 0 0 0 0 TOn Ta Tad BOUND 08 26 2011 04 33 39 08 27 2011 04 33 39 Renew Time Rebind Time 08 26 2011 16 21 50 08 27 2011 01 33 39 70 Dynamic Host Configuration Protocol DHCP 6 FIP Snooping FIP snooping is auto configured on an Aggregator in standalone mode You can display information on FIP snooping operation and statistics by entering show commands This chapter describes about the FIP snooping concepts and configuration procedures Fibre Channel over Ethernet Fibre Channel over Ethernet FCoE provides a converged Ethernet network that allows the combination of storage area network SAN and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames FCoE works with Ethernet enhancements provided in data center bridging DCB to support lossless no drop SAN and LAN traffic In addition DCB provides flexible bandwidth sharing for different traffic types such as LAN and SAN according to 802 1p priority classes of service For more information refer to the Data Center Bridging DCB chapter Ensuring Robustness in a Converged Ethernet Network Fibre Channel networks used for SAN traffic employ switches that operate as trusted devices End devices log into the switch to which they are attached in order to communicate with the other end devices attached to the Fibre Channel network Because Fibre Channel links are point to point a Fibre Channel switch controls all storage traffic that an end device
39. 3 4 5 6 7 100 1 56 Data Center Bridging DCB 00 30104 CO h2 Stack unit 1 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters Admin is enabled TC grp Priority Bandwidth 0 75004 CO NO rS CO Example of the show interface De De 0 1 2 3 4 11 show interface 5 6 7 DCBx detail Command ll show interface te 0 4 dcbx detail E e PS Configuration P R ETS Recommendation PFC Configuration TLV enabled TLV enabled TLV enabled H F Application priority for FCOE enabled di I Application priority for iSCSI enabled di sabled sabled tengigabitethernet 0 4 dcbx detail a e ETS Configuration TLV disabled r ETS Recommendation TLV disabled p PFC Configuration TLV disabled E f Application Priority for FCOE E pd i Application Priority for iSCSI In Local DCBX Compatibility mode is CEE Local DCBX Configured mode is CEE Peer Operating version is CEE Local DCBX TLVs Transmitted ErPfi Lo terface TenGigabit Remote Mac Address Port Role is Au DCBX Operational S cal DCBX Status Sequence Number 2 Protocol State In Data Center Bridging DCB Acknowledgment Number 2 Sync Ethernet 0 4 00 00 to Upstream tatus is Enabled Is Configuration Source TRUE 00 00 00 11 DCBX Operational
40. Dell(conf-if-te-1/7)# vlan tagged 2
Dell(conf-if-te-1/7)# exit
Dell(conf)# exit
Dell# show vlan id 2
Codes: * - Default VLAN, G - GVRP VLANs, R - Remote Port Mirroring VLANs, P - Primary, C - Community, I - Isolated
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   G - GVRP tagged, M - Vlan-stack, H - VSN tagged
   i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged, C - CMC tagged

    NUM    Status    Description    Q Ports
    2      Active                   U Po1(Te 0/7,18)
                                    T Po128(Te 0/50-51)
                                    T Te 1/7
Dell(conf-if-te-1/7)#

Except for hybrid ports, only a tagged interface can be a member of multiple VLANs. You can assign hybrid ports to two VLANs if the port is untagged in one VLAN and tagged in all others.
NOTE: When you remove a tagged interface from a VLAN (using the no vlan tagged command), it remains tagged only if it is a tagged interface in another VLAN. If you remove the tagged interface from the only VLAN to which it belongs, the interface is placed in the default VLAN as an untagged interface.

Adding an Interface to an Untagged VLAN
To move untagged interfaces from the default VLAN to another VLAN, use the vlan untagged command, as shown in the figure below.
Dell(conf)# interface tengigabit 0/16
Dell(conf-if-te-0/16)# vlan untagged 4
Dell(conf-if-te-0/16)# exit
Dell(conf)# exit
Dell# 00:23:49: %STKUNIT0-M:CP %SYS-5-CONFIG_I: Configured from console
Dell# show vlan id 4
Codes: * - Default VLAN, G - GVRP VLANs, R - ...
41. Aged & Drops: 0
TTL Threshold Drops: 0
INVALID VLAN CNTR Drops: 0
L2MC Drops: 0
PKT Drops of ANY Conditions: 0
Hg MacUnderflow: 0
TX Err PKT Counter: 0

Restoring the Factory Default Settings
Restoring factory defaults deletes the existing NVRAM settings, startup configuration, and all configured settings, such as stacking or fanout.
To restore the factory default settings, use the restore factory-defaults stack-unit {0-5 | all} {clear-all | nvram} command in EXEC Privilege mode.
CAUTION: There is no undo for this command.

Important Points to Remember
- When you restore all the units in a stack, all units in the stack are placed into stand-alone mode.
- When you restore a single unit in a stack, only that unit is placed in stand-alone mode. No other units in the stack are affected.
- When you restore the units in stand-alone mode, the units remain in stand-alone mode after the restoration.
- After the restore is complete, the units power cycle immediately.

The following example shows the use of the restore factory-defaults command to restore the factory default settings.

Restoring the Factory Default Settings
Dell# restore factory-defaults stack-unit 0 nvram
***************************************************************************
* Warning - Restoring factory defaults will delete the existing          *
* persistent settings (stacking, fanout, etc.). After restoration ...
42. DCB map with PFC enabled (pfc on), an error message is displayed.

Step  Task                                                      Command                                   Command Mode
1     Enter INTERFACE configuration mode.                       interface {tengigabitEthernet slot/port   CONFIGURATION
                                                                | fortygigabitEthernet slot/port}
2     Open a DCB map and enter DCB map configuration mode.      dcb-map name                              INTERFACE
3     Disable PFC.                                              no pfc mode on                            DCB MAP
4     Return to interface configuration mode.                   exit                                      DCB MAP
5     Apply the DCB map, created to disable the PFC operation,  dcb-map name                              INTERFACE
      on the interface.
6     Configure the port queues that still function as no-drop  pfc no-drop queues queue-range            INTERFACE
      queues for lossless traffic.
      The maximum number of lossless queues globally supported on a port is 2.
      You cannot configure PFC no-drop queues on an interface on which a DCB map with PFC enabled has been applied, or which is already configured for PFC using the pfc priority command.
      Range: 0-3. Separate queue values with a comma; specify a priority range with a dash; for example, pfc no-drop queues 1,3 or pfc no-drop queues 2-3.
      Default: No lossless queues are configured.
(The steps are consolidated in the sketch below.)

Data Center Bridging Exchange Protocol (DCBx)
The data center bridging exchange (DCBx) protocol is disabled by default on any switch on which PFC or ETS are enabled. DCBx allows a switch to automatically discover DCB-enabled peers and exchange configuration information. PFC and ETS use DCBx to exchange and negotiate their settings with peer devices.
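A sketch consolidating steps 1-6 on a single port; the interface, map name, and queue numbers are illustrative, and the mode changes follow the table above:

   ! CONFIGURATION mode
   interface tengigabitEthernet 0/3
   ! INTERFACE mode: open the DCB map
   dcb-map PFC_OFF_MAP
   ! DCB MAP mode: disable PFC, then return to the interface
   no pfc mode on
   exit
   ! INTERFACE mode: apply the map, then mark queues 1 and 3 as lossless
   dcb-map PFC_OFF_MAP
   pfc no-drop queues 1,3

Two queues are the most that can be made lossless on a port, so the queue-range here is already at the limit stated in step 6.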
43. DCBx performs the following operations:
- Discovers DCB configuration (such as PFC and ETS) in a peer device.
- Detects DCB mis-configuration in a peer device; that is, when DCB features are not compatibly configured on a peer device and the local switch. Mis-configuration detection is feature-specific because some DCB features support asymmetric configuration.
- Reconfigures a peer device with the DCB configuration from its configuration source, if the peer device is willing to accept configuration.
- Accepts the DCB configuration from a peer if a DCBx port is in "willing" mode to accept a peer's DCB settings, and then internally propagates the received DCB configuration to its peer ports.

DCBx Port Roles
The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports: auto-upstream and auto-downstream.
Auto-upstream: The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch.
The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source. The elected configuration source then internally propagates the configuration to other auto-upstream and auto-downstream ports. A port that receives an internally propagated configuration overwrites its local configuration with the new parameter values.
44. EXEC Privilege mode ip ssh rsa authentication enable 5 Bind the public keys to RSA authentication EXEC Privilege mode ip ssh rsa authentication my authorized keys flash public key Example of Generating RSA Keys admin Unix client ssh keygen t rsa Generating public private rsa key pair Enter file in which to save the key home admin ssh id rsa home admin ssh id rsa already exists Overwrite y n y Enter passphrase empty for no passphrase Enter same passphrase again Your identification has been saved in home admin ssh id rsa Your public key has been saved in home admin ssh id rsa pub Configuring Host Based SSH Authentication Authenticate a particular host This method uses SSH version 2 To configure host based authentication use the following commands 1 Configure RSA Authentication Refer to Using RSA Authentication of SSH Create shosts by copying the public RSA key to the file shosts in the directory ssh and write the IP address of the host to the file cp etc ssh ssh host rsa key pub ssh shosts Refer to the first example 3 Create a list of IP addresses and usernames that are permitted to SSH in a file called rhosts Refer to the second example 4 Copy the file shosts and rhosts to the Dell Networking system 5 Disable password authentication and RSA authentication if configured Security for M I O Aggregator 171 CONFIGURATION mode or EXEC Privilege mode no ip ssh passwo
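Putting the RSA authentication steps together, a minimal sketch looks like the following. The client host name and the flash file name public_key are placeholders; any method of copying the generated id_rsa.pub file to the switch flash may be used:
admin@client$ ssh-keygen -t rsa
(copy /home/admin/.ssh/id_rsa.pub to the switch as flash://public_key)
Dell# ip ssh rsa-authentication enable
Dell# ip ssh rsa-authentication my-authorized-keys flash://public_key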
45. Example of Number of Monitoring Ports
Dell# show monitor session
SessionID Source Destination Direction Mode Type
0 TenGig 0/1 TenGig 0/9 rx interface Port-based
10 TenGig 0/2 TenGig 0/10 rx interface Port-based
20 TenGig 0/3 TenGig 0/11 rx interface Port-based
30 TenGig 0/4 TenGig 0/12 rx interface Port-based
Dell(conf)#
A source port may only be monitored by one destination port, but a destination port may monitor more than one source port.
Dell Networking OS Behavior: All monitored frames are tagged if the configured monitoring direction is transmit (TX), regardless of whether the monitored port (MD) is a Layer 2 or Layer 3 port. If the MD port is a Layer 2 port, the frames are tagged with the VLAN ID of the VLAN to which the MD belongs. If the MD port is a Layer 3 port, the frames are tagged with VLAN ID 4095. If the MD port is in a Layer 3 VLAN, the frames are tagged with the respective Layer 3 VLAN ID. For example, in the configuration source tengig 0/1 destination tengig 0/9 direction tx, if the source port 0/1 is an untagged member of any VLAN, all monitored frames that the destination port 0/9 receives are tagged with the VLAN ID of the source port. Port Monitoring 155
15 Security for M I/O Aggregator
Security features are supported on the M I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter
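A port-based session such as those listed above could be created as in the following sketch; the session ID and port numbers are arbitrary and the prompts are indicative:
Dell(conf)# monitor session 0
Dell(conf-mon-sess-0)# source tengigabitethernet 0/1 destination tengigabitethernet 0/9 direction rx
Dell(conf-mon-sess-0)# exit
Dell# show monitor session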
46. For a 10GbE interface, enter the keyword TenGigabitEthernet followed by the slot/port numbers; for example, interface tengigabitethernet 0/7. The information displays in a continuous run and refreshes every two seconds by default. Refer to the monitor interface command example below. Use the following keys to manage the output: m (Change mode), l (Page up), a (Page down), T (Increase refresh interval by 1 second), t (Decrease refresh interval by 1 second), c (Clear screen), q (Quit).
monitor interface command example
Dell# monitor interface tengig 0/1
(The display opens with the Dell Networking OS uptime, the refresh interval, and the monitor time, followed by traffic statistics for interface Te 0/1: input/output bytes, input/output packets, packet-size counters such as 64B and over-64B/127B/255B/511B/1023B packets, and error statistics such as input underruns, input giants, input throttles, input CRC errors, input IP checksum errors, input overruns, output underruns, and output throttles, each with a current total and a rate in Bps or pps. In this example the interface is disabled, the link is down, and the line speed is 1000 Mbit.) 107
47. INPUT POLICY mode exit Enter interface configuration mode CONFIGURATION mode interface type slot port Apply the input policy with the PFC configuration to an ingress interface INTERFACE mode dcb policy input policy name Repeat Steps 1 to 8 on all PFC enabled peer interfaces to ensure lossless traffic service Dell Networking OS Behavior As soon as you apply a DCB policy with PFC enabled on an interface DCBx starts exchanging information with PFC enabled peers The IEEE802 10bb CEE and CIN versions of PFC Type Length Value TLV are supported DCBx also validates PFC configurations that are received in TLVs from peer devices 30 Data Center Bridging DCB By applying a DCB input policy with PFC enabled you enable PFC operation on ingress port traffic To achieve complete lossless handling of traffic also enable PFC on all DCB egress ports or configure the dotlp priority queue assignment of PFC priorities to lossless queues To remove a DCB input policy including the PFC configuration it contains use the no dcb input policy name command in INTERFACE Configuration mode To disable PFC operation on an interface use the no pfc mode on command in DCB Input Policy Configuration mode PFC is enabled and disabled as the global DCB operation is enabled dcb enable or disabled no dcb enable You can enable any number of 802 1p priorities for PFC Queues to which PFC priority traffic is mapped are lossless by default Traffic m
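A minimal sketch of a DCB input policy with PFC enabled and applied to an ingress interface is shown below; the policy name and interface number are placeholders and the prompts are indicative:
Dell(conf)# dcb-input pfc_lossless
Dell(conf-in-policy)# pfc mode on
Dell(conf-in-policy)# exit
Dell(conf)# interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)# dcb-policy input pfc_lossless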
48. Interfaces 97 VLAN Membership A virtual LAN VLANs is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group In Layer 2 mode VLANs move traffic at wire speed and can span multiple devices Dell Networking OS supports up to 4093 port based VLANs and one default VLAN as specified in IEEE 802 1Q VLAN provide the following benefits Improved security because you can isolate groups of users into different VLANs e Ability to create one VLAN across multiple devices On an Aggregator in standalone mode all ports are configured by default as members of all 4094 VLANs including the default VLAN All VLANs operate in Layer 2 mode You can reconfigure the VLAN membership for individual ports by using the vlan taggedorvlan untagged commands in INTERFACE configuration mode Configuring VLAN Membership Physical Interfaces and port channels can be members of VLANs NOTE You can assign a static IP address to default VLAN 1 using the ip address command To assign a different VLAN ID to the default VLAN use the default vlan id vlan id command Following table lists out the VLAN defaults in Dell Networking OS Feature Default Mode Layer 2 no IP address is assigned Default VLAN ID VLAN 1 Default VLAN When an Aggregator boots up all interfaces are up in Layer 2 mode and placed in the default VLAN as untagged interfaces Only untagged i
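For example, the VLAN membership of a single server-facing port could be reconfigured as follows; the port number and VLAN IDs are arbitrary:
Dell(conf)# interface tengigabitethernet 0/5
Dell(conf-if-te-0/5)# vlan tagged 100,200
Dell(conf-if-te-0/5)# vlan untagged 300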
49. List for RADIUS.....160
TACACS+.....163
Configuration Task List for TACACS+.....163
TACACS+ Remote Authentication.....167
Enabling SCP and SSH.....168
Using SCP with SSH to Copy a Software Image.....169
Secure Shell Authentication.....170
Troubleshooting SSH.....175
Telnet.....175
VTY Line and Access-Class Configuration.....175
VTY Line Local Authentication and Authorization.....174
VTY Line Remote Authentication and Authorization.....174
VTY MAC-SA Filter Support.....175
16 Simple Network Management Protocol (SNMP).....176
Implementation Information.....176
Configuring the Simple Network Management Protocol.....176
Important Points to Remember.....176
Setting Up SNMP.....177
Creating a Community.....177
Reading Managed Object Values.....177
Displaying the Ports in a VLAN using SNMP.....178
Fetching Dynamic MAC Entries using SNMP.....180
Deriving Interface Indices.....181
Monitor Port-Channels.....182
Entity MIBs
50. Number of FLOGI, Number of FDISC, Number of FLOGO, Number of ENode Keep Alives, Number of VN Port Keep Alives, Number of Multicast Discovery Advertisements, Number of Unicast Discovery Advertisements, Number of FLOGI Accepts, Number of FLOGI Rejects, Number of FDISC Accepts, Number of FDISC Rejects, Number of FLOGO Accepts, Number of FLOGO Rejects, Number of CVLs, Number of FCF Discovery Timeouts
FIP Snooping field descriptions:
Number of FIP-snooped VLAN request frames received on the interface;
Number of FIP-snooped VLAN notification frames received on the interface;
Number of FIP-snooped multicast discovery solicit frames received on the interface;
Number of FIP-snooped unicast discovery solicit frames received on the interface;
Number of FIP-snooped FLOGI request frames received on the interface;
Number of FIP-snooped FDISC request frames received on the interface;
Number of FIP-snooped FLOGO frames received on the interface;
Number of FIP-snooped ENode keep-alive frames received on the interface;
Number of FIP-snooped VN port keep-alive frames received on the interface;
Number of FIP-snooped multicast discovery advertisements received on the interface;
Number of FIP-snooped unicast discovery advertisements received on the interface;
(descriptions of the remaining counters continue)
51. OSTATE DN Changed interface state to down Te 0 2 00 10 12 SSTKUNITO M CP SIFMGR 5 OSTATE DN Changed interface state to down Te 0 3 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Changed uplink state group state to down Group 3 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Downstream interface set to UFD error disabled Te 0 4 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Downstream interface set to UFD error disabled Te 0 5 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Downstream interface set to UFD error disabled Te 0 6 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Changed interface state to down Te 0 4 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Changed interface state to down Te 0 5 00 10 13 SSTKUNITO M CP SIFMGR 5 OSTATE DN Changed interface state to down Te 0 6 Dell conf if range te 0 1 3 do clear ufd disable uplink state group 3 00 11 50 SSTKUNITO M CP SIFMGR 5 OSTATE UP Downstream interface cleared from UFD error disabled Te 0 4 00 11 51 SSTKUNITO M CP SIFMGR 5 OSTATE UP Downstream interface cleared from UFD error disabled Te 0 5 00 11 51 SSTKUNITO M CP SIFMGR 5 OSTATE UP Downstream interface cleared from UFD error disabled Te 0 6 00 11 51 SSTKUNITO M CP SIFMGR 5 OSTATE UP Changed interface state to up Te 0 4 00 11 51 S
52. SFP 49 TX Power High Warning threshold SFP 49 RX Power High Warning threshold SFP 49 Temp Low Warning threshold SFP 49 Voltage Low Warning threshold SFP 49 Bias Low Warning threshold SFP 49 TX Power Low Warning threshold SFP 49 RX Power Low Warning threshold SFP 49 Temperature SFP 49 Voltage SFP 49 Tx Bias Current SFP 49 Tx Power SFP 49 Rx Power SFP 49 Data Ready state Bar SFP 49 Rx LOS state SFP 49 Tx Fault state Average 100 000C 5 000V 100 000mA 5 000mw 5 000mw 50 000C 0 000V 0 000mA 0 000mw 0 000mw 100 000C 5 000V 100 000mA 5 000mwW 5 000mw 50 000C 0 000V 0 000mA 0 000mw 0 000mw 40 844C 3 169V 0 000mA 0 000mW 0 227mW False False False Recognize an Over Temperature Condition An overtemperature condition occurs for one of two reasons the card genuinely is too hot or a sensor has malfunctioned Inspect cards adjacent to the one reporting the condition to discover the cause If directly adjacent cards are not normal temperature suspect a genuine overheating condition If directly adjacent cards are normal temperature suspect a faulty sensor Debugging and Diagnostics 295 When the system detects a genuine over temperature condition it powers off the card To recognize this condition look for the following system messages CHMGR 2 MAJOR TEMP Major alarm chassis temperature high temperature reaches r exceeds threshold of value C HMGR 2 TEMP SHUTDOWN WARN WARNING tempe
53. The buffer size is allocated according to the number of PFC enabled priorities in the assigned map To apply a DCB map to an Ethernet port follow these steps Step Task Command Command Mode 1 Enter interface configuration mode on an interface CONFIGURATION Ethernet port tengigabitEthernet slot port fortygigabitEthernet slot port 2 Apply the DCB map on the Ethernet port to dcb map name INTERFACE configure it with the PFC and ETS settings in the map for example Dell interface tengigabitEthernet 0 0 Dell config if te 0 0 dcb map SAN_A_dcb_mapi Repeat Steps 1 and 2 to apply a DCB map to more than one port You cannot apply a DCB map on an interface that has been already configured for PFC using thepfc priority command or which is already configured for lossless queues pfc no drop queues command Configuring PFC without a DCB Map In a network topology that uses the default ETS bandwidth allocation assigns equal bandwidth to each priority you can also enable PFC for specific dot1p priorities on individual interfaces without using a DCB map This type of DCB configuration is useful on interfaces that require PFC for lossless traffic but do not transmit converged Ethernet traffic Step Task Command Command Mode 1 Enter interface configuration mode on an interface CONFIGURATION Ethernet port tengigabitEthernet slot port Data Center Bridging DCB 35 Step Task 2 Enable PFC on specified priorities R
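For example, PFC could be enabled on two priorities of a single port without a DCB map as follows; the port and priority values are placeholders, and no more than two lossless queues per port are supported, as noted earlier:
Dell(conf)# interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)# pfc priority 3,4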
54. This situation occurs when the new dot1p queue assignment exceeds the maximum number 2 of lossless queues supported globally on the switch In this case all PFC configurations received from PFC enabled peers are removed and resynchronized with the peer devices Traffic may be interrupted when you reconfigure PFC no drop priorities in an input policy or reapply the policy to an interface Enhanced Transmission Selection Enhanced transmission selection ETS supports optimized bandwidth allocation between traffic types in multiprotocol Ethernet FCoE SCSI links ETS allows you to divide traffic according to its 802 1p priority into different priority groups traffic classes and configure bandwidth allocation and queue scheduling for each group to ensure that each traffic type is correctly prioritized and receives its required bandwidth For example you can prioritize low latency storage or server cluster traffic in a traffic class to receive more bandwidth and restrict best effort LAN traffic assigned to a different traffic class Data Center Bridging DCB 31 Although you can configure strict priority queue scheduling for a priority group ETS introduces flexibility that allows the bandwidth allocated to each priority group to be dynamically managed according to the amount of LAN storage and server traffic in a flow Unused bandwidth is dynamically allocated to prioritized priority groups Traffic is queued according to its 802 1p prior
55. Timeticks 32856932 3 days 19 16 09 32 Example of Reading the Value of the Next Managed Object gt snmpgetnext v 2c c mycommunity 10 11 131 161 1 3 6 1 2 1 1 3 0 SNMPv2 MIB sysContact 0 STRING gt snmpgetnext v 2c c mycommunity 10 11 131 161 sysContact 0 SNMPv2 MIB sysName 0 STRING Example of Reading the Value of Many Managed Objects at Once gt snmpwalk v 2c c mycommunity 10 16 130 148 1 3 6 1 2 1 1 SNMPv2 MIB sysDescr 0 STRING Dell Networking OS Operating System Version 1 0 Application Software Version E8 3 17 46 Series I O Aggregator Copyright c 1999 2012 by Dell Inc All Rights Reserved Build Time Sat Jul 28 03 20 24 PDT 2012 SNMPv2 MIB sysObjectID 0 OID SNMPv2 SMI enterprises 6027 1 4 2 DISMAN EVENT MIB sysUpTimeInstance Timeticks 77916 0 12 59 16 SNMPv2 MIB sysContact 0 STRING SNMPv2 MIB sysName 0 STRING FTOS SNMPv2 MIB sysLocation 0 STRING SNMPv2 MIB sysServices 0 INTEGER 4 Displaying the Ports in a VLAN using SNMP Dell Networking OS identifies VLAN interfaces using an interface index number that is displayed in the output of the show interface vlan command Example of Identifying the VLAN Interface Index Number Dell conf do show interface vlan id 10 Error No such interface name R5 conf tdo show interface vlan 10 Vlan 10 is down line protocol is down Address is 00 01 e8 cc cc ce Cur
56. Version is 0 DCBX Max Version Supported is 0 57 Peer DCBX Status DCBX Operational Version is 0 DCBX Max Version Supported is 255 Sequence Number 2 Acknowledgment Number 2 2 Input PFC TLV pkts pkts O Pause Rx pkts 2 Input PG TLV Pkts 2 Input Appln Priority TLV pkts Appln Priority TLV Pkts Total DCBX Frames transmitted 27 Total DCBX Frames received 6 Total DCBX Frame errors 0 Total DCBX Frames unrecognized 0 3 Output PFC TLV pkts 0 Error PFC pkts 3 Output PG TLV Pkts 0 Error PG TLV Pkts 0 Output Appln Priority TLV pkts 0 Error O PFC Pause Tx The following table describes the show interface DCBx detail command fields Table 6 show interface DCBx detail Command Description Field Interface Port Role DCBx Operational Status Configuration Source Local DCBx Compatibility mode Local DCBx Configured mode Peer Operating version Local DCBx TLVs Transmitted Local DCBx Status DCBx Operational Version Local DCBx Status DCBx Max Version Supported Local DCBx Status Sequence Number 58 Description Interface type with chassis slot and port number Configured DCBx port role auto upstream or auto downstream Operational status enabled or disabled used to elect a configuration source and internally propagate a DCB configuration The DCBx operational status is the combination of PFC and ETS operational status Specifies whether the port serves as the DCBx configuration
57. a VLT number for port channels on VLT peers that connect to the device. The discovery protocol requires that an attached device always runs LACP over the port channel interface. VLT provides a loop-free topology for port channels with endpoints on different chassis in the VLT domain. VLT uses shortest-path routing so that traffic destined to hosts via directly attached links on a chassis does not traverse the chassis interconnect link. VLT allows multiple active parallel paths from access switches to VLT chassis. VLT supports port channel links with LACP between access switches and VLT peer switches. Dell Networking recommends using static port channels on the VLTi. If VLTi connectivity with a peer is lost but the VLT backup connectivity indicates that the peer is still alive, the VLT ports on the Secondary peer are orphaned and are shut down. Software features supported on VLT port channels: For information about configuring IGMP Snooping in a VLT domain, refer to VLT and IGMP Snooping. All system management protocols are supported on VLT ports, including SNMP, RMON, AAA, ACL, DNS, FTP, SSH, Syslog, NTP, RADIUS, SCP, TACACS+, Telnet, and LLDP. Enable Layer 2 VLAN connectivity across VLT peers by configuring a VLAN network interface for the same VLAN on both switches. Dell Networking does not recommend enabling peer routing if the CAM is full. To enable peer routing, a minimum of two local DA spaces for wild c
58. advertisements are transmitted Number of ENodes connected to the FCF Fibre Channel session ID assigned by the FCF show fip snooping statistics VLAN and port Command Example umber umber umber Number Number Number Number umber FIP Snoopi of of of of of of of of ng lan Requests LOGI DISC LOGO d nnn dig lt lt lan Notifications ulticast Discovery Solicits nicast Discovery Solicits Dell show fip snooping statistics interface vlan 100 Oornonoo Enode Keep Alive 9021 79 Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number Number 80 of of of of of of of of of of of of of Dell conf of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of of VN Port Keep Alive Multicast Discovery Advertisement Unicast Discovery Advertisement E FLOGI Accepts FLOGI Rejects FDISC Accepts FDISC Rejects FLOGO Accepts FLOGO Rejects VL FCF Discovery Timeouts VN Port Sessio
59. an uplink state group that was down comes up the set of UFD disabled downstream ports which were previously disabled due to this upstream port going down is brought up and the UFD Disabled error is cleared e f you disable an uplink state group the downstream interfaces are not disabled regardless of the state of the upstream interfaces If an uplink state group has no upstream interfaces assigned you cannot disable downstream interfaces when an upstream link goes down e To enable the debug messages for events related to a specified uplink state group or all groups use the debug uplink state group group id command where the group id is from 1 to 16 To turn off debugging event messages use the no debug uplink state group group id command 216 Uplink Failure Detection UFD Foran example of debug log message refer to Clearing a UFD Disabled Interface Configuring Uplink Failure Detection PMUX mode To configure UFD use the following commands 1 Create an uplink state group and enable the tracking of upstream links on the switch router CONFIGURATION mode uplink state group group id e group id values are from 1 to 16 To delete an uplink state group use the no uplink state group group id command Assign a port or port channel to the uplink state group as an upstream or downstream interface UPLINK STATE GROUP mode upstream downstream interface For interface enter one of the following interfac
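A sketch of the UFD configuration described above, using arbitrary group and port numbers:
Dell(conf)# uplink-state-group 3
Dell(conf-uplink-state-group-3)# upstream tengigabitethernet 0/1
Dell(conf-uplink-state-group-3)# upstream tengigabitethernet 0/2
Dell(conf-uplink-state-group-3)# downstream tengigabitethernet 0/4
Dell(conf-uplink-state-group-3)# downstream tengigabitethernet 0/5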
60. and Duplex Mode of Ethernet Interfaces By default auto negotiation of speed and duplex mode is enabled on 1GbE and 10GbE Ethernet interfaces on an Aggregator The local interface and the directly connected remote interface must have the same setting Auto negotiation is the easiest way to accomplish these settings as long as the remote interface is capable of auto negotiation K NOTE As a best practice Dell Networking recommends keeping auto negotiation enabled Auto negotiation should only be disabled on switch ports that attach to devices not capable of supporting negotiation or where connectivity issues arise from interoperability issues For 100 1000 10000 Ethernet interfaces the negotiation auto command is tied to the speed command Auto negotiation is always enabled when the speed command is set to 1000 or auto In Dell Networking OS the speed 1000 command is an exact equivalent of speed auto 1000 in IOS To discover whether the remote and local interface require manual speed synchronization and to manually synchronize them if necessary follow these steps Step Task Command Syntax Command Mode 1 Determine the local interface status show interfaces interface status EXEC Privilege 2 Determine the remote interface Use the command on the remote EXEC status system that is equivalent to the above command EXEC Privilege 3 Access CONFIGURATION mode config EXEC Privilege 110 Interfaces 4 Access the port interface int
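For example, if the remote device is fixed at 1000 Mbps and cannot negotiate, the local port might be pinned as in the following sketch. The port number is arbitrary; as noted above, speed 1000 keeps auto-negotiation enabled, so disable negotiation only if the link does not come up:
Dell# show interfaces tengigabitethernet 0/7 status
Dell# configure
Dell(conf)# interface tengigabitethernet 0/7
Dell(conf-if-te-0/7)# speed 1000
Dell(conf-if-te-0/7)# no negotiation auto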
61. and I O Aggregator provide thirty two 1GbE or 10 GbE server facing ports and the option to add two FC Flex IO modules that offer up to 8 8Gb Fibre Channel ports for uplink traffic in addition to the fixed two 40GbE ports on the MXL 10 40GbE Switch and I O Aggregator NOTE When an FC Flex IO module is inserted into an I O Aggregator and the FC ports are in the operationally up state you can configure the port speed of these FC ports as 2 Gbps 4 Gbps or 8 Gbps In the chassis management controller CMC GUI the FC port link speed is always shown as 10 Gbps regardless of whether the port speed configured is 2 Gbps 4 Gbps or 8 Gbps You can configure one of the following upstream fabric facing FC ports e Two 40GbE and eight 8GB FC ports e Four 40GbE and four 8GB FC ports Two 40GbE four 10GbE and four 8GB FC ports e Two 4OGDbE four 10GBASE T and four 8GB FC ports 260 FC Flex IO Modules FC Flex IO Module Capabilities and Operations The FC Flex IO module has the following characteristics e You can install one or two FC Flex IO modules on the MXL 10 40GbE Switch or I O Aggregator Each module supports four FC ports e Each port can operate in 2Gbps 4Gbps or 8Gbps of Fibre Channel speed e All ports on an FC Flex IO module can function in the NPIV mode that enables connectivity to FC switches or directors and also to multiple SAN topologies e t automatically senses the current speed when the port link is up Valid link
62. are connected to SAN core switches As the SAN grows it is necessary to add more ports and SAN FC Flex IO Modules 269 switches This results in an increase in the required domain IDs which may surpass the upper limit of 239 domain IDs supported in the SAN network An NPG avoids the need for additional domain IDs because it is deployed outside the SAN and uses the domain IDs of core switches in its FCoE links e With the introduction of 10GbE links FCoE is being implemented for server connections to optimize performance However a SAN traditionally uses Fibre Channel to transmit storage traffic FCoE servers require an efficient and scalable bridging feature to access FC storage arrays which an NPG provides NPIV Proxy Gateway Operation Consider a sample scenario of NPG operation An M1000e chassis configured as an NPG does not join a SAN fabric but functions as an FCoE FC bridge that forwards storage traffic between servers and core SAN switches The core switches forward SAN traffic to and from FC storage arrays An M1000e chassis FC port is configured as an N node port that logs in to an F fabric port on the upstream FC core switch and creates a channel for N port identifier virtualization NPIV allows multiple N port fabric logins at the same time on a single physical Fibre Channel link Converged Network Adapter CNA ports on servers connect to the M1000e chassis Ten Gigabit Ethernet ports and log in to an upstream FC core switc
63. are numbered refer to Port Numbering Link aggregation All uplink ports are configured in a single LAG LAG 128 VLANs All ports are configured as members of all 4094 VLANs All VLANs are up and can send or receive layer 2 traffic For more information refer to VLAN Membership Data center bridging capability exchange protocol DCBx Server facing ports auto configure in auto downstream port roles uplink ports auto configure in auto upstream port roles Fibre Channel over Ethernet FCoE connectivity and FCoE initiation protocol FIP snooping The uplink port channel LAG 128 is enabled to operate in Fibre channel forwarder FCF port mode Link layer discovery protocol LLDP Enabled on all ports to advertise management TLV and system name with neighboring devices Before You Start e Internet small computer system interface iSCSl optimization Internet group management protocol IGMP snooping e Jumbo frames Ports are set to a maximum MTU of 12 000 bytes by default Link tracking Uplink state group 1 is automatically configured In uplink state group 1 server facing ports auto configure as downstream interfaces the uplink port channel LAG 128 auto configures as an upstream interface Server facing links are auto configured to be brought up only if the uplink port channel is up e In stacking mode base module ports are automatically configured as stack ports n VLT mode port 9 is automatically configured as VL
64. becomes the master of the merged stack. To ensure a fully synchronized bootup, it is possible to reset individual units to force them to give up the management role, or to reload the whole stack from the command line interface (CLI).
Example of Viewing Stack Members
Dell# show system brief
Stack MAC : 00 1e c9 1 00 9b
-- Stack Info --
Unit UnitType Status ReqTyp CurTyp Version Ports
0 Management online I/O-Aggregator I/O-Aggregator 8 3 17 46 56
1 Standby online I/O-Aggregator I/O-Aggregator 8 3 17 46 56
2 Member not present
3 Member not present
4 Member not present
5 Member not present
Dell#
Failover Roles
If the stack master fails (for example, powered off), it is removed from the stack topology. The standby unit detects the loss of peering communication and takes ownership of the stack management, switching from standby to master. The lack of a standby unit triggers an election within the remaining units for a standby role. After the former master switch recovers, despite having a higher priority or MAC address, it does not recover its master role but instead takes the next available role.
MAC Addressing
All port interfaces in the stack use the MAC address of the management interface on the master switch. The MAC address of the chassis in which the master Aggregator is installed is used as the stack MAC address. The stack continues to use the master's chassis MAC address even after a failover. The MAC
65. capabilities system description no disable R1 conf 11dp Debugging LLDP You can view the TLVs that your system is sending and receiving To view the TLVs use the following commands View a readable version of the TLVs debug lldp brief e View a readable version of the TLVs plus a hexadecimal version of the entire LLDPDU debug lldp detail 244 PMUX Mode of the lO Aggregator 1widi9h 1widi9h 1widi9h 1widi9h 1widi9h 1widi9h 1wid19h Sending LLDP pkt out of G lengt 1wid19h Packet dump 1w1d19h 0180 c2 00 00 0e 00 01 e8 Od b7 3b 8100 00 00 1w1d19h 1w1d19h 1widi9h 1w1d19h LLDP frame 9h Started Transr Figure 30 The debug lldp detail Command LLDPDU Packet Dissection Virtual Link Trunking VLT VLT allows physical links between two chassis to appear as a single virtual link to the network core VLT eliminates the requirement for Spanning Tree protocols by allowing link aggregation group LAG terminations on two separate distribution or core switches and by supporting a loop free topology VLT provides Layer 2 multipathing creating redundancy through increased bandwidth and enabling multiple parallel paths between nodes and load balancing traffic where alternative paths exist PMUX Mode of the lO Aggregator 245 K NOTE When you launch the VLT link the VLT peer ship is not established if any of the following is TRUE e The VLT System MAC configured on both the VLT peers do not match
66. control feature is enabled by default on all ports and disabled on a port when an iSCSI storage device is detected Broadcast storm control is re enabled as soon as the connection with an iSCSI device ends Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm You can view the status of a broadcast storm control operation by using the show io aggregator broadcast storm control status command You can disable broadcast storm control by using the no io aggregator broadcast storm control command Dell Networking OS Behavior If broadcast traffic exceeds 1000 Mbps the Aggregator limits it to 1000 Mbps per port pipe Disabling Broadcast Storm Control To disable broadcast storm control on an Aggregator use the no io aggregator broadcast storm control command from CONFIGURATION mode To re enable broadcast storm control enter the io aggregator broadcast storm control command Displaying Broadcast Storm Control Status To display the status of a current storm control operation use the show io aggregator broadcast storm control status command from EXEC Privilege mode Configuring Storm Control The following configurations are available only in PMUX mode 1 To configure the percentage of broadcast traffic allowed on an interface use the storm control broadcast packets per second in command from INTERFACE mode 2 To configure the percentage of multicast traffic allowed on an interface use the stor
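The commands above can be combined as in the following sketch; the per-interface threshold commands are available only in PMUX mode, and the numeric value and port number are placeholders:
Dell# show io-aggregator broadcast storm-control status
Dell# configure
Dell(conf)# no io-aggregator broadcast storm-control
Dell(conf)# io-aggregator broadcast storm-control
Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# storm-control broadcast 1000 in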
67. disabled when you disable DCBx and PFC f no PFC dcb map has been applied on the interface the default PFC settings are used e PFC supports buffering to receive data that continues to arrive on an interface while the remote system reacts to the PFC operation e PFC uses the DCB MIB IEEE802 1azd2 5 and the PFC MIB IEEE802 1bb d2 2 If DCBx negotiation is not successful for example due to a version or TLV mismatch DCBx is disabled and you cannot enable PFC or ETS Configuring Priority Based Flow Control PFC provides a flow control mechanism based on the 802 1p priorities in converged Ethernet traffic received on an interface and is enabled by default when you enable DCB As an enhancement to the existing Ethernet pause mechanism PFC stops traffic transmission for specified priorities Class of Service CoS values without impacting other priority classes Different traffic types are assigned to different priority classes When traffic congestion occurs PFC sends a pause frame to a peer device with the CoS priority values of the traffic that is to be stopped Data Center Bridging Exchange protocol DCBx provides the link level exchange of PFC parameters between peer devices PFC allows network administrators to create zero loss links for Storage Area Network SAN traffic that requires no drop service while retaining packet drop congestion management for Local Area Network LAN traffic To ensure complete no drop service apply
68. each connection between an FCoE end device and an FCF via a transit switch FIP Snooping 71 FIP provides a functionality for discovering and logging in to an FCF After discovering and logging in FIP allows FCoE traffic to be sent and received between FCoE end devices ENodes and the FCF FIP uses its own EtherType and frame format The below illustration about FIP discovery depicts the communication that occurs between an ENode server and an FCoE switch FCF FIP performs the following functions e FIP virtual local area network VLAN discovery FCoE devices Enodes discover the FCoE VLANs on which to transmit and receive FIP and FCOE traffic e FIP discovery FCoE end devices and FCFs are automatically discovered Initialization FCoE devices perform fabric login FLOGI and fabric discovery FDISC to create a virtual link with an FCoE switch Maintenance A valid virtual link between an FCoE device and an FCoE switch is maintained and the link termination logout LOGO functions properly FCoE Initialization Protocol FIP VLAN Discovery All FCF MAC ENode VLAN Notification FCF FCoE Switch Discovery Solicitation All FCF MAC Discovery Advertisement x FIP FLOGI FLOGI Accept PLOGI SCR Connected FIP LOGOUT End Session Accept Clear Virtual Link End Session Accept Figure 7 FIP discovery and login between an ENode and an FCF FIP Snooping on Ethernet Bridges In a converged Ethernet
69. from CONFIGURATION mode the buffer profile name still appears in the output of the show buffer profile detail summary command After a stack unit reset the buffer profile correctly returns to the default values but the profile name remains Remove it from the show buffer profile detail summary command output by entering no buffer fp uplink csf stack unit port set buffer policy from CONFIGURATION mode and no buffer policy from INTERFACE mode To display the allocations for any buffer profile use the show commands To display the default buffer profile use the show buffer profile summary detail command from EXEC Privilege mode Example of Viewing the Default Buffer Profile Dell show buffer profile detail interface tengigabitethernet 0 1 Interface tengig 0 1 Buffer profile Dynamic buffer 194 88 Kilobytes 300 Debugging and Diagnostics Queuef Dedicated Buffer Buffer Packets Kilobytes 0 2 50 256 1 2000 256 2 2 50 256 3 2 550 256 4 9 38 256 5 9 38 256 6 9 38 256 7 9 38 256 Example of Viewing the Buffer Profile Allocations Dell show running config interface tengigabitethernet 2 0 interface TenGigabitEthernet 2 0 no ip address mtu 9252 switchport no shutdown buffer policy myfsbufferprofile Example of Viewing the Buffer Profile Interface De11 show buffer profile detail int gi 0 10 Interface Gi 0 10 Buffer profile fsqueue fp Dynamic buffer 1256 00 Kilobytes Queue Dedicated B
70. in the 8 character string is for one port starting with Port 1 at the left end of the string and ending with Port 8 at the right end A O indicates that the port is not a member of the VLAN a 1 indicates VLAN membership All hex pairs are 00 indicating that no ports are assigned to VLAN 10 In the following example Port 0 2 is added to VLAN 10 as untagged the first hex pair changes from 00 to 04 Example of Viewing VLAN Ports Using SNMP Port Assigned R5 conf do show vlan id 10 Codes Default VLAN G GVRP VLANs Q U Untagged T Tagged x Dotlx untagged X Dotlx tagged G GVRP tagged M Vlan stack NUM Status Description Q Ports 10 Inactive U TenGi 0 2 Unix system output gt snmpget v2c c mycommunity 10 11 131 185 I 326 1 2 01 17 7 1 4 3 1 2 11077871786 SNMPv2 SMI mib 2 17 7 1 4 3 1 2 1107787786 Hex STRING 40 00 00 00 00 00 00 00 00 00 00 The value 40 is in the first set of 7 hex pairs indicating that these ports are in Stack Unit O The hex value 40 is 0100 0000 in binary As described the left most position in the string represents Port 1 The next position from the left represents Port 2 and has a value of 1 indicating that Port 0 2 is in VLAN 10 The remaining positions are O so those ports are not in the VLAN Simple Network Management Protocol SNMP 179 Fetching Dynamic MAC Entries using SNMP The Aggregator supports the RFC 1493 dotld table for the default VLAN and the dot1q table for al
71. interface fc command in the CLI interface an error message is displayed You must configure FC interfaces by using the interface fi command in CONFIGURATION mode Displays the Fibre Channel and FCoE configuration parameters in FCoE maps Enter the brief keyword to display an overview of currently configured FCoE maps Enter the name of an FCoE map to display the FC and FCoE parameters configured in the map to be applied on MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module Ethernet FCoE and FC ports Displays configuration parameters in a specified DCB map Displays information on FCoE and FC devices currently logged in to the NPG Displays the FC mode of operation and worldwide node WWN name of an MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module show interfaces status Command Example Dell show interfaces status Port Description Status Speed Duplex Vlan Fc 0 0 Up 8000 Mbit Auto Fc 0 1 Down Auto Auto Fc 0 2 Down Auto Auto Fc 0 3 Down Auto Auto Fc 0 4 Down Auto Auto Fc 0 5 Down Auto Auto Fc 0 6 Down Auto Auto m Fc 0 7 Down Auto Auto Fc 0 8 Down Auto Auto m Fc 0 9 Down Auto Auto 280 FC Flex IO Modules Fc 0 10 Down Auto Auto Fc 0 11 Down Auto Auto Te 1 12 Down Auto Auto Te 1 13 Down Auto Auto Te 1 14 Down Auto Auto Te 1 15 Down Auto Auto Te 1 16 Down Auto Auto Te 1 17 Down Auto Auto Te 1 18 Down Auto Auto Te 1 19 Up 10000 M
72. interfaces at LOGbE in standalone mode FlexlO module interfaces support only uplink connections You can only use the 40GbE ports on the base module for stacking By default the two fixed 40GbE ports on the base module operate in 4x10GbE mode with breakout cables and support up to eight 10GbE uplinks You can configure the base module ports as 40GbE links for stacking The interfaces on a 40GbE QSFP FlexlO module auto configure to support only 10GbE SFP connections using 4x10GbE breakout cables All LOGbE uplink interfaces belong to the same 10GbE link aggregation group LAG Interfaces The tagged Virtual Local Area Network VLAN membership of the uplink LAG is automatically configured based on the VLAN configuration of all server facing ports ports 1 to 32 The untagged VLAN used for the uplink LAG is always the default VLAN 1 The tagged VLAN membership of a server facing LAG is automatically configured based on the server facing ports that are members of the LAG The untagged VLAN of a server facing LAG is auto configured based on the untagged VLAN to which the lowest numbered server facing port in the LAG belongs e Allinterfaces are auto configured as members of all 4094 VLANs and untagged VLAN 1 All VLANs are up and can send or receive layer 2 traffic You can use the Command Line Interface CLI or CMC interface to configure only the required VLANs on a port interface e Aggregator ports are numbered 1 to 56 P
73. interfaces operate in Layer 3 mode Dell conf if te 0 1 show config interface TenGigabitEthernet 0 1 mtu 12000 portmode hybrid switchport auto vlan protocol lldp Interfaces 95 advertise management tlv system name dcbx port role auto downstream no shutdown Dell conf if te 0 1 To view the interfaces in Layer 2 mode use the show interfaces switchport command in EXEC mode Management Interfaces An Aggregator auto configures with a DHCP based IP address for in band management on VLAN 1 and remote out of band OOB management The IOM management interface has both a public IP and private IP address on the internal Fabric D interface The public IP address is exposed to the outside world for WebGUI configurations WSMAN and other proprietary traffic You can statically configure the public IP address or obtain the IP address dynamically using the dynamic host configuration protocol DHCP Accessing an Aggregator You can access the Aggregator using e Internal RS 232 using the chassis management controller CMC Telnet into CMC and do a connect b switch id to get console access to corresponding IOM e External serial port with a universal serial bus USB connector front panel connect using the IOM front panel USB serial line to get console access Labeled as USB B e Telnet ssh using the public IP interface on the fabric D interface e CMC through the private IP interface on the fabric D interface
74. local area network VLAN tagged packets and dotlp priority values Untagged packets are treated with a dot1p priority of 0 For DCB to operate effectively you can classify ingress traffic according to its dotip priority so that it maps to different data queues The dotlp queue assignments used are shown in the following table To enable DCB enable either the SCSI optimization configuration or the FCoE configuration For information to configure iSCSI optimization refer to SCSI Optimization For information to configure FCoE refer to Fibre Channel over Ethernet 38 Data Center Bridging DCB To enable DCB with PFC buffers on a switch enter the following commands save the configuration and reboot the system to allow the changes to take effect 1 Enable DCB CONFIGURATION mode dcb enable 2 Set PFC buffering on the DCB stack unit CONFIGURATION mode dcb stack unit all pfc buffering pfc ports 64 pfc queues 2 NOTE To save the pfc buffering configuration changes save the configuration and reboot the system NOTE Dell Networking OS Behavior DCB is not supported if you enable link level flow control on one or more interfaces For more information refer to Flow Control Using Ethernet Pause Frames Data Center Bridging Auto DCB Enable Mode On an Aggregator in standalone or VLT modes the default mode of operation for data center bridging on Ethernet ports is auto DCB enable mode In this mode Aggregator ports det
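Putting the two commands together with the save and reboot that the notes above call for, a sketch looks like the following (the reload confirmation prompt varies by release):
Dell(conf)# dcb enable
Dell(conf)# dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2
Dell(conf)# end
Dell# copy running-config startup-config
Dell# reload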
75. module for LLDP configuration statistics local system data and remote systems data components The LLDP Management Information Base extension module for IEEE 802 1 organizationally defined discovery information LLDP DOT1 MIB and LLDP DOT3 MIB The LLDP Management Information Base extension module for IEEE 802 3 organizationally defined discovery information LLDP DOT1 MIB and LLDP DOT3 MIB sFlow Version 5 sFlow Version 5 MIB Force10 Enterprise IF Extension MIB extends the Interfaces portion of the MIB 2 RFC 1213 by providing proprietary SNMP OIDs for other counters displayed in the show interfaces output Force10 Enterprise Link Aggregation MIB Force10 File Copy MIB supporting SNMP SET operation Force10 Monitoring MIB Force10 Product Object Identifier MIB Force10 S Series Enterprise Chassis MIB Force10 Structure of Management Information Force10 System Component MIB enables the user to view CAM usage information Force10 Textual Convention Force10 Trap Alarm MIB Force10 FIP Snooping MIB Based on T11 FCoE MIB mentioned in FC BB 5 Force10 DCB MIB 515 RFC Full Name IEEE 802 1Qaz Management Information Base extension module for IEEE 802 1 organizationally defined discovery information LDP EXT DOT1 DCBX MIB IEEE 802 1Qbb Priority based Flow Control module for managing IEEE 802 1Qbb MIB Location You can find Force10 MIBs under the Force10 MIBs subhead on the Documentation page of iSupport
76. module uses the same baseboard hardware of the MXL 10 40GbE Switch or the Aggregator and the M1000 chassis You can insert the FC Flex IO module into any of the optional module FC Flex IO Modules 259 slots of the MXL 10 40GbE Switch and it provides four FC ports per module If you insert only one FC Flex IO module four ports are supported if you insert two FC Flex IO modules eight ports are supported By installing an FC Flex IO module you can enable the MXL 10 40GbE Switch and I O Aggregator to directly connect to an existing FC SAN network The FC Flex IO module uses the existing slots on the MXL 10 40GbE Switch and I O Aggregator and provides four or eight FC ports up to speed of 8 GbE per second You can connect all of the FC ports to the same FC SAN fabric to yield FC bandwidth of up to 64GB It is possible to connect some of the ports to a different FC SAN fabric to provide access to multiple fabric devices In a typical Fibre Channel storage network topology separate network interface cards NICs and host bus adapters HBAs on each server two each for redundancy purposes are connected to LAN and SAN networks respectively These deployments typically include a ToR SAN switch in addition to a ToR LAN switch By employing converged network adapters CNAs that the FC Flex IO module supports CNAs are used to transmit FCoE traffic from the server instead of separate NIC and HBA devices In such a scenario you can determine whether the FC
77. monitors and tracks active iSCSI sessions in connections on the switch including port information and iSCSI session information 116 SCSI Optimization e iSCSI QoS A user configured iSCSI class of service CoS profile is applied to all iSCSI traffic Classifier rules are used to direct the iSCSI data traffic to queues that can be given preferential QoS treatment over other data passing through the switch Preferential treatment helps to avoid session interruptions during times of congestion that would otherwise cause dropped iSCSI packets e iSCSI DCBx TLVs are supported The following figure shows iSCSI optimization between servers in an M1000e enclosure and a storage array in which an Aggregator connects installed servers iSCSI initiators to a storage array iSCSI targets in a SAN network iSCSI optimization running on the Aggregator is configured to use dotlp priority queue assignments to ensure that iSCSI traffic in these sessions receives priority treatment when forwarded on Aggregator hardware M iSCSI Storage Array c wu aa one Aggregators Installed in M1000e Chassis Servers Installed in M1000e Chassis Figure 16 iSCSI Optimization Example Monitoring iSCSI Traffic Flows The switch snoops iSCSI session establishment and termination packets by installing classifier rules that trap iSCSI protocol packets to the CPU for examination Devices that initiate iSCSI sessions usually use well known TCP ports 3260 or 860 t
78. network intermediate Ethernet bridges can snoop on FIP packets during the login process on an FCF Then using ACLs a transit bridge can permit only authorized FCoE traffic to be 72 FIP Snooping transmitted between an FCoE end device and an FCF An Ethernet bridge that provides these functions is called a FIP snooping bridge FSB On a FIP snooping bridge ACLs are created dynamically as FIP login frames are processed The ACLs are installed on switch ports configured for the following port modes ENode mode for server facing ports e FCF mode for a trusted port directly connected to an FCF You must enable FIP snooping on an Aggregator and configure the FIP snooping parameters When you enable FIP snooping all ports on the switch by default become ENode ports Dynamic ACL generation on an Aggregator operating as a FIP snooping bridge functions as follows Global ACLs are applied on server facing ENode ports e Port based ACLs are applied on ports directly connected to an FCF and on server facing ENode ports e Port based ACLs take precedence over global ACLs e FCoE generated ACLs take precedence over user configured ACLs A user configured ACL entry cannot deny FCoE and FIP snooping frames The below illustration depicts an Aggregator used as a FIP snooping bridge in a converged Ethernet network The ToR switch operates as an FCF for FCoE traffic Converged LAN and SAN traffic is transmitted between the ToR switch and an Aggregato
79. of the month start month Enter the name of one of the 12 months in English You can enter the name of a day to change the order of the display to time day month year start day Enter the number of the day The range is from 1 to 31 You can enter the name of a month to change the order of the display to time day month year start year Enter a four digit number as the year The range is from 1993 to 2035 start time Enter the time in hours minutes For the hour variable use the 24 hour format example 17 15 is 5 15 pm end week If you entered a start week enter the one of the following as the week that daylight saving ends week number Enter a number from 1 to 4 as the number of the week in the month to end daylight saving time first Enter the keyword first to end daylight saving time in the first week of the month last Enter the keyword last to end daylight saving time in the last week of the month end month Enter the name of one of the 12 months in English You can enter the name of a day to change the order of the display to time day month year end day Enter the number of the day The range is from 1 to 31 You can enter the name of a month to change the order of the display to time day month year end year Enter a four digit number as the year The range is from 1993 to 2035 end time Enter the time in hours minutes For the hour variable use the 24 hour format example 17 15
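As an illustration of the parameters described above, a date-based daylight saving window could be set as follows; the zone label, dates, and times are placeholders, and the exact keyword order should be verified against your release:
Dell(conf)# clock summer-time pst date 9 march 2014 02:00 2 november 2014 02:00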
80. reset stack unit 0 for config to take effect Dell conf end De11 00 38 16 SSTKUNITO M CP SYS 5 CONFIG I Configured from console Reload the stack units Dell reload Proceed with reload confirm yes no yes Show the units stacking status Dell show system brief Stack MAC 00 01 e8 el el c3 Reload Type normal reload Next boot normal reload PMUX Mode of the lO Aggregator Stack Info Unit UnitType Status ReqTyp CurTyp Version Ports 0 Management online I O Aggregator 1 0 Aggregator lt lt release version gt gt 56 1 Standby online 1 0 Aggregator 1 0 Aggregator lt lt release version gt gt 56 2 Member not present 3 Member not present 4 Member not present 5 Member not present Dell Configuring an NPIV Proxy Gateway Prerequisite Before you configure an NPIV proxy gateway on an IOA or MXL switch with the FC Flex IO module ensure the following e DCB is enabled by default e DCB PFC and ETS parameters are configured for converged traffic To configure the minimum number of member links that must be up for a LAG bundle to be fully up perform the following steps Enable Fibre Channel capability on the switch Create a DCB map Apply a DCB map on server facing Ethernet ports Create an FCoE VLAN Create an FCoE map Apply an FCoE map on server facing Ethernet ports SN 0 1 BOW NES Apply an FCoE Map on fabric facing FC ports Enabling Fibre Chan
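The seven steps above could be condensed into a sketch such as the following. All names, the VLAN ID, the bandwidth split, and the port numbers are placeholders, the priority-group and priority-pgid values must match your own traffic classes, and the prompts are indicative:
Dell(conf)# feature fc
Dell(conf)# dcb-map SAN_DCB_A
Dell(conf-dcbmap)# priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap)# priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap)# priority-pgid 0 0 0 1 0 0 0 0
Dell(conf-dcbmap)# exit
Dell(conf)# interface vlan 1002
Dell(conf-if-vl-1002)# exit
Dell(conf)# fcoe-map SAN_FABRIC_A
Dell(conf-fcoe-map)# fabric-id 1002 vlan 1002
Dell(conf-fcoe-map)# exit
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# dcb-map SAN_DCB_A
Dell(conf-if-te-0/1)# fcoe-map SAN_FABRIC_A
Dell(conf-if-te-0/1)# exit
Dell(conf)# interface fibrechannel 0/9
Dell(conf-if-fc-0/9)# fabric SAN_FABRIC_A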
81. sends an authentication packet with the following Username Senab15 Password lt password entered by user gt Therefore the RADIUS server must have an entry for this username RADIUS Remote authentication dial in user service RADIUS is a distributed client server protocol This protocol transmits authentication authorization and configuration information between a central RADIUS server and a RADIUS client the Dell Networking system The system sends user information to Security for M I O Aggregator 159 the RADIUS server and requests authentication of the user and password The RADIUS server returns one of the following responses e Access Accept the RADIUS server authenticates the user e Access Reject the RADIUS server does not authenticate the user If an error occurs in the transmission or reception of RADIUS packets you can view the error by enabling the debug radius command Transactions between the RADIUS server and the client are encrypted the users passwords are not sent in plain text RADIUS uses UDP as the transport protocol between the RADIUS server host and the client For more information about RADIUS refer to RFC 2865 Remote Authentication Dial in User Service RADIUS Authentication Dell Networking OS supports RADIUS for user authentication text password at login and can be specified as one of the login authentication methods in the aaa authentication login command Idle Time Every ses
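A minimal RADIUS setup on the switch side might look like the following sketch; the server address and shared key are placeholders:
Dell(conf)# radius-server host 10.10.10.5 key MySecret
Dell(conf)# aaa authentication login default radius local
Dell# debug radius
Listing local after radius provides a fallback login method if the RADIUS server is unreachable.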
82. sends and receives over the network As a result the switch can enforce zoning configurations ensure that end devices use their assigned addresses and secure the network from unauthorized access and denial of service attacks To ensure similar Fibre Channel robustness and security with FCoE in an Ethernet cloud network the Fibre Channel over Ethernet initialization protocol FIP establishes virtual point to point links between FCoE end devices server ENodes and target storage devices and FCoE forwarders FCFs over transit FCoE enabled bridges Ethernet bridges commonly provide access control list ACLs that can emulate a point to point link by providing the traffic enforcement required to create a Fibre Channel level of robustness In addition FIP serves as a Layer 2 protocol to e Operate between FCoE end devices and FCFs over intermediate Ethernet bridges to prevent unauthorized access to the network and achieve the required security Allow transit Ethernet bridges to efficiently monitor FIP frames passing between FCoE end devices and an FCF and use the FIP snooping data to dynamically configure ACLs on the bridge to only permit traffic authorized by the FCF FIP enables FCoE devices to discover one another initialize and maintain virtual links over an Ethernet network and access storage devices in a storage area network FIP satisfies the Fibre Channel requirement for point to point connections by creating a unique virtual link for
83. slot port 10 Gigabit Ethernet enter tengigabitethernet slot port Port channel enter port channel 1 128 Example of the show vlt backup link Command VLT Backup Link Destination Peer HeartBeat status HeartBeat Timer Interval HeartBeat Timeout UDP Port HeartBeat Messages Sent HeartBeat Messages Received VLT Backup Link Destination Peer HeartBeat status HeartBeat Timer Interval HeartBeat Timeout UDP Port HeartBeat Messages Sent HeartBeat Messages Received Dell VLTpeerl show vlt backup link 10 11 200 18 Up 1 3 34998 1026 1025 Dell VLTpeer24 show vlt backup link 10 11 200 20 Up 1 3 34998 1030 1014 Example of the show vlt brief Command Dell VLTpeerl show vlt brief VLT Domain Brief Domain ID Role Role Priority ICL Link Status HeartBeat Status VLT Peer Status Local Unit Id Version Local System MAC address Remote System MAC address Remote system version Delay Restore timer Dell VLTpeer2 show vlt brief VLT Domain Brief Domain ID Role Role Priority PMUX Mode of the IO Aggregator 1000 Secondary 32768 Up Up Up 0 E 00 01 e8 8a e9 70 00 01 e8 8a e7 e7 Configured System MAC address 00 0a 0a 01 01 0a 5 1 90 seconds 1000 Primary 32768 253 ICL Link Status HeartBeat Status VLT Peer Status Local Unit Id Version Local
84. slot port information e Fora 10GbE interface enter the keyword TenGigabitEthernet followed by the slot port numbers for example interface tengigabitethernet 0 5 e For the management interface on a stack unit enter the keyword ManagementEthernet followed by the slot port numbers for example interface managementethernet 0 0 2 shutdown INTERFACE Enter the shutdown command to disable the interface To confirm that the interface is enabled use the show config command in INTERFACE mode To leave INTERFACE mode use the exit command or end command You cannot delete a physical interface The management IP address on the D fabric provides a dedicated management access to the system The switch interfaces support Layer 2 traffic over the 10 Gigabit Ethernet interfaces These interfaces can also become part of virtual interfaces such as VLANs or port channels For more information about VLANs refer to VLANs and Port Tagging For more information about port channels refer to Port Channel Interfaces Dell Networking OS Behavior The Aggregator uses a single MAC address for all physical interfaces Layer 2 Mode On an Aggregator physical interfaces port channels and VLANs auto configure to operate in Layer 2 mode Following example demonstrates about the basic configurations found in Layer 2 interface K NOTE Layer 3 Network mode is not supported on Aggregator physical interfaces port channels and VLANs Only management
85. source on the switch true yes or false no DCBx version accepted in a DCB configuration as compatible In auto upstream mode a port can only received a DCBx version supported on the remote peer DCBx version configured on the port CEE CIN IEEE v2 5 or Auto port auto configures to use the DCBx version received from a peer DCBx version that the peer uses to exchange DCB parameters Transmission status enabled or disabled of advertised DCB TLVs see TLV code at the top of the show command output DCBx version advertised in Control TLVs Highest DCBx version supported in Control TLVs Sequence number transmitted in Control TLVs Data Center Bridging DCB Field Local DCBx Status Acknowledgment Number Local DCBx Status Protocol State Peer DCBx Status DCBx Operational Version Peer DCBx Status DCBx Max Version Supported Peer DCBx Status Sequence Number Peer DCBx Status Acknowledgment Number Total DCBx Frames transmitted Total DCBx Frames received Total DCBx Frame errors Total DCBx Frames unrecognized FC TLV Statistics Input PFC TLV pkts FC TLV Statistics Output PFC TLV pkts P P PFC TLV Statistics Error PFC pkts PFC TLV Statistics PFC Pause Tx pkts P FC TLV Statistics PFC Pause Rx pkts PG TLV Statistics Input PG TLV Pkts PG TLV Statistics Output PG TLV Pkts PG TLV Statistics Error PG TLV Pkts Application Priority TLV Statistics Input Appln Priority TLV pkts
86. sources. The multicast router must satisfy all hosts if they have conflicting requests. For example, if another host on the subnet is interested in traffic from 10.11.1.3 only, the router cannot record the include request. There are no other interested hosts, so the request is recorded. At this point, the multicast routing protocol prunes the tree to all but the specified sources.
Internet Group Management Protocol (IGMP) 87
• The host's third message indicates that it is only interested in traffic from sources 10.11.1.1 and 10.11.1.2. Because this request again prevents all other sources from reaching the subnet, the router sends another group-and-source query so that it can satisfy all other hosts. There are no other interested hosts, so the request is recorded.
Membership Reports: Joining and Filtering
[Figure: IGMP group-and-source-specific query (Type 0x11, group address 224.1.1.1, number of sources 1, source address 10.11.1.1) exchanged between the querier and non-querier, and the corresponding IGMP join/membership report messages (Type 0x22) carrying group records for multicast address 224.1.1.1 with source address 10.11.1.1.]
87. speeds are 2 Gbps 4 Gbps and 8 Gbps e By default the FC ports are configured to operate in N port mode to connect to an F port on an FC switch in a fabric You can apply only one FCoE map on an FC port An N Port is a port on the node of an FC device and is called a node port There should a maximum of 64 server fabric login FLOGI requests or fabric discovery FDISC requests per server MAC address before being forwarded by the FC Flex IO module to the FC core switch Without user configuration only 32 server login sessions are permitted for each server MAC address To increase the total number of sessions to 64 use the max sessions command e Adistance of up to 500 meters is supported at 8 Gbps for Fibre Channel traffic e Multiple domains are supported in an NPIV proxy gateway NPQ You cannot configure the MXL or Aggregator switches in Stacking mode if the switches contain the FC Flex IO module Similarly FC Flex IO modules do not function when you insert them in to a stack of MXL or Aggregrator switches e fthe switch does not contain FC Flex modules you cannot create a stack and a log message states that stacking is not supported unless the switches contain only FC Flex modules Guidelines for Working with FC Flex IO Modules The following guidelines apply to the FC Flex IO module All the ports of FC Flex IO modules operate in FC mode and do not support Ethernet mode e FC Flex IO modules are not supported in the ch
88. ... 108
MTU Size 109
Auto-Negotiation on Ethernet Interfaces 110
Setting Auto-Negotiation Options 112
Viewing Interface Information
Clearing Interface Counters
Enabling the Management Address TLV on All Interfaces of an Aggregator 115
Enhanced Validation of Interface Ranges 115
9 iSCSI Optimization 116
iSCSI Optimization Overview 116
Monitoring iSCSI Traffic Flows 117
Information Monitored in iSCSI Traffic Flows 118
Detection and Auto-configuration for Dell EqualLogic Arrays 118
iSCSI Optimization Operation 118
Displaying iSCSI Optimization Information 119
10 Isolated Networks for Aggregators 121
Configuring and Verifying Isolated Network Settings 121
11 Link Aggregation 122
Supported Modes
How the LACP is Implemented on an Aggregator
Server-Facing LAGs
LACP Modes
Auto-Configured LACP Timeout
89. test The port must be enabled to run the test or the test prints an error message 2 show tdr tengigabitethernet EXEC Privilege Displays TDR test results slot port Flow Control Using Ethernet Pause Frames An Aggregator auto configures to operate in auto DCB enable mode Refer to Data Center Bridging Auto DCB Enable Mode In this mode Aggregator ports detect whether peer devices support converged enhanced Ethernet CEE or not and enable DCBX and PFC or link level flow control accordingly Interfaces come up with DCB disabled and link level flow control enabled to control data transmission between the Aggregator and other network devices e When DCB is disabled on an interface PFC ETS and DCBX are also disabled e When DCBX protocol packets are received interfaces automatically enable DCB and disable link level flow control e DCB is required for PFC ETS DCBX and FCoE initialization protocol FIP snooping to operate Link level flow control uses Ethernet pause frames to signal the other end of the connection to pause data transmission for a certain amount of time as specified in the frame Ethernet pause frames allow for 108 Interfaces a temporary stop in data transmission A situation may arise where a sending device may transmit data faster than a destination device can accept it The destination sends a pause frame back to the source stopping the sender s transmission for a period of time The globall
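A minimal sketch of the TDR sequence described above, assuming the test is started with the tdr-cable-test command on an enabled port (the slot/port number is illustrative):
Dell# tdr-cable-test tengigabitethernet 0/1
Dell# show tdr tengigabitethernet 0/1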
90. the global default values for all RADIUS host are applied To specify multiple RADIUS server hosts configure the radius server host command multiple times If you configure multiple RADIUS server hosts Dell Networking OS attempts to connect with them in the order in which they were configured When Dell Networking OS attempts to authenticate a user the software connects with the RADIUS server hosts one at a time until a RADIUS server host responds with an accept or reject response If you want to change an optional parameter setting for a specific host use the radius server host command To change the global communication settings to all RADIUS server hosts refer to Setting Global Communication Parameters for all RADIUS Server Hosts To view the RADIUS configuration use the show running config radius command in EXEC Privilege mode To delete a RADIUS server host use the no radius server host hostname ip address command Setting Global Communication Parameters for all RADIUS Server Hosts You can configure global communication parameters auth port key retransmit and timeout parameters and specific host communication parameters on the same system However if you configure both global and specific host parameters the specific host parameters override the global parameters for that RADIUS server host To set global communication parameters for all RADIUS server hosts use the following commands e Seta time interval aft
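As a sketch of the ordering behavior described above, two RADIUS hosts can be configured and then verified; the addresses and keys below are placeholders. The first host configured is contacted first during authentication.
Dell(conf)# radius-server host 10.10.10.1 key secret1
Dell(conf)# radius-server host 10.10.10.2 key secret2
Dell(conf)# exit
Dell# show running-config radius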
91. the port LED shows a blinking green light The Link LED displays solid green when a proper link with the peer is established If there is no connectivity the LEDs are not lit e The MXL and IOA switches continue to operate in FCoE Gateway mode even if connectivity to a TOR switch does not exist e The I O Aggregator examines whether the FC Flex IO module is inserted into the switch When the FC Flex IO module is present during the boot process the switch runs in FCoE NPIV gateway mode by default e When an FC Flex IO module is present in the I O Aggregator the software autoconfigures the DCB settings on the ports that support DCB and does not retrieve these settings from the ToR switch Active fabric manager AFM is compatible with FC Flex IO modules e All SNMP MIBs that are supported for MXL and IOA switches apply equally for FC Flex IO modules The interface MIB indicates the FC interface when you install the FC flex IO module The interface MIB statistical counters compute and display the FC interface metrics e When the Dell Networking OS sends FC frames the initial FLOGI or FLOGO messages or converts FLOGI to FDISC messages or processes any internally generated FC frames the software computes and verifies the FC cyclic redundancy check CRC value before sending the frame to FC ports Fabric worldwide name WWN verification is available for eight FC ports Single switching WWN capability is provided when the switch operat
92. the principal SAN switch The principal switch in a fabric is the FC switch with the lowest domain ID When you apply the FCoE map to a server facing Ethernet port in ENode mode ACLs are automatically configured to allow only FCOE traffic from servers that perform a successful FLOGI on the FC switch All other traffic on the VLAN is denied You can specify one or more upstream N ports in an FCoE map The FCoE map also contains the VLAN ID of the dedicated VLAN used to transmit FCOE traffic between the SAN fabric and servers 270 FC Flex IO Modules NPIV Proxy Gateway Protocol Services An MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPG provides the following protocol services e Fibre Channel service to create N ports and log in to an upstream FC switch e FCoE service to perform e Virtualization of FC N ports on an NPG so that they appear as FCoE FCFs to downstream servers e NPIV service to perform the association and aggregation of FCoE servers to upstream F ports on core switches through N ports on the NPG Conversion of server FLOGIs and FDISCs which are received over MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module ENode ports are converted into FDISCs addressed to the upstream F ports on core switches NPIV Proxy Gateway Functionality An MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPG provides the following functionality in a storage area network e FIP S
93. the smallest available unit ID in the stack To add a standalone Aggregator to a stack follow these steps 1 Power on the switch Attach QSFP or direct attach cables to connect 40GbE ports on the switch to one or more switches in the stack 3 Log onto the CLI and enter global configuration mode Login username Password Dell gt enable Dell configure 4 Configure the Aggregator to operate in stacking mode CONFIGURATION mode stack unit 0 iom mode stack 5 Reload the switch Dell Operating System automatically assigns a number to the new unit and adas it as member switch in the stack The new unit synchronizes its running and startup configurations with the stack EXEC Privilege mode reload If an Aggregator is already configured to operate in stacking mode simply attach QSFP or direct attach cables to connect 40GbE ports on the base module of each stacked Aggregator The new unit synchronizes its running and startup configurations with the stack Dell Networking OS Behavior When you add a new Aggregator to a stack e If the new unit has been configured with a stack number that is already assigned to a stack member the stack avoids a numbering conflict by assigning the new switch the first available stack number e If the stack has been provisioned for the stack number that is assigned to the new unit the pre configured provisioning must match the switch type If there is a conflict between the provisioned sw
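A condensed sketch of converting a standalone unit to stacking mode; the save step before the reload is an assumption consistent with the stack-group prompts shown elsewhere in this chapter.
Dell# configure
Dell(conf)# stack-unit 0 iom-mode stack
Dell(conf)# end
Dell# copy running-config startup-config
Dell# reload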
94. this chapter To connect the cabling 1 Connect a 40GbE base port on the first Aggregator to a 40GbE base port on another Aggregator in the same chassis 2 Connect a 40GbE base port on the second Aggregator to a 40GbE port on the first Aggregator 194 Stacking The resulting ring topology allows the entire stack to function as a single switch with resilient fail over capabilities If you do not connect the last switch to the first switch Step 4 the stack operates in a daisy chain topology with less resiliency Any failure in a non edge stack unit causes a split stack Accessing the CLI To configure a stack you must access the stack master in one of the following ways e For remote out of band management OOB enter the OOB management interface IP address into a Telnet or secure shell SSH client and log in to the switch using the user ID and password to access the CLI e For local management use the attached console connection to the master switch to log in to the CLI Console access to the stack CLI is available on the master only e For remote in band management from a network management station enter the virtual local area network VLAN IP address of the management port and log in to the switch to access the CLI Configuring and Bringing Up a Stack After you attach the 40G QSFP or direct attach cables in a stack of Aggregators to bring up the stack follow these steps K NOTE The procedure uses command examples fo
95. tools to discover and make available a physical topology for network management The Dell Networking operating software implementation of LLDP is based on IEEE standard 801 1ab The starting point for using LLDP is invoking LLDP with the protocol lldp command in either CONFIGURATION or INTERFACE mode The information LLDP distributes is stored by its recipients in a standard management information base MIB You can access the information by a network management system through a management protocol such as simple network management protocol SNMP An Aggregator auto configures to support the link layer discovery protocol LLDP for the auto discovery of network devices You can use CLI commands to display acquired LLDP information clear LLDP counters and debug LACP operation Overview LLDP defined by IEEE 802 1AB is a protocol that enables a local area network LAN device to advertise its configuration and receive configuration information from adjacent LLDP enabled LAN infrastructure devices The collected information is stored in a management information base MIB on each device and is accessible via simple network management protocol SNMP Protocol Data Units Configuration information is exchanged in the form of type length value TLV segments The below figure shows the chassis ID TLV e Type Indicates the type of field that a part of the message represents e Length Indicates the size of the value field
96. traffic ingresses on a PFC enabled port and egresses on another PFC enabled port Lossless traffic is not guaranteed when it is transmitted on a PFC enabled port and received on a link level flow control enabled port or transmitted on a link level flow control enabled port and received on a PFC enabled port Enabling DCB on Next Reload To configure the Aggregator so that all interfaces come up with DCB enabled and flow control disabled use the dcb enable on next reload command Internal PFC buffers are automatically configured Task Command Command Mode Globally enable DCB on all dcb enable on next reload CONFIGURATION interfaces after next switch reload 40 Data Center Bridging DCB To reconfigure the Aggregator so that all interfaces come up with DCB disabled and link level flow control enabled use the no dcb enable on next reload command PFC buffer memory is automatically freed Enabling Auto DCB Enable Mode on Next Reload To configure the Aggregator so that all interfaces come up in auto DCB enable mode with DCB disabled and flow control enabled use the dcb enable aut detect on next reload command Task Command Command Mode Globally enable auto detection dcb enable auto detect on next CONFIGURATION of DCBx and auto enabling of reload DCB on all interfaces after switch reload Enabling DCB To configure the Aggregator so that all interfaces are DCB enabled and flow control disabled use the dcb enable command Di
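For example, to have every interface come up with DCB enabled after the next reload (a sketch; the hyphenated command form and the save step are assumptions):
Dell(conf)# dcb enable on-next-reload
Dell(conf)# end
Dell# copy running-config startup-config
Dell# reload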
97. up line protocol is down Inbound IGMP access group is not set Interface IGMP group join rate limit is not set IGMP snooping IGMP Snooping IGMP Snooping IGMP Snooping IGMP snooping IGMP snooping SMoxress is enabled on interface query interval is 60 seconds querier timeout is 125 seconds last member query response interval is 1000 ms fast leave is disabled on this interface querier is disabled on this interface show ip igmp snooping mrouter Command Example Dell show ip igmp snooping mrouter Interface Router Ports Vlan 1000 Po 128 Dell Internet Group Management Protocol IGMP 91 8 Interfaces This chapter describes 100 1000 10000 Mbps Ethernet 10 Gigabit Ethernet and 40 Gigabit Ethernet interface types both physical and logical and how to configure them with the Dell Networking Operating Software OS Basic Interface Configuration Interface Auto Configuration Interface Types Viewing Interface Information Disabling and Re enabling a Physical Interface Layer 2 Mode Management Interfaces VLAN Membership Port Channel Interfaces Advanced Interface Configuration Monitor and Maintain Interfaces Flow Control Using Ethernet Pause Frames MTU Size Auto Negotiation on Ethernet Interfaces Viewing Interface Information Interface Auto Configuration An 92 Aggregator auto configures interfaces as follows All interfaces operate as layer 2
98. verify the configuration of a VLT domain use any of the following show commands on the primary and secondary VLT switches e Display information on backup link operation EXEC mode show vlt backup link e Display general status information about VLT domains currently configured on the switch EXEC mode show vlt brief e Display detailed information about the VLT domain configuration including local and peer port channel IDs local VLT switch status and number of active VLANs on each port channel EXEC mode show vlt detail e Display the VLT peer status role of the local VLT switch VLT system MAC address and system priority and the MAC address and priority of the locally attached VLT device EXEC mode show vlt role e Display the current configuration of all VLT domains or a specified group on the switch EXEC mode show running config vlt e Display statistics on VLT operation EXEC mode show vlt statistics e Display the RSTP configuration on a VLT peer switch including the status of port channels used in the VLT interconnect trunk and to connect to access devices EXEC mode show spanning tree rstp e Display the current status of a port or port channel interface used in the VLT domain EXEC mode 252 PMUX Mode of the IO Aggregator show interfaces interface interface specify one of the following interface types Fast Ethernet enter fastethernet slot port 1 Gigabit Ethernet enter gigabitethernet
99. 0 00 00 00 00 00 00 00 00 00 0000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 The last byte is free byte The bit for LAGs starts from 43 byte If server LAG 1 is created with server ports Te 0 6 and Te 0 7 the respective bit for the ports are unset and the bit for LAG 1 is set in default VLAN The corresponding output will be as follows snmpwalk Os c public v 1 10 16 151 151 1 3 6 1 2 1 17 7 1 4 2 1 5 mib 2 17 7 1 4 2 1 5 0 1107525633 Hex STRING F9FFFFFF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 O1 00 In the above example the 43rd byte is set to 80 The 43rd byte is for LAG IDs from 1 to 8 But only one LAG po 1 is set as switch port Hence the binary bits will be 10000000 If this converted to Hexadecimal the value will be 80 Similarly the first byte for Te 0 1 to Te 0 8 server ports as the 6th and 7th byte is removed from switch port the respective bits are set to O In binary the value is 11111001 and the corresponding hex decimal value is F9 In standalone mode there are 4000 VLANs by default The SNMP output will display for all 4000 VLANs To view a particular VLAN issue the snmp query with VLAN interface ID Dell show interface vlan 1010 grep Interface index Interface index is 1107526642 Use the output of the above comman
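Continuing the example above, the interface index returned for VLAN 1010 can be appended as the instance in the same egress-port query; the community string and management address match the earlier walk, and the exact instance form is an assumption.
snmpwalk -Os -c public -v 1 10.16.151.151 1.3.6.1.2.1.17.7.1.4.2.1.5.0.1107526642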
100. 00baseT Error Speed 100 not supported on this interface Not Error supported messages not thrown wherever it says not supported Supported Not Error supported messages not thrown wherever it says not supported Not Error supported messages not thrown wherever it says not supported Interfaces be thrown duplex half interface Supported CLI not CLI not available Invalid Input config available error CLI mode not available duplex full interface Supported CLI not CLI not available Invalid Input config available error CLl mode not available Setting Auto Negotiation Options Dell conf int tengig 0 1 Dell conf if te 0 1 neg auto Dell conf if autoneg end Exit from configuration mode exit Exit from autoneg configuration mode mode Specify autoneg mode no egate a command or set its defaults show Show autoneg configuration information Dell conf if autoneg mode forced master Force port to master mode forced slave Force port to slave mode Dell conf if autoneg Viewing Interface Information Displaying Non Default Configurations The show ip running config interfaces configured command allows you to display only interfaces that have non default configurations are displayed The below example illustrates the possible show commands that have the available configured keyword Dell show interfaces configured Dell show interfaces tengigabitEthernet 0 configured D
101. 02 101 Internal untagged I Internal igabitEthernet 0 1 ed Hybrid SMUX port mode Auto VLANs enabled Vlan membe 0 V1 U il T 2 Native Vla status Dell conf Dell conf Dell conf Dell show Codes U x G i VLT tagged a l enG ragg Name 802 101 SMUX port mode Vlan membe 0 V1 U 1 T 2 Native Vla rship ans 4094 nid 1 interface tengigabitethernet 0 1 if te 0 1 vlan tagged 2 5 100 401 if te 0 1 interfaces switchport tengigabitet Untagged T Tagged Dotlx untagged X Dotlx tagged GVRP tagged M Trunk H VSN Internal untagged I Internal igabitEthernet 0 1 ed Hybrid Admin VLANs enabled rship ans 5 100 4010 nid 1 Software show Commands tagged v VLT untagged V Assign the port to a specified group of VLANs vlan tagged command and re display the port mode 0 hernet 0 1 tagged tagged v VLT untagged V Use the show version and show system stack unit 0 commands as a part of troubleshooting an Aggregator s software configuration in a standalone or stacking scenario Table 25 Software Command show version show system stack unit O show Commands Description Display the current version of Dell Networking OS software running on an Aggregator Display software c show version Command Example Dellfshow vers Dell Real Tim Dell Operating System Version Dell Force10 Appl Debug
102. 027 3 2 1 1 2 1 Hex STRING 00 01 E8 13 A5 C7 SNMPv2 SMI enterprises 6027 3 2 1 L 2 2 Hex STRING 00 01 E8 13 A5 C8 SNMPv2 SMI enterprises 6027 3 2 1 1 3 1 INTEGER 1107755009 SNMPv2 SMI enterprises 6027 3 2 1 1 3 2 INTEGER 1107755010 SNMPv2 SMI enterprises 6027 3 2 1 1 4 1 INTEGER SNMPv2 SMI enterprises 6027 3 2 1 1 4 2 INTEGER 1 SNMPv2 SMI enterprises 6027 3 2 1 1 5 1 Hex STRING 00 00 SNMPv2 SMI enterprises 6027 3 2 1 1 5 2 Hex STRING 00 00 SNMPv2 SMI enterprises 6027 3 2 1 1 6 1 STRING Tengig 0 4 lt lt Channel member for Poi SNMPv2 SMI enterprises 6027 3 2 1 6 2 STRING Tengig 0 5 lt lt Channel member for Po2 dot3aCommonAggFdbIndex SNMPv2 SMI enterprises 6027 3 2 1 1 6 1 1 1107755009 1 INTEGER 1107755009 dot3aCommonAggFdbVlanId SNMPv2 SMI enterprises 6027 3 2 1 1 6 1 2 1107755009 1 INTEGER 1 dot3aCommonAggFdbTagConfig SNMPv2 SMI enterprises 6027 3 2 1 1 6 1 3 1107755009 1 INTEGER 2 Tagged 1 or Untagged 2 dot3aCommonAggFdbStatus SNMPv2 SMI enterprises 6027 3 2 1 1 6 1 4 1107755009 1 INTEGER 1 lt lt Status active 2 status inactive If you learn the MAC address for the LAG the LAG status also displays 182 Simple Network Management Protocol SNMP dot3aCurAggVlanId SNMPv2 SMI enterprises 6027 3 2 1 4 1 1 1 0 0 0 0 0 1 1 dot3aCurAggMacAddr SNMPv2 SMI enterprises 602
103. 1 to 31 You can enter the name of a month to change the order of the display to time day month year year Enter a four digit number as the year The range is from 1993 to 2035 Example of the clock set Command Dell clock set 12 11 00 21 may 2012 Dell Setting the Timezone Universal time coordinated UTC is the time standard based on the International Atomic Time standard commonly known as Greenwich Mean time When determining system time you must include the differentiator between the UTC and your local timezone For example San Jose CA is the Pacific Timezone with a UTC offset of 8 To set the clock timezone use the following command System Time and Date 209 Set the clock to the appropriate timezone CONFIGURATION mode clock timezone timezone name offset timezone name Enter the name of the timezone Do not use spaces offset Enter one of the following anumber from 1 to 23 as the number of hours in addition to UTC for the timezone a minus sign then a number from 1 to 23 as the number of hours Example of the clock timezone Command Dell conf Dell conf clock timezone Pacific 8 Dell Setting Daylight Savings Time Dell Networking OS supports setting the system to daylight savings time once or on a recurring basis every year Setting Daylight Saving Time Once Set a date and time zone on which to convert the switch to daylight saving time on a one time basis To set the clock for d
104. 10 timeout 1 Dell conf tacacs server key angeline Dell conf sRPMO P CP SEC 5 LOGIN SUCCESS Login successful for user admin on vtyO 10 11 9 209 SRPMO P CP SEC 3 AUTHENTICATION ENABLE SUCCESS Enable password authentication success on vty0 10 11 9 209 SRPMO P CP SEC 5 LOGOUT Exec session is terminated for user admin on line vtyO 10 11 9 209 Dell conf username angeline password angeline Dell conf sRPMO P CP SEC 5 LOGIN SUCCESS Login successful for user angeline 164 Security for M I O Aggregator on vty0 10 11 9 209 SRPMO P CP SEC 3 AUTHENTICATION ENABLE SUCCESS Enable password authentication success on vtyO 10 11 9 209 AAA Accounting Accounting authentication and authorization AAA accounting is part of the AAA security model For details about commands related to AAA security refer to the Security chapter in the Dell Networking OS Command Reference Guide AAA accounting enables tracking of services that users are accessing and the amount of network resources being consumed by those services When you enable AAA accounting the network server reports user activity to the security server in the form of accounting records Each accounting record comprises accounting attribute value AV pairs and is stored on the access control server As with authentication and authorization you must configure AAA accounting by defining a na
105. 1057 Organizationally Specific TLVs 142 Link Layer Discovery Protocol LLDP Figure 21 LLDPDU Frame Optional TLVs The Dell Networking Operating System OS supports the following optional TLVs Management TLVs IEEE 802 1 and 802 3 organizationally specific TLVs and TIA 1057 organizationally specific TLVs Management TLVs A management TLV is an optional TLVs sub type This kind of TLV contains essential management information about the sender Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs They have two mandatory fields as shown in the following illustration in addition to the basic TLV fields e Organizationally Unique Identifier OUI a unique number assigned by the IEEE to an organization or vendor e OUI Sub type These sub types indicate the kind of information in the following data field The sub types are determined by the owner of the OUI fnC0052mp 7 bits 9 bits 3 octots 1 octet 0 507 octets Figure 22 Organizationally Specific TLVs LLDP Operation On an Aggregator LLDP operates as follows e LLDP is enabled by default e LLDPDUs are transmitted and received by default LLDPDUS are transmitted periodically The default interval is 50 seconds e LLDPDU information received from a neighbor expires after the default Time to Live TTL value 120 seconds e Dell Networking OS supports up to eight neighbors per interface
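A minimal sketch of enabling LLDP globally, advertising the management TLVs, and then viewing discovered neighbors; the advertised TLV set matches the interface configurations shown later in this guide.
Dell(conf)# protocol lldp
Dell(conf-lldp)# advertise management-tlv management-address system-name
Dell(conf-lldp)# end
Dell# show lldp neighbors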
106. 12 38 show lacp 1 Command Example Dell show lacp 1 Port channel 1 admin up oper up mode lacp Actor System ID Priority 32768 Address 0001 e8e1 e1c3 Partner System ID Priority 65535 Address 24b6 fd87 d8ac Actor Admin Key 1 Oper Key 1 Partner Oper Key 33 VLT Peer Oper Key 1 ACP LAG 1 is an aggregatable link ACP LAG 1 is a normal LAG A Active LACP B Passive LACP C Short Timeout D Long Timeout E Aggregatable Link F Individual Link G IN SYNC H OUT OF SYNC I Collection enabled J Collection disabled K Distribution enabled L Distribution disabled M Partner Defaulted N Partner Non defaulted O Receiver is in expired state P Receiver is not in expired state Port Te 0 12 is enabled LACP is enabled and mode is lacp Port State Bundle Actor Admin State ADEHJLMP Key 1 Priority 32768 Oper State ADEGIKNP Key 1 Priority 32768 Partner Admin State BDFHJLMP Key 0 Priority 0 Oper State ADEGIKNP Key 33 Priority 255 156 Link Aggregation 12 Layer 2 The Aggregator supports CLI commands to manage the MAC address table e Clearing the MAC Address Entries e Displaying the MAC Address Table The Aggregator auto configures with support for Network Interface Controller NIC Teaming Es NOTE On an Aggregator all ports are configured by default as members of all 4094 VLANs including the default VLAN All VLANs operate in Layer 2 mode You can reconfigure the VLAN membership for individual ports by us
107. Figure 13: IGMP Membership Reports: Joining and Filtering
Leaving and Staying in Groups
The illustration below shows how multicast routers track and refresh state changes in response to group-and-source-specific and general queries.
• Host 1 sends a message indicating that it is leaving group 224.1.1.1 and that the include filter for 10.11.1.1 and 10.11.1.2 is no longer necessary. Before making any state changes, the querier sends a group-and-source query to see whether any other host is interested in these two sources; queries for state changes are retransmitted multiple times. If any hosts are interested, they respond with their current state information, and the querier refreshes the relevant state information.
• Separately, in the figure below, the querier sends a general query to 224.0.0.1.
• Host 2 responds to the periodic general query, so the querier refreshes the state information for that group.
Internet Group Management Protocol (IGMP) 88
Figure 14: IGMP Membership Queries: Leaving and Staying in Groups
IGMP Snooping
IGMP snooping is auto-configured on an Aggregator. Multicast packets are addressed with multicast MAC addresses, which represent a group of devices rather than one unique device.
108. 28001ec9f10358 MTU 12000 bytes IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel Te 1 49 U ARP type ARPA ARP Timeout 04 00 00 Last clearing of show interface counters 00 14 06 Queueing strategy fifo Input Statistics 476 packets 33180 bytes 414 64 byte pkts 33 over 64 byte pkts 29 over 127 byte pkts 0 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 476 Multicasts 0 Broadcasts 0 runts 0 giants 0 throttles 0 CRC 0 overrun 0 discarded Output Statistics 9124688 packets 3156959396 bytes 0 underruns 0 64 byte pkts 30 over 64 byte pkts 804 over 127 byte pkts 9123854 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 834 Multicasts 9123854 Broadcasts 0 Unicasts 0 throttles 0 discarded 0 collisions 0 wreddrops Rate info interval 299 seconds Input 00 00 Mbits sec 1 packets sec 0 00 of line rate Output 34 00 Mbits sec 12314 packets sec 0 36 of line rate Time since last interface status change 00 13 57 Interface Range An interface range is a set of interfaces to which other commands may be applied and may be created if there is at least one valid interface within the range Bulk configuration excludes from configuring any non existing interfaces from an interface range A default VLAN may be configured only if the interface range being configured consists of only VLAN ports The interface range command allows you to create an interface range allowing other command
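As a sketch of bulk configuration, the same commands can be applied across a range of 10GbE ports; the port numbers are illustrative.
Dell(conf)# interface range tengigabitethernet 0/1 - 4
Dell(conf-if-range-te-0/1-4)# mtu 12000
Dell(conf-if-range-te-0/1-4)# no shutdown
Dell(conf-if-range-te-0/1-4)# end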
109. 7 3 2 1 4 1 2 1 0 0 0 0 0 1 1 00 00 00 01 dot3aCurAggIndex SNMPv2 SMI enterprises 6027 3 2 1 4 1 3 1 0 0 0 0 0 1 1 dot3aCurAggStatus SNMPv2 SMI enterprises 6027 3 2 1 4 1 4 1 0 0 0 0 0 1 1 active 2 status inactive For L3 LAG you do not have this support SNMPv2 MIB sysUpTime 0 Timeticks 8500842 23 36 48 42 SNMPv2 MIB snmpTrapOID 0 OID IF MIB linkDown IF MIB ifIndex 33865785 INTEGER 33865785 SNMPv2 SMI enterprises 6027 3 1 1 4 1 2 STRING interface state to down Tengig 0 1 2010 02 10 14 22 39 10 16 130 4 10 16 130 4 SNMPv2 MIB sysUpTime 0 Timeticks 8500842 23 36 48 42 SNMPv2 MIB snmpTrapOID 0 OID IF MIB linkDown IF MIB ifIndex 1107755009 INTEGER 1107755009 SNMPv2 SMI enterprises 6027 3 1 1 4 1 2 STRING state to down Po 1 2010 02 10 14 22 40 10 16 130 4 10 16 130 4 SNMPv2 MIB sysUpTime 0 Timeticks 8500932 23 36 49 32 SN MIB snmpTrapOID O OID IF MIB linkUp IF MIB ifIndex 33865785 INTEGER 33865785 S SMI enterprises 6027 3 1 1 4 1 2 STRING OSTATE UP Changed interface state to up Tengig 0 1 2010 02 10 14 22 40 10 16 130 4 10 16 130 4 SNMPv2 MIB sysUpTime 0 Timeticks 8500934 23 36 49 34 SN MIB snmpTrapOID O OID IF MIB linkUp IF MIB ifIndex 1107755009 INTEGER 11077550 SNMPv2 SMI enterprises 6027 3 1 1 4 1 2 STRING state to up Po 1 Entity MIBS
110. AN When you apply an FCoE map on a port FCoE is enabled on the port All non FCoE traffic is dropped on an FCoE VLAN FCoE Initialization Protocol Layer 2 protocol for endpoint discovery fabric login and fabric association FIP is used by server CNAs to discover an upstream FCoE switch operating as an FCF FIP keepalive messages maintain the connection between an FCoE initiator and an FCF N port identifier virtualization The capability to map multiple FCoE links from downstream ports to a single upstream FC link The switch in a fabric with the lowest domain number The principal switch accesses the master name database and the zone zone set database A Data Center Bridging DCB map is used to configure DCB functionality such as PFC and ETS on MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module Ethernet ports that support CEE traffic and are DCBx enabled by default For more information on PFC and ETS see Data Center Bridging DCB By default no PFC and ETS settings in a DCB map are applied to MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module Ethernet ports when they are enabled On an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPG you must configure PFC and ETS parameters ina DCB map and then apply the map to server facing Ethernet ports see the Creating a DCB map section 272 FC Flex IO Modules FCoE Maps An FCoE map is used to identify the SAN fabr
111. Assigns traffic to one priority queue with 20 of the link bandwidth and strict priority scheduling Assigns traffic to one priority queue with 30 of the link bandwidth Assigns traffic to two priority queues with 50 of the link bandwidth and strict priority scheduling In this example the configured ETS bandwidth allocation and scheduler behavior is as follows Unused bandwidth usage Strict priority groups Normally if there is no traffic or unused bandwidth for a priority group the bandwidth allocated to the group is distributed to the other priority groups according to the bandwidth percentage allocated to each group However when three priority groups with different bandwidth allocations are used on an interface e f priority group 3 has free bandwidth it is distributed as follows 20 of the free bandwidth to priority group 1 and 30 of the free bandwidth to priority group 2 e If priority group 1 or 2 has free bandwidth 20 30 of the free bandwidth is distributed to priority group 3 Priority groups 1 and 2 retain whatever free bandwidth remains up to the 20 30 If two priority groups have strict priority scheduling traffic assigned from the priority group with the higher priority queue number is scheduled first However when three priority groups are used and two groups have strict priority scheduling such as groups 1 and 3 in the example the strict priority group whose traffic is mapped to one queue t
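The following is only a sketch of how a comparable three-group bandwidth split could be expressed in a DCB map; the map name, the priority-to-group assignments, and the PFC settings are assumptions, and the strict-priority scheduling described for groups 1 and 3 is not shown here.
Dell(conf)# dcb-map ETS-EXAMPLE
Dell(conf-dcbmap-ETS-EXAMPLE)# priority-group 1 bandwidth 20 pfc off
Dell(conf-dcbmap-ETS-EXAMPLE)# priority-group 2 bandwidth 30 pfc off
Dell(conf-dcbmap-ETS-EXAMPLE)# priority-group 3 bandwidth 50 pfc on
Dell(conf-dcbmap-ETS-EXAMPLE)# priority-pgid 1 1 2 2 3 3 3 3
Dell(conf-dcbmap-ETS-EXAMPLE)# end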
112. B disabled and link level flow control enabled show interfaces Command Example DCB disabled and Flow Control enabled Dell show running config interface te 0 4 Data Center Bridging DCB 39 interface TenGigabitEthernet 0 4 mtu 12000 portmode hybrid switchport auto vlan flowcontrol rx on tx off dcb map DCB MAP PFC OFF no keepalive l protocol lldp advertise management tlv management address system name dcbx port role auto downstream no shutdown Dell When DCB is Enabled When an interface receives a DCBx protocol packet it automatically enables DCB and disables link level flow control The dcb map and flow control configurations are removed as shown in the following example show interfaces Command Example DCB enabled and Flow Control disabled Dell show running config interface te 0 3 interface TenGigabitEthernet 0 3 mtu 12000 portmode hybrid switchport auto vlan l protocol lldp advertise management tlv management address system name dcbx port role auto downstream no shutdown Dell When no DCBx TLVs are received on a DCB enabled interface for 180 seconds DCB is automatically disabled and flow control is re enabled Lossless Traffic Handling In auto DCB enable mode Aggregator ports operate with the auto detection of DCBx traffic At any moment some ports may operate with link level flow control while others operate with DCB based PFC enabled As a result lossless traffic is ensured only if
113. Center Bridging DCB Description Interface type with stack unit and port number Maximum number of priority groups supported Number of 802 1p priorities currently configured ETS mode on or off When on the scheduling and bandwidth allocation configured in an ETS output policy or received in a DCBx TLV from a peer can take effect on an interface ETS configuration on local port including priority groups assigned dotlp priorities and bandwidth allocation ETS configuration on remote peer port including Admin mode enabled if a valid TLV was received or disabled priority groups assigned dotlp priorities and bandwidth allocation If the ETS Admin mode is enabled on the remote port for DCBx exchange the Willing bit received in ETS TLVs from the remote peer is included ETS configuration on local port including Admin mode enabled when a valid TLV is received from a peer priority groups assigned dotip priorities and bandwidth allocation Port state for current operational ETS configuration e Init Local ETS configuration parameters were exchanged with peer e Recommend Remote ETS configuration parameters were received from peer e Internally propagated ETS configuration parameters were received from configuration source 55 Field ETS DCBx Oper status State Machine Type Conf TLV Tx Status Reco TLV Tx Status Input Conf TLV pktsOutput Conf TLV pktsError Conf TLV pkts Input Reco TLV pktsOu
114. Configuring a Static Route for a Management Interface 97
VLAN Membership 98
Default VLAN 98
Port-Based VLANs 98
VLANs and Port Tagging 99
Configuring VLAN Membership 99
Displaying VLAN Membership 100
Adding an Interface to a Tagged VLAN 101
Adding an Interface to an Untagged VLAN 101
Port Channel Interfaces
Port Channel Definitions and Standards
Port Channel Benefits
Port Channel Implementation
1GbE and 10GbE Interfaces in Port Channels
Uplink Port Channel: VLAN Membership
Server-Facing Port Channel: VLAN Membership 103
Displaying Port Channel Information 104
Interface Range 105
Bulk Configuration Examples 105
Monitor and Maintain Interfaces 107
Maintenance Using TDR 108
Flow Control Using Ethernet Pause Frames
115. DIUS and set up TACACS as backup CONFIGURATION mode aaa authentication enable default radius tacacs 2 Establish a host address and password CONFIGURATION mode radius server host x x x x key some password 3 Establish a host address and password CONFIGURATION mode tacacs server host x x x x key some password To get enable authentication from the RADIUS server and use TACACS as a backup issue the following commands Example of Enabling Authentication from the RADIUS Server Dell config aaa authentication enable default radius tacacs Radius and TACACS server has to be properly setup for this Dell config radius server host x x x x key lt some password gt Dell config tacacs server host x x x x key lt some password gt To use local authentication for enable secret on the console while using remote authentication on VTY lines issue the following commands Example of Enabling Local Authentication for the Console and Remote Authentication for VTY Lines Dell config aaa authentication enable mymethodlist radius tacacs Dell config line vty 0 9 Dell config line vty enable authentication mymethodlist Server Side Configuration e TACACS When using TACACS Dell Networking OS sends an initial packet with service type SVC ENABLE and then sends a second packet with just the password The TACACS server must have an entry for username enable e RADIUS When using RADIUS authentication Dell Networking OS
116. Dell Networking OS CLI is divided into three major mode levels e EXEC mode is the default mode and has a privilege level of 1 which is the most restricted level Only a limited selection of commands is available notably the show commands which allow you to view system information 20 Configuration Fundamentals EXEC Privilege mode has commands to view configurations clear counters manage configuration files run diagnostics and enable or disable debug operations The privilege level is 15 which is unrestricted You can configure a password for this mode e CONFIGURATION mode allows you to configure security features time settings set logging and SNMP functions configure static ARP and MAC addresses and set line cards on the system Beneath CONFIGURATION mode are submodes that apply to interfaces protocols and features The following example shows the submode command structure Two sub CONFIGURATION modes are important when configuring the chassis for the first time INTERFACE submode is the mode in which you configure Layer 2 protocols and IP services specific to an interface An interface can be physical 10 Gigabit Ethernet or logical Null port channel or virtual local area network VLAN e LINE submode is the mode in which you to configure the console and virtual terminal lines NOTE At any time entering a question mark displays the available command options For example when you are in CONFIGURATION mode ent
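A short walk through the mode levels described above (the interface number is illustrative):
Dell> enable
Dell# configure
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# end
Dell#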
117. Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.5(0.1)
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Dell and the Dell logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
2014 - 07 Rev. A00
Contents
1 About this Guide 13
Audience 13
Conventions 13
Related Documents 14
2 Before You Start 15
IOA Operational Modes 15
Standalone mode 15
Stacking mode 15
VLT mode 15
Programmable MUX mode 15
Default Settings 16
Other Auto-Configured Settings 16
Data Center Bridging Support
118. E traffic Dell conf interface vlan 1002 4 Configure an FCoE map to be applied on downstream server facing Ethernet and upstream core facing FC ports Dell config fcoe map SAN FABRIC A Dell config fcoe name fabric id 1002 vlan 1002 Dell config fcoe name description SAN FABRIC A Dell config fcoe name fc map Oefc00 Dell config fcoe name keepaliv Dell config fcoe name fcf priority 128 Dell config fcoe name fka adv period 8 5 Enable an upstream FC port Dell config interface fibrechannel 0 0 Dell config if fc 0 no shutdown FC Flex IO Modules 279 6 Enable a downstream Ethernet port Dell config interface tengigabitEthernet 0 0 Dell conf if te 0 no shutdown Displaying NPIV Proxy Gateway Information To display information on the NPG operation use the show commands in the following table Table 18 Displaying NPIV Proxy Gateway Information Command show interfaces status show fcoe map brief map name show qos dcb map map name show npiv devices brief show fc switch Description Displays the operational status of Ethernet and Fibre Channel interfaces on an MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module NPG K NOTE Although the show interface status command displays the Fiber Channel FC interfaces with the abbreviated label of Fc in the output if you attempt to specify a FC interface by using the
119. Error Fr Total Unrecognize Total TLVs Discar vi rhe neighbors are Remote Chassis Remote Chassis Remote Port Sub Remote Port ID Local Port ID Clearing LLDP Next packet will b 0 3 has 1 neighbor 39165 40650 formation Age outs 0 ighbors Detected 0 arded 0 ames 0 d TLVs 0 ded 0 sent after 4 seconds given below ID Subtype Mac address 4 ID 00 00 c9 ad f6 12 type Mac address 3 00 00 c9 ad 6 12 TenGigabitEthernet 0 3 Counters You can clear LLDP statistics that are maintained on an Aggregator for LLDP counters for frames transmitted to and received from neighboring devices on all or a specified physical interface To clear LLDP counters enter the clear lldp counters command Link Layer Discovery Protocol LLDP 145 Command Syntax Command Mode Purpose clear lldp counters interface EXEC Privilege Clear counters for LLDP frames sent to and received from neighboring devices on all Aggregator interfaces or on a specified interface interface specifies a TOGbE uplink port in the format tenGigabitEthernet slot port Debugging LLDP You can view the TLVs that your system is sending and receiving To view the TLVs use the following commands View a readable version of the TLVs debug lldp brief e View a readable version of the TLVs plus a hexadecimal version of the entire LLDPDU debug lldp detail 1widi9h 1widl9h 1widi9h 1widi9h 1widi9hn 1w1d19h Sending LLDP
120. FC and I D Compliance The Dell Networking OS supports the following standards The standards are grouped by related protocol The columns showing support by platform indicate which version of Dell Networking OS first supports the standard Standards Compliance 309 General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols Table 27 General Internet Protocols RFCH Full Name 768 User Datagram Protocol 793 Transmission Control Protocol 854 Telnet Protocol Specification 959 File Transfer Protocol FTP 1321 The MD5 Message Digest Algorithm 1350 The TFTP Protocol Revision 2 1661 The Point to Point Protocol PPP 1989 PPP Link Quality Monitoring 1990 The PPP Multilink Protocol MP 1994 PPP Challenge Handshake Authentication Protocol CHAP 2474 Definition of the Differentiated Services Field DS Field in the IPv4 and IPv6 Headers 2698 A Two Rate Three Color Marker 3164 The BSD syslog Protocol General IPv4 Protocols The following table lists the Dell Networking OS support per platform for general IPv4 protocols Table 28 General IPv4 Protocols RFC Full Name 791 Internet Protocol 792 Internet Control Message Protocol 826 An Ethernet Address Resolution Protocol 1027 Using ARP to Implement Transparent Subnet Gateways 1035 DOMAIN NAMES IMPLEMENTATION AND SPECIFICATION client 1042 A Standard for the Transmission of IP Datagrams over
121. FP port SNMPv2 SMI mib 2 47 1 2 37 STRING Unit 0 Port 33 40G Level SNMPv2 SMI mib 2 47 L 2 41 STRING 40G QSFP port SNMPv2 SMI mib 2 47 L 2 42 STRING Unit 0 Port 37 40G Level SNMPv2 SMI mib 2 47 1 1 2 46 STRING Optional module 0 SNMPv2 SMI mib 2 47 1 1 2 56 STRING Optional module 1 SNMPv2 SMI mib 2 47 L 2 57 STRING 4 port 10GE 10BASE T XL SNMPv2 SMI mib 2 47 1 2 58 STRING Unit 0 Port 49 10G Level SNMPv2 SMI mib 2 47 1 2 59 STRING Unit 0 Port 50 10G Level SNMPv2 SMI mib 2 47 1 2 60 STRING Unit 0 Port 51 10G Level SNMPv2 SMI mib 2 47 1 2 61 STRING Unit 0 Port 52 10G Level SNMPv2 SMI mib 2 47 1 1 2 66 STRING PowerConnect I O Aggregator SNMPv2 SMI mib 2 47 1 2 67 STRING Module 0 SNMPv2 SMI mib 2 47 1 2 68 STRING Unit Port 1 10G Level SNMPv2 SMI mib 2 47 1 2 69 STRING Unit Port 2 10G Level SNMPv2 SMI mib 2 47 L 2 70 STRING Unit Port 3 10G Level SNMPv2 SMI mib 2 47 1 2 71 STRING Unit Port 4 10G Level SNMPv2 SMI mib 2 47 1 2 72 STRING Unit Port 5 10G Level SNMPv2 SMI mib 2 47 L 2 73 STRING Unit Port 6 10G Level SNMPv2 SMI mib 2 47 1 2 74 STRING Unit Port 7 10G Level SNMPv2 SMI mib 2 47 1 2 75 STRING Unit Port 8 10G Level 184 Simple Network Management Protocol SNMP Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2 S Pv2
122. For the 48 port 1G card Dynamic Pool Total Available Pool 16384 cells Total Dedicated Pool 5904 cells e Oversubscription ratio 10 e Dynamic Cell Limit Per port 59040 29 2036 cells 298 Debugging and Diagnostics Figure 33 Buffer Tuning Points Deciding to Tune Buffers Dell Networking recommends exercising caution when configuring any non default buffer settings as tuning can significantly affect system performance The default values work for most cases As a guideline consider tuning buffers if traffic is bursty and coming from several interfaces In this case e Reduce the dedicated buffer on all queues interfaces Increase the dynamic buffer on all interfaces Increase the cell pointers on a queue that you are expecting will receive the largest number of packets To define change and apply buffers use the following commands e Define a buffer profile for the FP queues CONFIGURATION mode buffer profile fp fsqueue Define a buffer profile for the CSF queues CONFIGURATION mode buffer profile csf csqueue e Change the dedicated buffers on a physical 1G interface Debugging and Diagnostics 299 BUFFER PROFILE mode buffer dedicated e Change the maximum number of dynamic buffers an interface can request BUFFER PROFILE mode buffer dynamic e Change the number of packet pointers per queue BUFFER PROFILE mode buffer packet pointers Apply the buffer profile to a CSF to FP link CONFIGURATI
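A loose outline of the buffer-tuning workflow listed above; the queue arguments and cell counts are placeholders only, and the exact argument forms may differ by release.
Dell(conf)# buffer-profile fp fsqueue
Dell(conf-buffer-profile)# buffer dedicated queue0 10
Dell(conf-buffer-profile)# buffer dynamic 1000
Dell(conf-buffer-profile)# buffer packet-pointers queue0 100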
123. I O Aggregator with the FC Flex IO module Ethernet port that provides access to FCF functionality on a fabric FC Flex IO Modules 271 Term CNA port DCB map Fibre Channel fabric FCF FC MAP FCoE map FCoE VLAN FIP NPIV principal switch DCB Maps Description N port functionality on an FCoE enabled server port A converged network adapter CNA can use one or more Ethernet ports CNAs can encapsulate Fibre Channel frames in Ethernet for FCoE transport and de encapsulate Fibre Channel frames from FCoE to native Fibre Channel Template used to configure DCB parameters including priority based flow control PFC and enhanced transmission selection ETS on CEE ports Network of Fibre Channel devices and storage arrays that interoperate and communicate Fibre Channel forwarder FCoE enabled switch that can forward FC traffic to both downstream FCoE and upstream FC devices An NPIV proxy gateway functions as an FCF to export upstream F port configurations to downstream server CNA ports FCoE MAC address prefix The unique 24 bit MAC address prefix in FCoE packets used to generate a fabric provided MAC address FPMA The FPMA is required to send FCoE packets from a server to a SAN fabric Template used to configure FCoE and FC parameters on Ethernet and FC ports in a converged fabric VLAN dedicated to carrying only FCoE traffic between server CNA ports and a SAN fabric FCoE traffic must travel in a VL
124. IEEE 802 Networks 1191 Path MTU Discovery 1305 Network Time Protocol Version 3 Specification Implementation and Analysis 310 Standards Compliance RFC 1519 1542 1812 2131 2338 3021 3046 3069 3128 Network Management Full Name Classless Inter Domain Routing CIDR an Address Assignment and Aggregation Strategy Clarifications and Extensions for the Bootstrap Protocol Requirements for IP Version 4 Routers Dynamic Host Configuration Protocol Virtual Router Redundancy Protocol VRRP Using 31 Bit Prefixes on IPv4 Point to Point Links DHCP Relay Agent Information Option VLAN Aggregation for Efficient IP Address Allocation Protection Against a Variant of the Tiny Fragment Attack The following table lists the Dell Networking OS support per platform for network management protocol Table 29 Network Management RFC 1155 1156 1157 1212 1215 1493 1901 2011 2012 2013 2024 2096 2570 Standards Compliance Full Name Structure and Identification of Management Information for TCP IP based Internets Management Information Base for Network Management of TCP IP based internets A Simple Network Management Protocol SNMP Concise MIB Definitions A Convention for Defining Traps for use with the SNMP Definitions of Managed Objects for Bridges except for the dotidTpLearnedEntryDiscards object Introduction to Community based SNMPv2 SNMPv2 Management Information Base for the Internet Pr
125. Initialized SNMP WARM START Dell show running config snmp snmp server community mycommunity ro Dell Reading Managed Object Values You may only retrieve read managed object values if your management station is a member of the same community as the SNMP agent Dell Networking supports RFC 4001 Textual Conventions for Internet Work Addresses that defines values representing a type of internet address These values display for ipAddressTable objects using the snmpwalk command There are several UNIX SNMP commands that read data Read the value of a single managed object snmpget v version c community agent ip identifier instance descriptor instance Simple Network Management Protocol SNMP 177 e Read the value of the managed object directly below the specified object snmpgetnext v version c community agent ip identifier instance descriptor instance e Read the value of many objects at once snmpwalk v version c community agent ip identifier instance descriptor instance In the following example the value 4 displays in the OID before the IP address for IPv4 For an IPv6 IP address a value of 16 displays Example of Reading the Value of a Managed Object gt snmpget v 2c c mycommunity 10 11 131 161 sysUpTime 0 DISMAN EVENT MIB sysUpTimeInstance Timeticks 32852616 3 days 19 15 26 16 snmpget v 2c c mycommunity 10 11 131 161 1 3 6 1 2 1 1 3 0 DISMAN EVENT MIB sysUpTimeInstance
126. L enter the number of minutes to add during the summer time period The range is from 1 to 1440 The default is 60 minutes System Time and Date Example of the clock summer time Command Dell conf clock summer time pacific date Mar 14 2012 00 00 Nov 7 2012 00 00 Dell conf Setting Recurring Daylight Saving Time Set a date and time zone on which to convert the switch to daylight saving time on a specific day every year If you have already set daylight saving for a one time setting you can set that date and time as the recurring setting with the clock summer time time zone recurring command To set a recurring daylight saving time use the following command Set the clock to the appropriate timezone and adjust to daylight saving time every year CONFIGURATION mode clock summer time time zone recurring start week start day start month start time end week end day end month end time offset time zone Enter the three letter name for the time zone This name displays in the show clock output start week OPTIONAL Enter one of the following as the week that daylight saving begins and then enter values for start day through end time week number Enter a number from 1 to 4 as the number of the week in the month to start daylight saving time first Enter the keyword first to start daylight savings time in the first week of the month last Enter the keyword last to start daylight saving time in the last week
127. LT Sample Configurations Troubleshooting VET ii RE RUE ERE UE OH ee re Ee 22 FC Flex IO Modules ti eto o 259 EG El amp x IO Modules einen he tiat te aee bae RUE RE 259 Understanding and Working of the FC Flex IO Modules 259 EC Flex IO Modules Overview sinet PUER m pel e tta ct dn 259 FC Flex IO Module Capabilities and Operations 261 Guidelines for Working with FC Flex IO Modules sssseseeenreeerenrnnrnn nnns 261 Processing of Data Traffic isset eene nente tnt 263 Installing and Configuring the Switch nennen 264 Interconnectivity of FC Flex IO Modules with Cisco MDS Switches 267 Fibre Channel over Ethernet for FC Flex IO Modules issssssssssssseeeeeneneeeeeennnn 268 NPIV Proxy Gateway for FC Flex IO Modules 269 NPIV Proxy Gateway Configuration on FC Flex IO Modules sss 269 NPIV Proxy Gateway Operations and Capabilities Configuring an NPIV Proxy Gateway eee nnne nennen nennen Displaying NPIV Proxy Gateway Information ssssssssssssssssseeee ener 25 Upgrade Procedure Sinn iia 286 Get Help with Upgrades xo e erre o e qe t Oe tbe e qe e ons 286 24 Debugging and DiaQnostics cecccecceecceeeeeeeeeeeeeeeeeeeeeneeeeeeeeseneeeeeeeneeeenes 287 Debugging Aggregator Operation nennen enne 287 All interfaces on the Aggregator are operationally down 287 Broadcast unknown multicast and DLF packets switched at a very low rate 288 Flood
128. Layer 2 overheads found in Dell Networking OS, and the resulting difference in bytes between link MTU and IP MTU, are listed below.

Layer 2 Overhead: Difference between Link MTU and IP MTU
- Ethernet (untagged): 18 bytes
- VLAN Tag: 22 bytes
- Untagged Packet with VLAN-Stack Header: 22 bytes
- Tagged Packet with VLAN-Stack Header: 26 bytes

Link MTU and IP MTU considerations for port channels and VLANs are as follows.

Port Channels:
- All members must have the same link MTU value and the same IP MTU value.
- The port channel link MTU and IP MTU must be less than or equal to the link MTU and IP MTU values configured on the channel members. For example, if the members have a link MTU of 2100 and an IP MTU of 2000, the port channel's MTU values cannot be higher than 2100 for link MTU or 2000 bytes for IP MTU.

VLANs:
- All members of a VLAN must have the same IP MTU value.
- Members can have different link MTU values. Tagged members must have a link MTU 4 bytes higher than untagged members to account for the packet tag.
- The VLAN link MTU and IP MTU must be less than or equal to the link MTU and IP MTU values configured on the VLAN members. For example, if the VLAN contains tagged members with a link MTU of 1522 and an IP MTU of 1500, and untagged members with a link MTU of 1518 and an IP MTU of 1500, the VLAN's link MTU cannot be higher than 1518 bytes and its IP MTU cannot be higher than 1500 bytes.

Auto-Negotiation on Ethernet Interfaces

Setting Speed
129. Local Remote Local Remote Local Remote Local Remote Local Remote Local Remote Local Remote Local Remote Local LLDP MED MIB Object lldpXMedLocMediaP olicyTagged lldpXMedLocMediaP olicyTagged lldpXMedLocMediaP olicyVlanID UdpXMedRemMedia PolicyVlanID lldpXMedLocMediaP olicyPriority UdpXMedRemMedia PolicyPriority lldpXMedLocMediaP olicyDscp UdpXMedRemMedia PolicyDscp lldpXMedLocLocatio nSubtype lldpXMedRemLocati onSubtype lldpXMedLocLocatio ninfo UdpXMedRemLocati oninfo lldpXMedLocXPoED eviceType lidpXMedRemXPoED eviceType lidpXMedLocXPoEPS EPowerSource lldpXMedLocXPoEP DPowerSource UdpXMedRemXPoEP SEPowerSource UdpXMedRemXPoEP DPowerSource lldpXMedLocXPoEP DPowerPriority 151 TLV Sub Type TLV Name 152 TLV Variable Power Value System Remote Local Remote LLDP MED MIB Object lidpXMedLocXPoEPS EPortPDPriority UdpXMedRemXPoEP SEPowerPriority lldpXMedRemXPoEP DPowerPriority lidpXMedLocXPoEPS EPortPowerAv lldpXMedLocXPoEP DPowerReq UdpXMedRemXPoEP SEPowerAv UdpXMedRemXPoEP DPowerReq Link Layer Discovery Protocol LLDP 14 Port Monitoring The Aggregator supports user configured port monitoring See Configuring Port Monitoring for the configuration commands to use Port monitoring copies all incoming or outgoing packets on one port and forwards mirrors them to another port The source port is the
130. MIB addresses of aggregated links LAG In the following example R1 has one dynamic MAC address learned off of port TenGigabitEthernet 0 7 which is a member of the default VLAN VLAN 1 The SNMP walk returns the values for dotidTpFdbAddress dotidTpFdbPort and dotidTpFdbStatus Each object is comprised of an OID concatenated with an instance number In the case of these objects the instance number is the decimal equivalent of the MAC address derive the instance number by converting each hex pair to its decimal equivalent For example the decimal equivalent of E8 is 232 and so the instance number for MAC address 00 01 e8 06 95 ac is 0 1 232 6 149 172 The value of dotidTpFdbPort is the port number of the port off which the system learns the MAC address In this case of TenGigabitEthernet 0 7 the manager returns the integer 118 Example of Fetching Dynamic MAC Addresses on the Default VLAN SYSTEMS SSS SPT HSS RSs SS RA Dell show mac address table Vlanld Mac Address Type Interface State 1 00 01 e8 06 95 ac Dynamic Tengig 0 7 Active A A Query from Management Station 180 Simple Network Management Protocol SNMP gt snmpwalk v 2c c techpubs 10 11 131 162 1 3 6 1 2 1 17 4 3 1 SNMPv2 SMI mib 2 17 4 3 1 1 0 1 232 6 149 172 Hex STRING 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non default VLANs In the following example TenGigabitEthernet 0 7 is moved to VLAN 1000 a non defau
131. NK STATE GROUP mode NOTE If installed servers do not have connectivity to a switch check the Link Status LED of uplink ports on the aggregator If all LEDs are on to ensure the LACP is correctly configured check the LACP configuration on the ToR switch that is connected to the aggregator Configuring VLANs By default in Standalone mode all aggregator ports belong to all 4094 VLANs and are members of untagged VLAN 1 To configure only the required VLANs on a port use the CLI or CMC interface You can configure VLANs only on server ports The uplink LAG will automatically get the VLANs based on the server ports VLAN configuration When you configure VLANs on server facing interfaces ports from 1 to 8 you can assign VLANs to a port or a range of ports by entering the vlan tagged or vlan untagged commands in Interface Configuration mode for example Dell conf interface range tengigabitethernet 0 2 4 Dell conf if range te 0 2 4 vlan tagged 5 7 10 12 Dell conf if range te 0 2 4 vlan untagged 3 Uplink LAG The tagged VLAN membership of the uplink LAG is automatically configured based on the VLAN configuration of all server facing ports ports from 1 to 32 The untagged VLAN used for the uplink LAG is always the default VLAN 18 Before You Start Server Facing LAGs The tagged VLAN membership of a server facing LAG is automatically configured based on the server facing ports that are members of the LAG Th
132. OGI priority Range 1 255 Default 128 Enable the monitoring FIP keep alive keepalive FCoE MAP messages if it is disabled to detect if other FCoE devices are reachable Default FIP keep alive monitoring is enabled Configure the time interval in seconds used fka adv period FCoE MAP to transmit FIP keepalive advertisements seconds Range 8 90 seconds Default 8 seconds Applying an FCoE Map on Server facing Ethernet Ports You can apply multiple FCoE maps on an Ethernet port or port channel When you apply an FCoE map on a server facing port or port channel The port is configured to operate in hybrid mode accept both tagged and untagged VLAN frames The associated FCoE VLAN is enabled on the port or port channel When you enable a server facing Ethernet port the servers respond to the FIP advertisements by performing FLOGIs on upstream virtualized FCF ports The NPG forwards the FLOGIs as fabric discovery FDISC messages to a SAN switch Step Task Command Command Mode 1 Configure a server facing Ethernet port or interface CONFIGURATION port channel with an FCoE map tengigabitEthernet slot port fortygigabitEthernet FC Flex IO Modules 277 Step Task Command Command Mode slot port port channel num 2 Apply the FCoE FC configuration in an FCoE fcoe map map name INTERFACE or map on the Ethernet port Repeat this step to INTERFACE apply an FCoE map to more than one port for PORT CHANNEL example
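As a brief sketch of steps 1 and 2 (the interface number and the map name SAN_FABRIC_A are placeholders rather than values from this guide), applying an existing FCoE map to a server-facing 10GbE port might look like this:

Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# fcoe-map SAN_FABRIC_A
Dell(conf-if-te-0/1)# no shutdown

Because the FCoE map places the port in hybrid mode, the port continues to accept tagged and untagged LAN frames alongside the FCoE VLAN enabled by the map.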
133. ON mode buffer csf linecard Dell Networking OS Behavior If you attempt to apply a buffer profile to a non existent port pipe the system displays the following message DIFFSERV 2 DSA BUFF CARVING INVALID PORT SET Invalid FP port set 2 for linecard 2 Valid range of port set is lt 0 1 gt However the configuration still appears in the running config Configuration changes take effect immediately and appear in the running configuration Because under normal conditions all ports do not require the maximum allocation the configured dynamic allocations can exceed the actual amount of available memory this allocation is called oversubscription If you choose to oversubscribe the dynamic allocation a burst of traffic on one interface might prevent other interfaces from receiving the configured dynamic allocation which causes packet loss You cannot allocate more than the available memory for the dedicated buffers If the system determines that the sum of the configured dedicated buffers allocated to the queues is more than the total available memory the configuration is rejected returning a syslog message similar to the following 00 04 20 S50N 0 SDIFFSERV 2 DSA DEVICE BUFFER UNAVAILABLE Unable to allocate dedicated buffers for stack unit 0 port pipe 0 egress port 25 due to unavailability of cells Dell Networking OS Behavior When you remove a buffer profile using the no buffer profile fp cs command
134. OTE For static LAG commands refer to the Interfaces chapter based on the standards specified in the IEEE 802 3 Carrier sense multiple access with collision detection CSMA CD access method and physical layer specifications Configuration Tasks for Port Channel Interfaces To configure a port channel LAG use the commands similar to those found in physical interfaces By default no port channels are configured in the startup configuration These are the mandatory and optional configuration tasks e Creating a Port Channel mandatory e Adding a Physical Interface to a Port Channel mandatory e Reassigning an Interface to a New Port Channel optional e Configuring the Minimum Oper Up Links in a Port Channel optional e Adding or Removing a Port Channel from a VLAN optional e Deleting or Disabling a Port Channel optional Creating a Port Channel You can create up to 128 port channels with eight port members per group on the Aggregator To configure a port channel use the following commands 1 Create a port channel CONFIGURATION mode interface port channel id number 2 Ensure that the port channel is active INTERFACE PORT CHANNEL mode no shutdown After you enable the port channel you can place it in Layer 2 mode To configure an IP address to place the port channel in Layer 2 mode use the switchport command You can configure a port channel as you would a physical interface by enabling o
135. S 5 12 ETS 6 12 ETS 7 12 ETS Oper status is init Conf TLV Tx Status is disabled Traffic Class TLV Tx Status is disabled Example of the show interface ets detail Command Dell show interfaces tengigabitethernet 0 4 ets detail Interface TenGigabitEthernet 0 4 Max Supported TC Groups is 4 Number of Traffic Classes is 8 Admin mode is on Admin Parameters Admin is enabled TC grp Priority Bandwidth TSA 0 A 671 100 ETS T 0 ETS 2 ETS 3 0 ETS 4 ETS 5 ETS 6 0 ETS 7 0 ETS Remote Parameters Remote is disabled Local Parameters Local is enabled PG grp Priority Bandwidth TSA 0 0715 27 394 57 677 100 ETS 1 ETS 2 ETS 3 0 ETS 4 ETS 5 0 ETS 6 0 ETS 54 Data Center Bridging DCB gt Oper status is init ETS DCBX Oper status is o Ao Down State Machine Type is Asymmetric Conf TLV Reco 0 Input Conf TLV Pkts 0 Input Reco TLV Pkts Tx Status is enabled TLV Tx Status is enabled 0 Output Conf 0 Output Reco m m LV Pkts O0 LV Pkts O0 ETS Error Conf TLV Pkts Error Reco TLV Pkts The following table describes the show interface ets detail command fields Table 5 show interface ets detail Command Description Field Interface Max Supported TC Group Number of Traffic Classes Admin mode Admin Parameters Remote Parameters Local Parameters Operational status local port Data
136. (Sample SNMP walk output omitted here: the walk returns a long series of SNMPv2-SMI::mib-2 object identifiers, each with a STRING value.)

Standard VLAN MIB

When the A
137. S6 0 txPkt COS7 0 txPkt UNITO 0 The show hardware stack unit cpu party bus statistics command displays input and output statistics on the party bus which carries inter process communication traffic between CPUs Example of Viewing Party Bus Statistics Dell show hardware stack unit 2 cpu party bus statistics Input Statistics 27550 packets 2559298 bytes 0 dropped 0 errors Output Statistics 1649566 packets 1935316203 bytes 0 errors Displaying Stack Port Statistics The show hardware stack unit stack port command displays input and output statistics for a stack port interface Example of Viewing Stack Unit Statistics Dell show hardware stack unit 2 stack port 49 Input Statistics 27629 packets 3411731 bytes 0 64 byte pkts 27271 over 64 byte pkts 207 over 127 byte pkts 17 over 255 byte pkts 56 over 511 byte pkts 78 over 1023 byte pkts O Multicasts 5 Broadcasts Debugging and Diagnostics 305 0 throttles discarded 0 runts 0 giants 0 CRC 0 overrun O0 Output Statistics 1649714 packets O 64 byte pkts 34 over 255 byte pkts O Multicasts 0 Broadcasts 1649714 Unicasts O throttles 0 discarded 0 collisions Rate info interval 45 seconds Input 00 00 Mbits sec 2 packets sec Output 00 06 Mbits sec 8 packets sec Dell 1948622676 bytes 0 underruns 27234 over 64 byte pkts 107970 over 127 byte pkts 504838 over 511 byte pkts 1009638 over 1023 byte pkts 0 00 0 00 of line rate of line r
138. SE e 8 DHCPINFORM Parameter Request Option 55 List Clients use this option to tell the server which parameters it requires It is a series of Octets where each octet is DHCP option code Renewal Time Option 58 Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server Rebinding Time Option 59 Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with any server if the original server does not respond End Option 255 Signals the last option in the DHCP packet Option 82 RFC 3046 the relay agent information option or Option 82 is used for class based IP address assignment The code for the relay agent information option is 82 and is comprised of two sub options circuit ID and remote ID Circuit ID This is the interface on which the client originated message is received Remote ID This identifies the host from which the message is received The value of this sub option is the MAC address of the relay agent that adds Option 82 The DHCP relay agent inserts Option 82 before forwarding DHCP packets to the server The server can use this information to e track the number of address requests per relay agent Restricting the number of addresses available per relay agent can harden a server against address exhaustion attacks e associate client MAC addresses with a relay agent to prevent offering an IP add
139. STKUNITO M CP SIFMGR 5 OSTATE UP Changed interface state to up Te 0 5 00 11 51 SSTKUNITO M CP SIFMGR 5 OSTATE UP Changed interface state to up Te 0 6 Uplink Failure Detection UFD 219 Displaying Uplink Failure Detection To display information on the UFD feature use any of the following commands Display status information on a specified uplink state group or all groups EXEC mode show uplink state group group id detail group id The values are 1 to 16 detail displays additional status information on the upstream and downstream interfaces in each group Display the current status of a port or port channel interface assigned to an uplink state group EXEC mode show interfaces interface interface specifies one of the following interface types 10 Gigabit Ethernet enter tengigabitethernet slot port 40 Gigabit Ethernet enter fortygigabitethernet slot port Port channel enter port channel 1 512 If a downstream interface in an uplink state group is disabled Oper Down state by uplink state tracking because an upstream port is down the message error disabled UFD displays in the output Display the current configuration of all uplink state groups or a specified group EXEC mode or UPLINK STATE GROUP mode For EXEC mode show running config uplink state group group id For UPLINK STATE GROUP mode show configuration group id The values are from 1 to 16 Example of Viewing Uplink Sta
140. Ssages Sent HeartBeat M ICL Hello s ICL Hello s HeartBeat M Ssages Received Sent Received VLT Statistics Ssages Sent HeartBeat M ICL Hello s ICL Hello s Ssages Received Sent Received Dell VLTpeerl show vlt statistics 987 986 148 98 Dell VLTpeer2 show vlt statistics 994 978 89 89 Additional VLT Sample Configurations To configure VLT configure a backup link and interconnect trunk create a VLT domain configure a backup link and interconnect trunk and connect the peer switches in a VLT domain to an attached access device switch or server Review the following examples of VLT configurations Configuring Virtual Link Trunking VLT Peer 1 Enable VLT and create a VLT domain with a backup link and interconnect trunk VLTi Dell VLTpeerl Dell VLTpeerl Dell VLTpeerl Dell VLTpeerl Configure the backup link Dell VLTpeerl Dell VLTpeerl Dell VLTpeerl Dell VLTpeerl conf interfac conf fvlt domain 999 conf vlt domain peer link port channel 100 conf vlt domain back up destination 10 11 206 35 conf vlt domain exit ManagementEthernet 0 0 Configure the VLT interconnect VLTi conf if ma 0 0 ip address 10 11 206 23 conf if ma 0 0 no shutdown conf if ma 0 0 exit Dell VLTpeerl conf interface port channel 100 Dell VLTpeerl conf if po 100 no ip address Dell VLTpeerl conf if po 100 PMUX Mode of the lO Aggreg
141. System MAC address Remote System MAC address Configured System MAC address Remote system version Delay Restore timer 5 1 00 01 e8 8a e7 e7 00 01 e8 8a e9 70 00 0a 0a 01 01 0a 5 1 90 seconds Example of the show vlt detail Command Dell VLTpeer AG Id 14 show vlt detail Peer LAG Id Local Status Peer Status Active VLANs 00 UP UP TO 30 2 UP UP 20 30 show vlt de tail ocal LAG Id Peer LAG Id 2 1 277 100 100 Local Status Peer Status Active VLANs Example of the show vlt role Command Dell VLTpeerl show vlt role VLT Role VLT Role System MAC address System Role Priority Local System MAC address VLT Role VLT Role System MAC address System Role Priority Local System MAC address Local System Role Priority Local System Role Priority Dell VLTpeer2 show vlt role UP UP 20 30 UP UP 10 20 30 Primary 00 01 e8 8a df bc 32768 00 01 e8 8a df bc 32768 Secondary 00 01 e8 8a df bc 32768 00 01 e8 8a df e6 32768 Example of the show running config vlt Command Dell VLTpeerl show running config vlt vlt domain 30 peer link port channel 60 back up destination 10 11 200 18 254 PMUX Mode of the lO Aggregator Dell VLTpeer2 show running config vlt vlt domain 30 peer link port channel 60 back up destination 10 11 200 20 Example of the show vlt statistics Command HeartBeat M VLT Statistics
142. T interconnect ports Data Center Bridging Support To eliminate packet loss and provision links with required bandwidth Data Center Bridging DCB enhancements for data center networks are supported The aggregator provides zero touch configuration for DCB The aggregator auto configures DCBX port roles as follows e Server facing ports are configured as auto downstream interfaces e Uplink ports are configured as auto upstream interfaces In operation DCBx auto configures uplink ports to match the DCB configuration in the ToR switches to which they connect The Aggregator supports DCB only in standalone mode FCoE Connectivity and FIP Snooping Many data centers use Fiber Channel FC in storage area networks SANs Fiber Channel over Ethernet FCoE encapsulates Fiber Channel frames over Ethernet networks On an Aggregator the internal ports support FCoE connectivity and connects to the converged network adapter CNA in servers FCoE allows Fiber Channel to use 10 Gigabit Ethernet networks while preserving the Fiber Channel protocol The Aggregator also provides zero touch configuration for FCoE connectivity The Aggregator auto configures to match the FCoE settings used in the switches to which it connects through its uplink ports FIP snooping is automatically configured on an Aggregator The auto configured port channel LAG 128 operates in FCF port mode ISCSI Operation Support for iSCSI traffic is turned on by
143. Up Links in a Port Channel

You can configure the minimum number of links in a port channel (LAG) that must be in oper up status for the port channel itself to be considered oper up. To set the oper up status of your links, use the following command.
- Enter the number of links in a LAG that must be in oper up status. INTERFACE mode. minimum-links number. The default is 1.

Example of Configuring the Minimum Oper Up Links in a Port Channel
Dell# config t
Dell(conf)# int po 1
Dell(conf-if-po-1)# minimum-links 5
Dell(conf-if-po-1)#

Configuring VLAN Tags for Member Interfaces

To configure and verify VLAN tags for individual members of a port channel, perform the following.
1. Configure VLAN membership on individual ports. INTERFACE mode.
Dell(conf-if-te-0/2)# vlan tagged 2,3,4
2. Use the switchport command in INTERFACE mode to enable Layer 2 data transmissions through an individual interface. INTERFACE mode.
Dell(conf-if-te-0/2)# switchport
3. Verify the manually configured VLAN membership (show interfaces switchport interface command, EXEC mode).
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# switchport
Dell(conf-if-te-0/1)# vlan tagged 2,5,100,4010
Dell# show interfaces switchport te 0/1
Codes: U - Untagged, T - Tagged, x - Dot1x untagged, X - Dot1x tagged, G - GVRP tagged, M - Trunk, H - VSN tagged, i - Internal untagged, I - Internal tagged, V - VLT tagged
Nam
144. V lanSupported UldpXdotlLocProtoVI anEnabled UdpXdotlRemProtoV lanEnabled 149 TLV Type TLV Name 127 VLAN Name TLV Variable PPVID VID VLAN name length VLAN name Table 12 LLDP MED System MIB Objects TLV Sub Type TLV Name 1 LLDP MED Capabilities 2 Network Policy 150 TLV Variable LLDP MED Capabilities LLDP MED Class Type Application Type Unknown Policy Flag System Local Remote Local Remote Local Remote Local Remote System Local Remote Local Remote Local Remote Local Remote LLDP MIB Object UldpXdotlLocProtoVI anld ludpXdotiRemProtoV lanid lldpXdotiLocVlanid lldpXdotiRemVlanid lldpXdotiLocVlanNa me lldpXdotiRemVlanN ame UdpXdotlLocVlanNa me lidpXdotiRemVlanN ame LLDP MED MIB Object lidpXMedPortCapSu pported lidpXMedPortConfig TLVsTx Enable lidpXMedRemCapSu pported lidpXMedRemConfig TLVsTxEnable lldpXMedLocDevice Class UdpXMedRemDevice Class lldpXMedLocMediaP olicyAppType UdpXMedRemMedia PolicyAppType lldpXMedLocMediaP olicyUnknown lldpXMedLocMediaP olicyUnknown Link Layer Discovery Protocol LLDP TLV Sub Type TLV Name Location Identifier Extended Power via MDI Link Layer Discovery Protocol LLDP TLV Variable Tagged Flag VLAN ID L2 Priority DSCP Value Location Data Format Location ID Data Power Device Type Power Source Power Priority System
145. a Dell Networking OS version go to http support dell com e Stacking is supported only with other Aggregators A maximum of six Aggregators are supported in a single stack You cannot stack the Aggregator with MXL 10 40GbE Switches or another type of switch Amaximum of four stack groups 40GbE ports is supported on a stacked Aggregator e Interconnect the stack units by following the instructions in Cabling Stacked Switches e You cannot stack a Standalone IOA and a PMUX Cabling Stacked Switches Before you configure MXL switches in a stack connect the 40G direct attach or QSFP cables and transceivers to connect 40GbE ports on two Aggregators in the same or different chassis Cabling Restrictions The following restrictions apply when setting up a stack of Aggregators e Only daisy chain or ring topologies are supported star and full mesh topologies are not supported e Stacking is supported only on 40GbE links by connecting 40GbE ports on the base module Stacking is not supported on 10GbE ports or 4x10GbE ports e Use only QSFP transceivers and QSFP or direct attach cables purchased separately to connect stacking ports Cabling Redundancy Connect the units in a stack with two or more stacking cables to avoid a stacking port or cable failure Removing one of the stacked cables between two stacked units does not trigger a reset Cabling Procedure The following cabling procedure uses the stacking topology shown earlier in
146. ace type slot port e Enable the display of log messages for the following events on DHCP client interfaces IP address acquisition IP address release Renewal of IP address and lease time and Release of an IP address EXEC Privilege no debug ip dhcp client events interface type slot port The following example shows the packet and event level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface DHCP Client Debug Messages Logged during DHCP Client Enabling Disabling Dell conf if Ma 0 0 ip address dhcp 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT DBG EVT Interface a 0 0 DHCP ENABLE CMD Received in state START 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT DBG_EVT Interface a 0 0 Transitioned E I to state SELECTING 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT DBG PKT DHCP DISCOVER sent in Interface a 0 0 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIE T DBG PKT Received DHCPOFFER packet in Interface Ma 0 0 with Lease ip 10 16 134 250 Mask 255 255 0 0 Server Id 10 16 134 249 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT DBG EVT Interface a 0 0 Transitioned to state REQUESTING 1w2d23h SSTKUNITO M CP DHCLIENT 5 DHCLIENT LOG DHCLIENT DBG PKT DHCP REQUEST sent in Interface a 0 0 1w2d23h SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT DBG PKT Received DHCPACK
147. address is not refreshed until the stack is reloaded and a different unit becomes the stack master Stacking LAG When you use multiple links between stack units Dell Networking Operating System automatically bundles them in a stacking link aggregation group LAG to provide aggregated throughput and redundancy The stacking LAG is established automatically and transparently by operating system without user configuration after peering is detected and behaves as follows e The stacking LAG dynamically aggregates it can lose link members or gain new links e Shortest path selection inside the stack if multiple paths exist between two units in the stack the shortest path is used Stacking VLANs When you configure an Aggregator to operate in stacking mode Configuring and Bringing Up a Stack VLANs are reconfigured as follows e If an Aggregator port belonged to all 4094 VLANs in standalone mode default all VLAN membership is removed and the port is assigned only to default VLAN 1 You must configure additional VLAN membership as necessary e If you had manually configured an Aggregator port to belong to one or more VLANs non default in standalone mode the VLAN configuration is retained in stacking mode only on the master switch When you reconfigure an Aggregator from stacking to standalone mode e Aggregator ports that you manually configured for VLAN membership in stacking mode retain their VLAN configuration in standalone mo
148. traffic is latency-sensitive. ETS allows different traffic types to coexist without interruption in the same converged link.

NOTE: The IEEE 802.1Qaz, CEE, and CIN versions of ETS are supported.

ETS is implemented on an Aggregator as follows:
- Traffic in priority groups is assigned to strict-queue or WRR scheduling in an ETS output policy and is managed using the ETS bandwidth-assignment algorithm. Dell Networking OS de-queues all frames of strict-priority traffic before servicing any other queues. A queue with strict-priority traffic can starve other queues in the same port.
- ETS-assigned bandwidth allocation and scheduling apply only to data queues, not to control queues.
- Dell Networking OS supports hierarchical scheduling on an interface. Dell Networking OS control traffic is redirected to control queues as higher-priority traffic with strict-priority scheduling. After the control queues drain out, the remaining data traffic is scheduled to queues according to the bandwidth and scheduler configuration in the ETS output policy. The available bandwidth calculated by the ETS algorithm is equal to the link bandwidth after scheduling non-ETS higher-priority traffic.
- By default, equal bandwidth is assigned to each port queue and each dot1p priority in a priority group.
- By default, equal bandwidth is assigned to each priority group in the ETS output policy applied to an egress port. The sum of auto-configured bandwidth al
149. agged and vlan untagged commands To view which interfaces are tagged or untagged and to which VLAN they belong use the show vlan command Displaying VLAN Membership To reconfigure an interface as a member of only specified tagged VLANs enter the vlan tagged command in INTERFACE mode Command Syntax Command Mode Purpose vlan tagged vlan id INTERFACE Add the interface as a tagged member of one or more VLANs where Interfaces 99 vlan id specifies a tagged VLAN number Range 2 4094 To reconfigure an interface as a member of only specified untagged VLANs enter the vlan untagged command in INTERFACE mode Command Syntax Command Mode Purpose vlan untagged vlan id INTERFACE Add the interface as an untagged member of one or more VLANs where vlan id specifies an untagged VLAN number Range 2 4094 If you configure additional VLAN membership and save it to the startup configuration the new VLAN configuration takes place immediately Dell Networking OS Behavior When two or more server facing ports with VLAN membership are configured in a LAG based on the NIC teaming configuration in connected servers learned via LACP the resulting LAG is a tagged member of all the configured VLANs and an untagged member of the VLAN to which the port with the lowest port ID belongs For example if port 0 3 is an untagged member of VLAN 2 and port 0 4 is an untagged member of VLAN 3 the resulting LAG consisting of the two ports is an untagg
150. akes precedence over the strict priority group whose traffic is Mapped to two queues Therefore in this example scheduling traffic to priority group 1 mapped to one strict priority queue takes precedence over scheduling traffic to priority group 3 mapped to two strict priority queues 60 Data Center Bridging DCB 5 Dynamic Host Configuration Protocol DHCP The Aggregator is auto configured to operate as a DHCP client The DHCP server DHCP relay agent and secure DHCP features are not supported The dynamic host configuration protocol DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end stations hosts based on configuration policies determined by network administrators DHCP relieves network administrators of manually configuring hosts which can be a tedious and error prone process when hosts often join leave and change locations on the network and it reclaims IP addresses that are no longer in use to prevent address exhaustion DHCP is based on a client server model A host discovers the DHCP server and requests an IP address and the server either leases or permanently assigns one There are three types of devices that are involved in DHCP negotiation DHCP Server This is a network device offering configuration parameters to the client DHCP Client This is a network device requesting configuration parameters from the server Relay Agent This is an int
151. ammable mux programmable mux Dell The IOA is now ready for PMUX operations Configuring the Commands without a Separate User Account Starting with Dell Networking OS version 9 3 0 0 you can configure the PMUX mode CLI commands without having to configure a new separate user profile The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands The IOA PMUX Mode CLI Commands section lists the PMUX mode CLI commands that you can now configure without a separate user account Multiple Uplink LAGs Unlike IOA Automated modes Standalone VLT and Stacking Modes the IOA Programmable MUX can support multiple uplink LAGs You can provision multiple uplink LAGs NOTE In order to avoid loops only disjoint VLANs are allowed between the uplink ports uplink LAGs and uplink to uplink switching is disabled PMUX Mode of the lO Aggregator 225 Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP 1 Bring up all the ports Dell configure Dell conf int range tengigabitethernet 0 1 56 Dell conf if range te 0 1 56 no shutdown 2 Associate the member ports into LAG 10 and 11 Dell configure Dell conf int range tengigabitethernet 0 41 42 Dell conf if range te 0 41 42 port channel protocol lacp Dell conf if range te 0 41 42 lacp port channel 10 mode active Dell conf if ran
152. and Enable environmental monitoring enable optic info update interval Example of the show interfaces transceiver Command Dell show int ten 0 49 transceiver SFP is present SFP 49 Serial Base ID fields SFP 49 Id 0x03 SFP 49 Ext Id 0x04 SFP 49 Connector 0x07 SFP 49 Transceiver Cod 0x00 0x00 0x00 0x01 0x20 0x40 0x0c 0x01 SFP 49 Encoding 0x01 294 Debugging and Diagnostics SFP 49 BR Nominal 0x0c SFP 49 Length 9um Km 0x00 SFP 49 Length 9um 100m 0x00 SFP 49 Length 50um 10m 0x37 SFP 49 Length 62 5um 10m Oxle SFP 49 Length Copper 10m 0x00 SFP 49 Vendor Rev SFP 49 Laser Wavelength 850 nm SFP 49 CheckCodeBas 0x78 SFP 49 Serial Extended ID fields SFP 49 Options 0x00 0x12 SFP 49 BR max 0 SFP 49 BR min 0 SFP 49 Vendor SN P11COBO SFP 49 Datecode 020919 SFP 49 CheckCodeExt 0xb6 SFP 49 Diagnostic Information SFP 49 Rx Power measurement typ SFP 49 Temp High Alarm threshold SFP 49 Voltage High Alarm threshold SFP 49 Bias High Alarm threshold SFP 49 TX Power High Alarm threshold SFP 49 RX Power High Alarm threshold SFP 49 Temp Low Alarm threshold SFP 49 Voltage Low Alarm threshold SFP 49 Bias Low Alarm threshold SFP 49 TX Power Low Alarm threshold SFP 49 RX Power Low Alarm threshold SFP 49 Temp High Warning threshold SFP 49 Voltage High Warning threshold SFP 49 Bias High Warning threshold
153. and Authorization Dell Networking OS retrieves the access class from the local database To use this feature 1 Create a username 2 Enter a password 3 Assign an access class 4 Enter a privilege level You can assign line authentication on a per VTY basis it is a simple password authentication using an access class as authorization Configure local authentication globally and configure access classes on a per user basis Dell Networking OS can assign different access classes to different users by username Until users attempt to log in Dell Networking OS does not know if they will be assigned a VTY line This means that incoming users always see a login prompt even if you have excluded them from the VTY line with a deny all access class After users identify themselves Dell Networking OS retrieves the access class from the local database and applies it Dell Networking OS then can close the connection if a user is denied access K NOTE If a VTY user logs in with RADIUS authentication the privilege level is applied from the RADIUS server only if you configure RADIUS authentication The following example shows how to allow or deny a Telnet connection to a user Users see a login prompt even if they cannot log in No access class is configured for the VTY line It defaults from the local database Example of Configuring VTY Authorization Based on Access Class Retrieved from a Local Database Per User Dell con
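The kind of configuration this section describes might look as follows. This is a sketch only; the username, password, access-class name, and privilege level are hypothetical, and the exact keyword order of the username command may vary by release:

Dell(conf)# username john access-class restrict_vty password johnpasswd privilege 8

With this in place, user john still sees the normal login prompt; after authentication, the access class retrieved from the local database is applied, and the connection is closed if access is denied.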
154. and VLT modes and disabled in VLT mode IGMP multicast flooding enabled VLAN configuration in Standalone mode all ports belong to all VLANs You can change any of these default settings using the CLI Refer to the appropriate chapter for details Es NOTE You can also change many of the default settings using the chassis management controller CMC interface For information about how to access the CMC to configure the aggregator refer to the Dell Chassis Management Controller CMC User s Guide on the Dell Support website at http support dell com Other Auto Configured Settings After the Aggregator powers on it auto configures and is operational with software features enabled including 16 Ports Ports are administratively up and auto configured to operate as hybrid ports to transmit tagged and untagged VLAN traffic Ports 1 to 32 are internal server facing ports which can operate in LOGbE mode Ports 33 to 56 are external ports auto configured to operate by default as follows The base module ports operate in standalone 4x10GbE mode You can configure these ports to operate in 40GbE stacking mode When configured for stacking you cannot use 40GbE base module ports for uplinks Ports on the 2 Port 40 GbE QSFP module operate only in 4x10GbE mode You cannot user them for stacking Ports on the 4 Port 10 GbE SFP and 4 Port LOGBASE T modules operate only in LOGbE mode For more information about how ports
155. and have not configured authentication a message is logged stating this During authorization the next method in the list if present is used or if another method is not present an error is reported To view the configuration use the show config in LINE mode or the show running config command in EXEC Privilege mode 160 Security for M I O Aggregator Defining a AAA Method List to be Used for RADIUS To configure RADIUS to authenticate or authorize users on the system create a AAA method list Default method lists do not need to be explicitly applied to the line so they are not mandatory To create a method list use the following commanas e Enter a text string up to 16 characters long as the name of the method list you wish to use with the RADIUS authentication method CONFIGURATION mode aaa authentication login method list name radius e Create a method list with RADIUS and TACACS as authorization methods CONFIGURATION mode aaa authorization exec method list name default radius tacacs Typical order of methods RADIUS TACACS Local None If RADIUS denies authorization the session ends RADIUS must not be the last method specified Applying the Method List to Terminal Lines To enable RADIUS AAA login authentication for a method list apply it to a terminal line To configure a terminal line for RADIUS authentication and authorization use the following commands e Enter LINE mode CONFIGURATION mode lin
156. ange 0 7 Default None Maximum number of lossless queues supported on an Ethernet port 2 Separate priority values with a comma Specify a priority range with a dash for example pfc priority 3 5 7 1 You cannot configure PFC using the pfc priority command on an interface on which a DCB map has been applied or which is already configured for lossless queues p c no drop queues command Configuring Lossless Queues Command Command Mode fortygigabitEthernet slot port pfc priority INTERFACE priority range DCB also supports the manual configuration of lossless queues on an interface after you disable PFC mode in a DCB map and apply the map on the interface The configuration of no drop queues provides flexibility for ports on which PFC is not needed but lossless traffic should egress from the interface Lossless traffic egresses out the no drop queues Ingress 802 1p traffic from PFC enabled peers is automatically mapped to the no drop egress queues When configuring lossless queues on a port interface consider the following points e By default no lossless queues are configured on a port e Alimit of two lossless queues are supported on a port If the number of lossless queues configured exceeds the maximum supported limit per port two an error message is displayed You must re configure the value to a smaller number of queues e f you configure lossless queues on an interface that already has a
157. ard functionality are required Software features supported on VLT physical ports PMUX Mode of the IO Aggregator 249 Ina VLT domain the following software features are supported on VLT physical ports 802 1p LLDP flow control port monitoring and jumbo frames Software features not supported with VLT Ina VLT domain the following software features are supported on non VLT ports 802 1x DHCP snooping FRRP IPv6 dynamic routing ingress and egress QOS Failure scenarios Ona link failover when a VLT port channel fails the traffic destined for that VLT port channel is redirected to the VLTi to avoid flooding When a VLT switch determines that a VLT port channel has failed and that no other local port channels are available the peer with the failed port channel notifies the remote peer that it no longer has an active port channel for a link The remote peer then enables data forwarding across the interconnect trunk for packets that would otherwise have been forwarded over the failed port channel This mechanism ensures reachability and provides loop management If the VLT interconnect fails the VLT software on the primary switch checks the status of the remote peer using the backup link If the remote peer is up the secondary switch disables all VLT ports on its device to prevent loops If all ports in the VLT interconnect fail or if the messaging infrastructure fails to communicate across the interconnect t
158. are hardware hardware stack unit stack unit stack unit 0 5 cpu data plane statistics 0 5 cpu party bus statistics 0 5 stac k port 33 56 Displaying Drop Counters To display drop counters use the following commanas Identify which stack unit port pipe and port is experiencing internal drops show hardware stack unit 0 11 drops unit 0 port 0 63 Display drop counters show hardware stack unit drops unit port Example of the show hardware stack unit Command to View Drop Counters Statistics Dell show hardware stack unit 0 drops UNIT No 0 Total Ingress Drops 0 Total IngMac Drops 0 Total Mmu Drops 0 Total EgMac Drops 0 Total Egress Drops 0 NIT No 1 Total Ingress Drops 0 Total IngMac Drops 0 Total Mmu Drops 0 Total EgMac Drops 0 Total Egress Drops 0 Debugging and Diagnostics Dell show hardware stack unit 0 drops unit 0 Port Ingress Drops IngMac Drops Total Mmu Drops EgMac Drops Egress Drops 100000 200000 300000 400000 500000 600000 700000 800000 Dell show hardware stack unit 0 drops unit 0 port 1 Ingress Drops Ingress Drops 30 IBP CBP Full Drops p0 PortSTPnotFwd Drops 230 IPv4 L3 Discards zo Policy Discards 0 Packets dropped by FP 14 L2 L3 Drops 0 Port bitmap zero Drops 16 Rx VLAN Drops fo Ingress MAC counters Ingress FCSDrops 0 Ingress MTUExceeds 70 gt U Drops HOL DROPS 0 TxPurge CellEr
159. A primary (master) and a secondary (standby) management unit are elected for the stack. The forwarding database resides on the master switch; the standby unit maintains a synchronized local copy. Each unit in the stack makes forwarding decisions based on its own local copy.

The following example shows how you can stack two Aggregators. The Aggregators are connected to operate as a single stack in a ring topology, using only the 40GbE ports on the base modules.

Figure 25: A Two-Aggregator Stack (the figure shows two stacked Aggregators with 8 x 10GbE LAN uplinks in a LAG and 4 x 40GbE stack links).

Stack Management Roles

The stack elects the management units for stack management:
- Stack master: primary management unit
- Standby: secondary management unit

The master holds the control plane and the other units maintain a local copy of the forwarding databases. From the stack master you can configure:
- System-level features that apply to all stack members
- Interface-level features for each stack member

The master synchronizes the following information with the standby unit:
- Stack unit topology
- Stack running configuration (which includes LACP, SNMP, and so on)
- Logs

The master switch maintains stack operation with minimal impact in the event of
160. as eee 183 Example of Sample Entity MIBS outputs 183 Standard LANMIB o de ed les alle al e 185 Enhancements o omg A E EN A EAEL a rio lu n decem e deer ete Free eade 185 Fetching the Switchport Configuration and the Logical Interface Configuration 186 SNMP TrapstoriLink Salus s 187 TSU dj po t PR 188 Stacking Aggregators a e onte e edet eet ee e nee dotum et bat ete eof 188 Stack Management Roles de bci e e re d e HI e ce dre ede 189 Stack Master Election cce eh e b tama Md 190 Eallover Roles eas en n Hh I M DUM eI uH MU NU ue dee m 190 MAG Addressing tuc st RD TU wired emu E EE 191 Stacking BAG cocto NE IH e eret du t SOURCES 191 Stacking VDANS A ia deiade HA Re RU T ERE 191 Stacking Port N knbers sette tenter aen t Pte P P a fee bod e P pee Pret 192 Configuring a Switch Stack 4 ich s sede des e deed dan rer dee ege e pneu eri vn pin 194 Stacking Prerequisites some dtm e te ec ate ll ntu fo 194 Cablinig stack d Switcliesnes Tute ete tu eee emit e 194 Accessing the C LI a TA eei e ee erae e da e ete an er es 195 Gorifig ring and Bringing UPa Stack nada e tee beo dee ee EE e 195 Adding a Stack Dni ii Loco ttr iL tes 196 Resetting Unit onra Stackz io E TEGERE EP I URS 196 Removing an Aggregator from a Stack and Restoring Quad Mode 197 Configuring the Uplink Speed of Interfaces as 40 Gigabit Ethernet 197 Verifying a Stack Configuration tec peteret Hi e eli te tete 199 Using Show Co
161. assis management controller CMC GUI The only supported FCoE functionality is NPIV proxy gateway Configure the other FCoE services such as name server zone server and login server on an external FC switch e With the FC Flex IO module the MXL 10 40GbE Switch continues to support bare metal provisioning BMP on any Ethernet port BMP is not supported on FC ports BMP improves accessibility to the MXL 10 40GbE Switch by automatically loading pre defined configurations and boot images that are stored in file servers You can use BMP on a single switch or on multiple switches e FC Flex IOM module is a field replaceable unit FRU Its memory type is electrically erasable programmable read only memory EEPROM which enables it to save manufacturing information such as the serial number It is hot swappable assuming that the module that is removed is replaced by the same type of module in that same slot The FC Flex IO does not have persistent storage for any runtime configuration All the persistent storage for runtime configuration is on the MXL and IOA baseboard FC Flex IO Modules 261 e With both FC Flex IO modules present in the MXL or I O Aggregator switches the power supply requirement and maximum thermal output are the same as these parameters needed for the M1000 chassis e Each port on the FC Flex IO module contains status indicators to denote the link status and transmission activity For traffic that is being transmitted
162. ate Displaying Drop Counters To display drop counters use the following commands e Identify which stack unit port pipe and port is experiencing internal drops show hardware stack unit 0 11 drops port 0 12 e Display drop counters unit 0 show hardware stack unit drops unit port Example of the show hardware stack unit Command to View Drop Counters Statistics Dell show hardware stack unit 0 drops UNIT No 0 Total Ingress Drops 0 Total IngMac Drops 0 Total Mmu Drops 0 Total EgMac Drops 0 Total Egress Drops 0 UNIT No 1 Total Ingress Drops 0 Total IngMac Drops 0 Total Mmu Drops 0 Total EgMac Drops 0 Total Egress Drops 0 Dell show hardware stack unit 0 drops unit 0 Port Ingress Drops IngMac Drops Total Mmu Drops EgMac Drops Egress Drops 100000 200000 300000 400000 500000 600000 700000 800000 Dell show hardware stack unit 0 drops unit 0 port 1 Ingress Drops Ingress Drops 30 IBP CBP Full Drops 0 PortSTPnotFwd Drops 0 IPv4 L3 Discards 0 Policy Discards 0 Packets dropped by FP 14 L2 L3 Drops 0 Port bitmap zero Drops 16 Rx VLAN Drops 0 306 Debugging and Diagnostics Ingress MAC counters Ingress FCSDrops 0 Ingress MTUExceeds ic mA U Drops HOL DROPS 0 TxPurge CellErr 0 Aged Drops 0 Egress MAC counters Egress FCS Drops 40 Egress FORWARD PROCESSOR Drops IPv4 L3UC
163. ate 0 0 0 IcT Test Info 0x0 ax Power Req 31488 Fabric Type 0x3 Fabric Maj Ver 0x1 Fabric Min Ver 0x0 SW Manageability 0x4 HW Manageability 0x1 Max Boot Time 3 minutes Link Tuning unsupported Auto Reboot enabled Burned In MAC 00 1le c9 f1 03 42 No Of MACs 3 Dell 290 Debugging and Diagnostics Offline Diagnostics The offline diagnostics test suite is useful for isolating faults and debugging hardware The diagnostics tests are grouped into three levels Level O Level O diagnostics check for the presence of various components and perform essential path verifications In addition Level O diagnostics verify the identification registers of the components on the board Level 1 A smaller set of diagnostic tests Level 1 diagnostics perform status self test access and read write tests for all the components on the board and test their registers for appropriate values In addition Level 1 diagnostics perform extensive tests on memory devices for example SDRAM flash NVRAM EEPROM wherever possible Level 2 The full set of diagnostic tests Level 2 diagnostics are used primarily for on board MAC level Physical level external Loopback tests and more extensive component diagnostics Various components on the board are put into Loopback mode and test packets are transmitted through those components These diagnostics also perform snake tests using vi
164. ating a DEB Map A is 234 Important Points to Remietmber s ioter datada 234 Applying a DCB Map on Server Facing Ethernet Ports 234 r ating arm FCoE VAN e te be Ote E tdt tende Rs 255 Creating an EGoEM puss s mt elites 255 Applying a DCB Map on Server Facing Ethernet Ports eee 256 Applying an FCoE Map on Fabric Facing FC Ports 256 Sample Corifiquratioh ie dece tete Dep e Trad ec se ie t e PO RES RE e Fede a 237 A do 237 Displaying NPIV Proxy Gateway Information sssssssssseeeeememmeeeenn eee 237 Link Layer Discovery Protocol LE DD oii ao talks 238 eSa e SAAND AEE E AE c iat p REESE 238 CONFIGURATION versus INTERFACE Configurations ccccccceceeseeceeceeeeeeeeeeeneteeeeeeeeeeaes 239 Enabling EEDP nues e ted al o m OS MANS ad e e deed 239 Advertising TEVE dd A A A a Ss ee ende 240 Viewing the LLDP Configuration ener enne nnne 241 Viewing Information Advertised by Adjacent LLDP Agents 242 Configuring ELDPDU Int rvals s ii ER etit ge p en Rte ER t de dace Configuring a Time to Ve oe tede A ERR reed Debugging LLD Puente oe tete ant iot HL edi du hut Cenni Deren VirtGa LIA ranking AMET o Ashe S AN NL DU dioec uo p catesdoret ae Fore E ea enge OVENIEW non i gt id MET Terminology 2 efe rrt ERE re ERE e ie ease po ina a een EE RE egt ends Configure Virtual Link Trunking ane i re e e rn ene hee e heo de RR ges Verifying a VLT Configuration Additional V
165. ator Dell VLTpeerl conf if po 100 channel member fortyGigE 0 56 60 no shutdown Dell VLTpeerl conf if po 100 exit 255 Configure the port channel to an attached device Dell VLTpeerl conf interface port channel 110 Dell VLTpeerl conf if po 110 no ip address Dell VLTpeerl conf if po 110 switchport E Dell VLTpeerl conf if po 110 channel member fortyGigE 0 52 Dell VLTpeerl conf if po 110 no shutdown Dell VLTpeerl conf if po 110 vlt peer lag port channel 110 Dell _VLTpeerl conf if po 110 end Verify that the port channels used in the VLT domain are assigned to the same VLAN Dell VLTpeerl show vlan id 10 Codes Default VLAN G GVRP VLANs P Primary C Community I Isolated Q U Untagged T Tagged x Dotlx untagged X Dotix tagged G GVRP tagged M Vlan stack H Hyperpull tagged NUM Status Description Q Ports 10 Active U Pol10 Fo 0 52 T Pol00 Fo 0 56 960 Configuring Virtual Link Trunking VLT Peer 2 Enable VLT and create a VLT domain with a backup link VLT interconnect VLTi Dell VLTpeer2 Dell VLTpeer2 Dell VLTpeer2 Dell VLTpeer2 conf vlt domain 999 conf vlt domain peer link port channel 100 conf vlt domain back up destination 10 11 206 23 conf vlt domain exit Configure the backup link Dell VLTpeer2 Dell VLTpeer2 Dell VLTpeer2 Dell VLTpeer2 conf finterface ManagementEthernet 0 0 conf if ma 0 0 ip a
166. ault buffer profile is to remove the 1Q profile configured and then reload the chassis If you have already applied a custom buffer profile on an interface the buffer profile global command fails and a message similar to the following displays Error User defined buffer profile already applied Failed to apply global pre defined buffer profile Please remove all user defined buffer profiles Similarly when you configure buffer profile global you cannot not apply a buffer profile on any single interface A message similar to the following displays Error Global pre defined buffer profile already applied Failed to apply user defined buffer profile on interface Gi 0 1 Please remove global pre defined buffer profile If the default buffer profile 4Q is active the system displays an error message instructing you to remove the default configuration using the no buffer profile global command To apply a predefined buffer profile use the following command Apply one of the pre defined buffer profiles for all port pipes in the system CONFIGURATION mode buffer profile global 10 40 Sample Buffer Profile Configuration The two general types of network environments are sustained data transfers and voice data Dell Networking recommends a single queue approach for data transfers Example of a Single Queue Application with Default Packet Pointers buffer profile fp fsqueue fp buffer dedicated queue0 3
167. auto configured for FCF FIP snooping bridge FCF mode on a FIP snooping enabled VLAN Multiple FCF trusted interfaces are auto configured in a VLAN e A maximum of eight VLANs are supported for FIP snooping on an Aggregator FIP snooping processes FIP packets in traffic only from the first eight incoming VLANs FC MAP Value The FC MAP value that is applied globally by the Aggregator on all FCoE VLANs to authorize FCoE traffic is auto configured The FC MAP value is used to check the FC MAP value for the MAC address assigned to ENodes in incoming FCoE frames If the FC MAP values does not match FCoE frames are dropped A session between an ENode and an FCF is established by the switch bridge only when the FC MAP value on the FCF matches the FC MAP value on the FIP snooping bridge FIP Snooping 75 Bridge to FCF Links A port directly connected to an FCF is auto configured in FCF mode Initially all FCOE traffic is blocked only FIP frames are allowed to pass FCoE traffic is allowed on the port only after a successful FLOGI request response and confirmed use of the configured FC MAP value for the VLAN Impact on other Software Features FIP snooping affects other software features on an Aggregator as follows e MAC address learning MAC address learning is not performed on FIP and FCoE frames which are denied by ACLs dynamically created by FIP snooping in server facing ports in ENode mode e MTU auto configuration MTU size is se
168. automatically detects the DCBX version on a peer port Legacy CIN and CEE versions are supported in addition to the standard IEEE version 2 5 DCBX A DCBx port detects a peer version after receiving a valid frame for that version The local DCBx port reconfigures to operate with the peer version and maintains the peer version on the link until one of the following conditions occurs The switch reboots The link is reset goes down and up The peer times out e Multiple peers are detected on the link DCBX operations on a port are performed according to the auto configured DCBX version including fast and slow transmit timers and message formats If a DCBX frame with a different version is received a syslog message is generated and the peer version is recorded in the peer status table If the frame cannot be processed it is discarded and the discard counter is incremented Data Center Bridging DCB 47 DCBX Example The following figure shows how DCBX is used on an Aggregator installed in a Dell PowerEdge M I O Aggregator chassis in which servers are also installed The external 40GbE ports on the base module ports 33 and 37 of two switches are used for uplinks configured as DCBx auto upstream ports The Aggregator is connected to third party top of rack ToR switches through 40GbE uplinks The ToR switches are part of a Fibre Channel storage network The internal ports ports 1 32 connected to the 10GbE backplane are confi
169. ay be interrupted due to an interface flap going down and coming up when you reconfigure the lossless queues for no drop priorities in a PFC input policy and reapply the policy to an interface To apply PFC a PFC peer must support the configured priority traffic as detected by DCBx To honor a PFC pause frame multiplied by the number of PFC enabled ingress ports the minimum link delay must be greater than the round trip transmission time the peer requres If you apply an input policy with PFC disabled no pfc mode on e You can enable link level flow control on the interface To delete the input policy first disable link level flow control PFC is then automatically enabled on the interface because an interface is by default PFC enabled e PFC still allows you to configure lossless queues on a port to ensure no drop handling of lossless traffic K NOTE You cannot enable PFC and link level flow control at the same time on an interface When you apply an input policy to an interface an error message displays if e The PFC dotlp priorities result in more than two lossless port queues globally on the switch e Link level flow control is already enabled You cannot be enable PFC and link level flow control at the same time on an interface e Ina switch stack configure all stacked ports with the same PFC configuration A DCB input policy for PFC applied to an interface may become invalid if you reconfigure dotip queue mapping
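Where the simpler interface-level approach is sufficient, the pfc priority command described in the lossless-queue configuration section can be used instead of an input policy. In this sketch the interface and the priority values are arbitrary, and the command is rejected if a DCB map is already applied to the port:

Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# pfc priority 3,5

This keeps traffic for the configured dot1p priorities lossless on that port while staying within the limit of two lossless queues per port.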
170. ay experience internal drops View the input and output statistics for a stack port interface EXEC Privilege mode show hardware stack unit 0 5 stack port 33 56 View the counters in the field processors of the stack unit EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 counters View the details of the FP Devices and Hi gig ports on the stack unit EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 details e Execute a specified bShel11 command from the CLI without going into the bShell EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 execute shell cmd command e View the Multicast IPMC replication table from the bShell EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 ipmc replication View the internal statistics for each port pipe unit on per port basis EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 port stats detail View the stack unit internal registers for each port pipe EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 register e View the tables from the bShell through the CLI without going into the bShell EXEC Privilege mode show hardware stack unit 0 5 unit 0 0 table dump table name Environmental Monitoring Aggregator components use environmental monitoring hardware to detect transmit power readings receive power readings and temperature updates To receive periodic power updates you must enable the following comm
171. aylight savings time once, use the following command. 210 Set the clock to the appropriate timezone and daylight saving time. CONFIGURATION mode: clock summer-time time-zone date start-month start-day start-year start-time end-month end-day end-year end-time [offset]
• time-zone: Enter the three-letter name for the time zone. This name displays in the show clock output.
• start-month: Enter the name of one of the 12 months in English. You can enter the name of a day to change the order of the display to time day month year.
• start-day: Enter the number of the day. The range is from 1 to 31. You can enter the name of a month to change the order of the display to time day month year.
• start-year: Enter a four-digit number as the year. The range is from 1993 to 2035.
• start-time: Enter the time in hours:minutes. For the hour variable, use the 24-hour format; for example, 17:15 is 5:15 pm.
• end-month: Enter the name of one of the 12 months in English. You can enter the name of a day to change the order of the display to time day month year.
• end-day: Enter the number of the day. The range is from 1 to 31. You can enter the name of a month to change the order of the display to time day month year.
• end-year: Enter a four-digit number as the year. The range is from 1993 to 2035.
• end-time: Enter the time in hours:minutes. For the hour variable, use the 24-hour format; for example, 17:15 is 5:15 pm.
• offset: OPTIONA
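The following one-line sketch shows the command with placeholder values (a PST time zone, example dates, 24-hour times, and an optional offset in minutes); confirm the exact argument order for your release in the Command Line Reference Guide.
Dell(conf)# clock summer-time PST date Mar 14 2021 2:00 Nov 7 2021 2:00 60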
172. below Dynamic Host Configuration Protocol DHCP 61
• DHCPDECLINE: A client sends this message to the server in response to a DHCPACK if the configuration parameters are unacceptable, for example, if the offered address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER.
• DHCPINFORM: A client uses this message to request configuration parameters when it is assigned an IP address manually rather than with DHCP. The server responds by unicast.
• DHCPNAK: A server sends this message to the client if it is not able to fulfill a DHCPREQUEST, for example, if the requested address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER.
• DHCPRELEASE: A DHCP client sends this message when it is stopped forcefully, to return its IP address to the server.
Figure 5 Assigning Network Parameters using DHCP 62 Dynamic Host Configuration Protocol DHCP
Dell Networking OS Behavior: DHCP is implemented in Dell Networking OS based on RFC 2131 and 3046. Debugging DHCP Client Operation To enable debug messages for DHCP client operation, enter the following debug commands. Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces. EXEC Privilege: [no] debug ip dhcp client packets [interf
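An illustrative use of the packet-level client debug command described here (the interface is a placeholder); the no form turns the debugging back off.
Dell# debug ip dhcp client packets interface tengigabitethernet 0/1
Dell# no debug ip dhcp client packets interface tengigabitethernet 0/1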
173. ber or word to be entered in the CLI. {X}: Keywords and parameters within braces must be entered in the CLI. [X]: Keywords and parameters within brackets are optional. x|y: Keywords and parameters separated by a bar require you to choose one option. x||y: Keywords and parameters separated by a double bar allow you to choose any or all of the options. About this Guide 15 Related Documents For more information about the Dell PowerEdge M I/O Aggregator MXL 10/40GbE Switch IO Module, refer to the following documents: • Dell Networking OS Command Line Reference Guide for the M I/O Aggregator • Dell Networking OS Getting Started Guide for the M I/O Aggregator • Release Notes for the M I/O Aggregator 14 About this Guide 2 Before You Start To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled. This chapter describes the default settings and software features that are automatically configured at startup. To reconfigure the Aggregator for customized network operation, use the tasks described in the other chapters. IOA Operational Modes IOA supports three operational modes. Select the operational mode that meets your dep
174. bit Full Te 1 20 Down Auto Auto Te 1 21 Down Auto Auto
Table 19 show interfaces status Field Descriptions
• Port: Server-facing 10GbE Ethernet (Te), 40GbE Ethernet (Fo), or fabric-facing Fibre Channel (Fc) port with slot/port information.
• Description: Text description of port.
• Status: Operational status of port. Ethernet ports: up (transmitting FCoE and LAN/storage traffic) or down (not transmitting traffic). Fibre Channel ports: up (link is up and transmitting FC traffic), down (link is down and not transmitting FC traffic), link wait (link is up and waiting for FLOGI to complete on peer SW port), or removed (port has been shut down).
• Speed: Transmission speed in Megabits per second of Ethernet and FC ports, including auto-negotiated speed (Auto).
• Duplex: Data transmission mode. Full allows communication in both directions at the same time; Half allows communication in both directions but not at the same time; Auto is auto-negotiated transmission.
• VLAN: VLAN IDs of the VLANs in which the port is a member.
show fcoe map Command Examples
Dell show fcoe map brief Fabric Name Fabric Id Vlan Id FC MAP State Oper State fid 1003 1003 1003 0efc03 ACTIVE UP fid 1004 1004 1004 0efc04 ACTIVE DOWN
Dell show fcoe map fid_1003 Fabric Name fid_1003 Fabric Id 1003 Vlan Id 1003 Vlan priority 3 FC MAP 0efc03 FKA ADV Period 8 FC Flex IO Modules FCF Pri
175. btype lldpLocChassisId lldpRemChassisId lldpLocPortIdSubtype lldpRemPortIdSubtype lldpLocPortId lldpRemPortId lldpLocPortDesc lldpRemPortDesc lldpLocSysName lldpRemSysName lldpLocSysDesc lldpRemSysDesc lldpLocSysCapSupported lldpRemSysCapSupported Link Layer Discovery Protocol LLDP
TLV Type 8, Management Address:
• enabled capabilities: Local lldpLocSysCapEnabled, Remote lldpRemSysCapEnabled
• management address length: Local lldpLocManAddrLen, Remote lldpRemManAddrLen
• management address subtype: Local lldpLocManAddrSubtype, Remote lldpRemManAddrSubtype
• management address: Local lldpLocManAddr, Remote lldpRemManAddr
• interface numbering subtype: Local lldpLocManAddrIfSubtype, Remote lldpRemManAddrIfSubtype
• interface number: Local lldpLocManAddrIfId, Remote lldpRemManAddrIfId
• OID: Local lldpLocManAddrOID, Remote lldpRemManAddrOID
Table 11 LLDP 802.1 Organizationally Specific TLV MIB Objects
• TLV Type 127, Port VLAN ID: PVID, Local lldpXdot1LocPortVlanId, Remote lldpXdot1RemPortVlanId
• TLV Type 127, Port and Protocol VLAN ID: port and protocol VLAN supported, Local lldpXdot1LocProtoVlanSupported, Remote lldpXdot1RemProto
176. by is uninterrupted The control plane prepares for operation in Warm Standby mode Stack Link Flapping Error Problem Resolution Stacked Aggregators monitor their own stack ports and disable any stack port that flaps five times within 10 seconds If the stacking ports that flap are on the master or standby KERN 2 INT error messages note the units To re enable a downed stacking port power cycle the stacked switch on which the port is installed Stacking 203 The following is an example of the stack link flapping error message Error Stack Port 49 has flapped 5 times within 10 seconds Shutting down this stack port now Error Please check the stack cable module and power cycle the stack 10 55 20 SSTKUNIT1 M CP KERN 2 INT Error Stack Port 50 has flapped 5 times within 10 seconds Shutting down this stack port now 10 55 20 STKUNIT1 M CP SKERN 2 INT Error Please check the stack cable module and power cycle the stack 10 55 18 STKUNIT1 M CP SKERN 2 INT Error Stack Port 50 has flapped 5 times within 10 seonds Shutting down this stack port now 10 55 18 STKUNIT1 M CP KERN 2 INT Error Please check the stack cable module and power cycle the stack Master Switch Recovers from Failure e Problem The master switch recovers from a failure after a reboot and rejoins the stack as the standby unit or member unit Protocol and control plane recovery requires time before the switch is
177. capable multicast routers address 224.0.0.22.
Figure 11 IGMP version 3 Membership Query Packet Format
Figure 12 IGMP version 3 Membership Report Packet Format
Joining and Filtering Groups and Sources The below illustration shows how multicast routers maintain the group and source information from unsolicited reports. • The first unsolicited report from the host indicates that it wants to receive traffic for group 224.1.1.1. • The host's second report indicates that it is only interested in traffic from group 224.1.1.1, source 10.11.1.1. Include messages prevent traffic from all other sources in the group from reaching the subnet, so before recording this request, the querier sends a group-and-source query to verify that there are no hosts interested in any other
178. ce Configurations R1 conf 11dp exit R1 conf interface tengigabitethernet 0 3 R1 conf if te 0 3 show config interface tengigabitEthernet 0 3 no ip address switchport no shutdown R1 conf if te 0 3 protocol lldp R1 conf if te 0 3 lldp show config PMUX Mode of the IO Aggregator 241 protocol lldp R1 conf if te 0 3 11dp Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising use the following commands e Display brief information about adjacent devices show lldp neighbors e Display all of the information that neighbors are advertising show lldp neighbors detail Example of Viewing Brief Information Advertised by Neighbors Dell conf if te 0 3 11dp end Dell conf if te 0 3 do show lldp neighbors Loc PortID Rem Host Name Rem Port Id Rem Chassis Id Te 0 1 TenGigabitEthernet 0 5 00 01 e8 05 40 46 Te 0 2 TenGigabitEthernet 0 6 00 01 e8 05 40 46 Dell conf if te 0 3 Example of Viewing Details Advertised by Neighbors Dell show lldp neighbors detail Local Interface Te 0 4 has 1 neighbor Total Frames Out 6547 otal Frames In 4136 otal Neighbor information Age outs 0 Total Frames Discarded 0 o o m m Total In Error Frames 0 tal Unrecognized TLVs O0 Total TLVs Discarded 0 Next packet will be sent after 7 seconds The neighbors are given below
179. ce Dell conf if te 0 1 4 interface INTERFACE modes Interface Range Dell conf if range f interface INTERFACE modes Management Ethernet Interface Dell conf if ma 0 0 interface INTERFACE modes MONITOR SESSION Dell conf mon sess monitor session IP COMMUNITY LIST Dell config community ip community list list CONSOLE Dell config line line LINE Modes console VIRTUAL TERMINAL Dell config line vty line LINE Modes The following example shows how to change the command mode from CONFIGURATION mode to INTERFACE configuration mode Example of Changing Command Modes Dell conf interface tengigabitethernet 0 2 Dell conf if te 0 2 The do Command You can enter an EXEC mode command from any CONFIGURATION mode CONFIGURATION INTERFACE and so on without having to return to EXEC mode by preceding the EXEC mode command with the do command The following example shows the output of the do command Dell conf tdo show system brief Stack MAC 00 01 e8 00 ab 03 Stack Info Slot UnitType Status ReqTyp CurTyp Version Ports 0 Member not present 1 Management online 1 0 Aggregator I O Aggregator 8 3 17 38 56 2 Member not present 3 Member not present 4 Member not present 5 Member not present 22 Configuration Fundamentals Dell conf Undoing Commands When you enter a command the command line is added to the running configuration file running config To disable a command and remove it fro
180. ce tengigabitethernet 0 3 Dell conf if te 0 3 protocol lldp Dell conf if te 0 3 lldp advertise Advertise TLVs disable Disable LLDP protocol on this interface end Exit from configuration mode exit Exit from LLDP configuration mode hello LDP hello configuration mode LDP mode configuration default rx and tx multiplier LDP multiplier configuration no Negate a command or set its defaults show Show LLDP configuration Dell conf if te 0 3 11dp Enabling LLDP LLDP is enabled by default Enable and disable LLDP globally or per interface If you enable LLDP globally all UP interfaces send periodic LLDPDUs To enable LLDP use the following command 1 Enter Protocol LLDP mode CONFIGURATION or INTERFACE mode PMUX Mode of the IO Aggregator 239 protocol lldp 2 Enable LLDP PROTOCOL LLDP mode no disable Disabling and Undoing LLDP To disable or undo LLDP use the following command e Disable LLDP globally or for an interface disable To undo an LLDP configuration precede the relevant command with the keyword no Advertising TLVs You can configure the system to advertise TLVs out of all interfaces or out of specific interfaces e f you configure the system globally all interfaces send LLDPDUS with the specified TLVs e f you configure an interface only the interface sends LLDPDUS with the specified TLVs e f you configure LLDP both globally and at interface level the inte
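The following is a hedged sketch of advertising specific TLVs at the global protocol level. It assumes the advertise management-tlv keyword set; the available TLV keywords vary by release, and system-name and system-description are used here only as illustrative options.
Dell(conf)# protocol lldp
Dell(conf-lldp)# advertise management-tlv system-name system-description
Dell(conf-lldp)# no disable
The same advertise command can instead be entered under a single interface's LLDP sub-mode when only that port should send the extra TLVs.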
181. ces to create redundant links in networks without STP Dell cont if te 3 41 switchport Dell conf if te 3 41 switchport backup gi 3 42 Delliconf if te 3 41 no shutdown Delliconf if te 4 31 switchport Dell conf if te 4 31 no shutdown Egi m eu a Ea ES VE Backup Link En Delliconf if te 3 42 switchport Delliconf if te 4 32 switchport Dell conf if te 3 42 no shutdown Dell conf if te 4 32 4tno shutdown Figure 18 Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming consider that the server MAC address is originally learned on Port 0 1 of the switch see figure below If the NIC fails the same MAC address is learned on Port 0 5 of the switch The MAC address is disassociated with one port and re associated with another in the ARP table in other words the ARP entry is moved The Aggregator is auto configured to support MAC Address station moves Layer 2 139 mac address table station move refresh arp configured at time of NIC teaming Figure 19 MAC Address Station Move MAC Move Optimization Station move detection takes 5000ms because this is the interval at which the detection algorithm runs 140 Layer 2 15 Link Layer Discovery Protocol LLDP Link layer discovery protocol LLDP advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN LLDP facilitates multi vendor interoperability by using standard management
182. ch operates as an FCF and FCoE gateway.
Figure 9 FIP Snooping on an Aggregator (Fibre Channel storage array and FCF ToR switch, with Aggregators and servers installed in an M1000e chassis)
In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: • A server-facing port is configured for DCBX in an auto-downstream role. • An FCF-facing port is configured for DCBX in an auto-upstream or configuration-source role. FIP Snooping 83 The DCBX configuration on the FCF-facing port is detected by the server-facing port, and the DCB PFC configuration on both ports is synchronized. For more information about how to configure DCBX and PFC on a port, refer to FIP Snooping. After FIP packets are exchanged between the ENode and the switch, a FIP snooping session is established. ACLs are dynamically generated for FIP snooping on the FIP snooping bridge/switch. Debugging FIP Snooping To enable debug messages for FIP snooping events, enter the debug fip-snooping command. Task: Enable FIP snooping debugging for all or a specified event type, where all enables all debugging options, acl enables debugging only for ACL-specific events, error enables debugging only for error conditions, ifm enables debugging only for IFM events, and info enables debugging only for information events
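An illustrative use of the debug command with the event keywords listed above; any one keyword can be enabled on its own, and the no form disables it again.
Dell# debug fip-snooping all
Dell# debug fip-snooping error
Dell# no debug fip-snooping all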
183. ch to case insensitive For example the commands show run grep Ethernet returns a search result with instances containing a capitalized Ethernet such as interface TenGigabitEthernet 0 1 show run grep ethernet does not return that search result because it only searches for instances containing a non capitalized ethernet show run grep Ethernet ignore case returns instances containing both Ethernet and ethernet The grep command displays only the lines containing specified text The following example shows this command used in combination with the show linecard all command Dell conf tdo show stack unit all stack ports all pfc details grep 0 stack unit 0 stack port all 0 Pause Tx pkts 0 Pause Rx pkts 0 Pause Tx pkts 0 Pause Rx pkts O Pause Tx pkts 0 Pause Rx pkts O Pause Tx pkts 0 Pause Rx pkts 0 Pause Tx pkts 0 Pause Rx pkts 0 Pause Tx pkts 0 Pause Rx pkts K NOTE Dell accepts a space or no space before and after the pipe To filter a phrase with spaces underscores or ranges enclose the phrase with double quotation marks The except keyword displays text that does not match the specified text The following example shows this command used in combination with the show linecard all command Example of the except Keyword Dell conf do show stack unit all stack ports all pfc details except 0 Configuration Fundamentals 25 Admin mode is On Admin is enabled Local is enabled
184. ch unique remote system id and port key combination a new LAG is formed and the port automatically becomes a member of the LAG All ports with the same combination of system ID and port key automatically become members of the same LAG Ports are automatically removed from the LAG if the NIC teaming configuration on a server facing port changes or if the port goes operationally down Also a server facing LAG is removed when the last port member is removed from the LAG The benefit of supporting a dynamic LAG is that the Aggregator s server facing ports can toggle between participating in the LAG or acting as individual ports based on the dynamic information exchanged with a server NIC LACP supports the exchange of messages on a link to allow their LACP instances to e Reach agreement on the identity of the LAG to which the link belongs e Attach the link to that LAG e Enable the transmission and reception functions in an orderly manner e Detach the link from the LAG if one of the partner stops responding LACP Modes The Aggregator supports only LACP active mode as the default mode of operation In active mode a port interface is considered to be not part of a LAG but rather in an active negotiating state A port in active mode automatically initiates negotiations with other ports by sending LACP packets If you configure server facing ports for LACP based NIC teaming LACP negotiations take place to aggregate the port in a dynamic LAG If
185. coe map SAN FABRIC A Dell config fcoe name fabric id 1002 vlan 1002 Dell config fcoe name description SAN FABRIC A fc map Oefc00 Dell config fcoe name keepaliv Dell Dell config fcoe name fcf priority 128 Dell config fcoe name config fcoe name fka adv period 8 5 Enable an upstream FC port Dell config interface fibrechannel 0 0 Dell config if fc 0 no shutdown Dell config if fc 0 fabric SAN FABRIC A 6 Enable a downstream Ethernet port Dell config interface tengigabitEthernet 0 0 Dell conf if te 0 no shutdown Dell conif if te 0 fcoe map SAN FABRIC A Displaying NPIV Proxy Gateway Information To display NPG operation information use the following show commands Command Command Mode Show interfaces status Displays the operational status of Ethernet and Fibre Channel interfaces PMUX Mode of the IO Aggregator 257 show fcoe map brief map name Displays the FC and FCoE configuration parameters in FCoE maps show qos dcb map map name Displays the configuration parameters in a specified DCB map show npiv devices brief Displays information on FCoE and FC devices currently logged in to the NPG show fc switch Displays the FC mode of operation and worldwide node WWN name For more information about NPIV Proxy Gateway information refer to the 9 3 0 0 Addendum Link Layer Discovery Protocol LLDP Link layer discove
186. col Hardware address is 00 01 e8 el el cl Current address is 00 01 e8 el el cl Interface index is 1107755136 nimum number of links to bring Port channel up is 1 ternet address is not set de of IP Address Assignment NONE n i In O DHCP Client ID 1ag1280001e8elel1c1 T Y e U 12000 bytes IP MTU 11982 bytes LineSpeed 40000 Mbit mbers in this channel Te 0 41 U Te 0 42 U Te 0 43 U Te 0 44 U ARP type ARPA ARP Timeout 04 00 00 Last clearing of show interface counters 00 11 50 Queueing strategy fifo Input Statistics 182 packets 17408 bytes 92 64 byte pkts 0 over 64 byte pkts 90 over 127 byte pkts 0 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 182 Multicasts 0 Broadcasts 0 runts 0 giants 0 throttles 0 CRC 0 overrun 0 discarded Output Statistics 2999 packets 383916 bytes 0 underruns 5 64 byte pkts 214 over 64 byte pkts 2727 over 127 byte pkts 53 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 2904 Multicasts 95 Broadcasts 0 Unicasts 0 throttles 0 discarded 0 collisions 0 wreddrops Rate info interval 299 seconds Input 00 00 Mbits sec 0 packets sec 0 00 of line rate Output 00 00 Mbits sec 4 packets sec 0 00 of line rate Time since last interface status change 00 11 42 show lacp 128 Command Example Dell show lacp 128 Port channel 128 admin up oper up mode lacp Actor System ID Priority 32768 Address 0001 e8e1 e1c3 Partner Sys
187. configured based on the server facing ports that are members of the LAG The untagged VLAN of a server facing LAG is auto configured based on the untagged VLAN to which the lowest numbered server facing port in the LAG belongs Interfaces 103 Displaying Port Channel Information To view the port channel s status and channel members in a tabular format use the show interfaces port channel brief command in EXEC Privilege mode Dell show int port brief Codes L LACP Port channel LAG Mode Status Uptime Ports 1 L2 down 00 00 00 Te 0 16 Down Dell To display detailed information on a port channel enter the show interfaces port channel command in EXEC Privilege mode The below example shows the port channel s mode L2 for Layer 2 L3 for Layer 3 and L2L3 for a Layer 2 port channel assigned to a routed VLAN the status and the number of interfaces belonging to the port channel In this example the Port channel 1 is a dynamically created port channel based on the NIC teaming configuration in connected servers learned via LACP Also the Port channel 128 is the default port channel to which all the uplink ports are assigned by default Dell show interface port channel Port channel 1 is up line protocol is up Created by LACP protocol Hardware address is 00 le c9 f1 03 58 Current address is 00 1e c9 f1 03 58 Interface index is 1107755009 nimum number of links to bring Port channel up is 1 ternet address is not set de of IP Ad
188. connecting it to the existing VLT peer switch using the VLTi connection e VLT backup link Inthe backup link between peer switches heartbeat messages are exchanged between the two chassis for health checks The default time interval between heartbeat messages over the backup link is 1 second You can configure this interval The range is from 1 to 5 seconds DSCP marking on heartbeat messages is CS6 In order that the chassis backup link does not share the same physical path as the interconnect trunk Dell Networking recommends using the management ports on the chassis and traverse an out of band management network The backup link can use user ports but not the same ports the interconnect trunk uses The chassis backup link does not carry control plane information or data traffic Its use is restricted to health checks only e Virtual link trunks VLTs between access devices and VLT peer switches To connect servers and access switches with VLT peer switches you use a VLT port channel as shown in Overview Up to 48 port channels are supported up to eight member links are supported in each port channel between the VLT domain and an access device The discovery protocol running between VLT peers automatically generates the ID number of the port channel that connects an access device and a VLT switch The discovery protocol uses LACP properties to identify connectivity to a common client device and automatically generates
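A minimal sketch of pointing the backup link at the peer's out-of-band management address, assuming the vlt domain and back-up destination command forms; the domain ID, peer address, and interval are placeholders.
Dell(conf)# vlt domain 1
Dell(conf-vlt-domain)# back-up destination 10.11.206.36 interval 3
Using the OOB management address keeps the heartbeat off the VLTi physical path, in line with the recommendation above.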
189. ctivate AAA accounting, the Dell Networking OS software issues accounting records for all users on the system, including users whose username string is NULL because of protocol translation. An example of this is a user who comes in on a line where the aaa authentication login method-list none command is applied. To prevent accounting records from being generated for sessions that do not have usernames associated with them, use the following command. Prevent accounting records from being generated for users whose username string is NULL. CONFIGURATION mode: aaa accounting suppress null-username Security for M I O Aggregator 165 Configuring Accounting of EXEC and Privilege-Level Command Usage The network access server monitors the accounting functions defined in the TACACS+ attribute-value (AV) pairs. • Configure AAA accounting to monitor accounting functions defined in TACACS+. CONFIGURATION mode: aaa accounting system default start-stop tacacs+ and aaa accounting command 15 default start-stop tacacs+ System accounting can use only the default method list. Example of Configuring AAA Accounting to Track EXEC and EXEC Privilege-Level Command Use In the following sample configuration, AAA accounting is set to track all usage of EXEC commands and commands on privilege level 15. Dell(conf)# aaa accounting exec default start-stop tacacs+ Dell(conf)# aaa accounting command 15 default start-stop tacacs+ Configuring AAA Accounting for Termi
190. ctory contains files that save trace information when there has been a task crash or timeout e On a MASTER unit you can reach the TRACE_LOG_DIR files by FTP or by using the show file command from the flash TRACE_LOG_DIR directory e Ona Standby unit you can reach the TRACE_LOG_DIR files only by using the show file command from the flash TRACE_LOG_DIR directory K NOTE Non management member units do not support this functionality Example of the dir flash Command Dell dir flash TRACE LOG DIR Directory of flash TRACE LOG DIR 1 drwx 4096 Jan 17 2011 15 02 16 00 00 2 drwx 4096 Jan 01 1980 00 00 00 00 00 3 rwx 100583 Feb 11 2011 20 41 36 00 00 failure trace0 RPMO CP 292 Debugging and Diagnostics flash 2143281152 bytes total 2069291008 bytes free Using the Show Hardware Commands The show hardware command tree consists of commands used with the Aggregator switch These commands display information from a hardware sub component and from hardware based feature tables K NOTE Use the show hardware commands only under the guidance of the Dell Technical Assistance Center e View internal interface status of the stack unit CPU port which connects to the external management interface EXEC Privilege mode show hardware stack unit 0 5 cpu management statistics e View driver level statistics for the data plane port on the CPU for the specified stack unit EXEC Privilege mode show hardware stack unit
191. d use a spanning tree protocol. After VLT is established, you may use rapid spanning tree protocol (RSTP) to prevent loops from forming with new links that are incorrectly connected and outside the VLT domain. VLT provides Layer 2 multipathing, creating redundancy through increased bandwidth, enabling multiple parallel paths between nodes, and load-balancing traffic where alternative paths exist. Virtual link trunking offers the following benefits:
• Allows a single device to use a LAG across two upstream devices
• Eliminates STP-blocked ports
• Provides a loop-free topology
• Uses all available uplink bandwidth
• Provides fast convergence if either the link or a device fails
• Optimized forwarding with virtual router redundancy protocol (VRRP)
• Provides link-level resiliency
• Assures high availability
CAUTION: Dell Networking does not recommend enabling Stacking and VLT simultaneously. If you enable both features at the same time, unexpected behavior occurs. As shown in the following example, VLT presents a single logical Layer 2 domain from the perspective of attached devices that have a virtual link trunk terminating on separate chassis in the VLT domain. However, the two VLT chassis are independent Layer 2/Layer 3 (L2/L3) switches for devices in the 246 PMUX Mode of the IO Aggregator
192. d in the snmp query snmpwalk Os c public v 1 10 16 151 151 1 3 6 1 2 1 17 7 1 4 2 1 4 0 1107526642 mib 2 17 7 1 4 2 1 4 0 1107526642 Hex STRING F9FFFFFF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 186 Simple Network Management Protocol SNMP SNMP Traps for Link Status To enable SNMP traps for link status changes use the snmp server enable traps snmp linkdown linkup command Simple Network Management Protocol SNMP 187 17 Stacking An Aggregator auto configures to operate in standalone mode To use an Aggregator in a stack you must manually configure it using the CLI to operate in stacking mode Stacking is supported only on the 40GbE ports on the base module Stacking is limited to six Aggregators in the same or different m1000e chassis To configure a stack you must use the CLI Stacking provides a single point of management for high availability and higher throughput This chapter contains the following sections e Stacking Aggregators e Stacking Port Numbers e Configuring a Switch Stack e Verifying a Stack Configuration Troubleshooting a Switch Stack e Upgrading a Switch Stack e Upgrading a Single Stack Unit Stacking Aggregators A stack of Aggregators operates as a virtual chassis with management units primary and standby and member units The Dell Networking operating softw
193. ddress 10 11 206 35 conf if ma 0 0 no shutdown conf if ma 0 0 exit Configure the VLT interconnect VLTi Dell VLTpeer2 conf interface port channel 100 Dell VLTpeer2 conf if po 100 no ip address Dell VLTpeer2 conf if po 100 channel member fortyGigE 0 46 50 Dell VLTpeer2 conf if po 100 no shutdown Dell VLTpeer2 conf if po 100 exit Configure the port channel to an attached device Dell VLTpeer2 conf interface port channel 110 Dell VLTpeer2 conf if po 110 no ip address Dell VLTpeer2 conf if po 110 switchport Dell VLTpeer2 conf if po 110 channel member fortyGigE 0 48 Dell VLTpeer2 conf if po 110 no shutdown Dell VLTpeer2 conf if po 110 vlt peer lag port channel 110 Dell VLTpeer2 conf if po 110 end 256 PMUX Mode of the lO Aggregator Verify that the port channels used in the VLT domain are assigned to the same VLAN Dell VLTpeer2 show vlan id 10 Codes Isolated Q U Untagged Default VLAN G GVRP VLANs T Tagged x Dotlx untagged X Dotix tagged P Primary G GVRP tagged M Vlan stack H Hyperpull tagged NUM Status Description Q Ports U Pol10 Fo 0 48 T Pol00 Fo 0 46 50 10 Active C Community I Verifying a Port Channel Connection to a VLT Domain From an Attached Access Switch On an access device verify the port channel connection to a VLT domain Dell TORswitch conf f show running config in
194. ddresses The information that is preserved as the frame moves through the network. The below figure shows the structure of a frame with a tag header. The VLAN ID is inserted in the tag header.
Figure 15 Tagged Frame Format
The tag header contains some key information used by Dell Networking OS: • The VLAN protocol identifier identifies the frame as tagged according to the IEEE 802.1Q specifications (2 bytes). • Tag control information (TCI) includes the VLAN ID (2 bytes total). The VLAN ID can have 4,096 values, but two are reserved. NOTE: The insertion of the tag header into the Ethernet frame increases the size of the frame to more than the 1518 bytes specified in the IEEE 802.3 standard. Some devices that are not compliant with IEEE 802.3 may not support the larger frame size. Information contained in the tag header allows the system to prioritize traffic and to forward information to ports associated with a specific VLAN ID. Tagged interfaces can belong to multiple VLANs, while untagged interfaces can belong only to one VLAN. Configuring VLAN Membership By default, all Aggregator ports are members of all 4094 VLANs, including the default untagged VLAN 1. You can use the CLI or CMC interface to reconfigure VLANs only on server-facing interfaces (1-8) so that an interface has membership only in specified VLANs. To assign an Aggregator interface in Layer 2 mode to a specified group of VLANs, use the vlan t
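A minimal sketch of restricting a server-facing port's membership with the tagged and untagged VLAN commands; the port and VLAN IDs below are placeholders.
Dell(conf)# interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)# vlan tagged 10,15
Dell(conf-if-te-0/2)# vlan untagged 20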
195. de e To restore the default auto VLAN mode of operation in which all ports are members of all 4094 VLANs on a port enter the auto vlan command for example Dell conf interface tengigabitethernet 0 2 Dell conf if te 0 2 auto vlan Stacking 191 Stacking Port Numbers By default each Aggregator in Standalone mode is numbered stack unit O Stack unit numbers are assigned to member switches when the stack comes up The following example shows the numbers of the 40GbE stacking ports on an Aggregator 192 Stacking el ee T Figure 26 Stack Groups on an Aggregator Stacking Stack Unit 0 Port 37 Stack Unit O Port 33 193 Configuring a Switch Stack To configure and bring up a switch stack follow these steps 1 Connect the 40GbE ports on the base module of two Aggregators using 40G direct attach or QSFP fibre cables Configure each Aggregator to operate in stacking mode Reload each Aggregator one after the other in quick succession Stacking Prerequisites Before you cable and configure a stack of MXL 10 40GbE switches review the following prerequisites e All Aggregators in the stack must be powered up with the initial or startup configuration before you attach the cables e All stacked Aggregators must run the same Dell Networking OS version The minimum Dell networking OS version required is 8 5 17 0 To check the version that a switch is running use the show version command To download
196. default when the aggregator powers up No configuration is required When an aggregator powers up it monitors known TCP ports for iSCSI storage devices on all interfaces When a session is detected an entry is created and monitored as long as the session is active Before You Start 17 An aggregator also detects iSCSI storage devices on all interfaces and autoconfigures to optimize performance Performance optimization operations are applied automatically such as Jumbo frame size support on all the interfaces disabling of storm control and enabling spanning tree port fast on interfaces connected to an iSCSI equallogic EQL storage device Link Aggregation All uplink ports are configured in a single LAG LAG 128 Server facing ports are auto configured as part of link aggregation groups if the corresponding server is configured for LACP based network interface controller NIC teaming Static LAGs are not supported K NOTE The recommended LACP timeout is Long Timeout mode Link Tracking By default all server facing ports are tracked by the operational status of the uplink LAG If the uplink LAG goes down the aggregator loses its connectivity and is no longer operational all server facing ports are brought down after the specified defer timer interval which is 10 seconds by default If you have configured VLAN you can reduce the defer time by changing the defer timer value or remove it by using the no defer timer command from UPLI
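A hedged sketch of adjusting or removing the defer timer in uplink-state-group mode; the group number and value are placeholders, and the defer-timer syntax should be confirmed for your release.
Dell(conf)# uplink-state-group 1
Dell(conf-uplink-state-group-1)# defer-timer 5
Dell(conf-uplink-state-group-1)# no defer-timer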
197. des with the uplink interfaces being part of different LAG bundles Stacking 197 After you restart the Aggregator, the 4-Port 10 GbE Ethernet modules or the 40GbE QSFP port that is split into four 10GbE SFP ports cannot be configured to be part of the same uplink LAG bundle that is set up with the uplink speed of 40 GbE. In such a condition, you can perform a hot swap of the 4-port 10 GbE Flex IO modules with a 2-port 40 GbE Flex IO module, which causes the module to become a part of the LAG bundle that is set up with 40 GbE as the uplink speed without another reboot. The Aggregator supports native 40 GbE mode for QSFP ports only in simple MUX mode and stacking mode of operation. In stacking mode, the base 40 GbE module ports are used for stacking, and native 40 GbE uplink speed is enabled only for the QSFP ports on the optional 2-Port 40 Gigabit Ethernet QSFP FlexIO modules. The following table describes the various speeds in different Aggregator modes. If a 4x10G SFP or a 4x10BASE-T module is plugged in and 40 GbE mode is configured, it is in error-disabled state.
Table 15 Speeds in Different Aggregator Modes (Module Type: Standalone 10G mode / Standalone 40G mode / Stacking 10G mode / Stacking 40G mode / VLT 10G mode / VLT 40G mode)
• Base module: 10G / 40G / 40G HiGig / 40G HiGig / 40G Native / 40G Native
• Optional module (2-port 40GbE): 10G / 40G / 10G / 40G / 10G / 40G
• Optional modules (4-port 10GbE): 10G / Error / 10G / Error / 10G / Error
• FC module: 10G / 10G / 10G / 10G / 10G / 10G
198. detected and describes the configuration changes that are automatically performed. %STKUNIT0-M:CP %IFMGR-5-IFM_ISCSI_AUTO_CONFIG: This switch is being configured for optimal conditions to support iSCSI traffic, which will cause some automatic configuration to occur, including jumbo frames and flow control on all ports, and no storm control to be enabled on the port of detection. The following syslog message is generated the first time an EqualLogic array is detected: %STKUNIT0-M:CP %LLDP-5-LLDP_EQL_DETECTED: EqualLogic Storage Array detected on interface Te 1/43 • At the first detection of an EqualLogic array, a maximum transmission unit (MTU) of 12000 is enabled on all ports and port channels, if it has not already been enabled. • Unicast storm control is disabled on the interface identified by LLDP. iSCSI Optimization Operation When the Aggregator auto-configures with iSCSI enabled, the following occurs: • Link-level flow control is enabled on PFC-disabled interfaces. • iSCSI session snooping is enabled. 118 iSCSI Optimization • iSCSI LLDP monitoring starts to automatically detect EqualLogic arrays. iSCSI optimization requires LLDP to be enabled. LLDP is enabled by default when an Aggregator auto-configures. The following message displays when you enable iSCSI on a switch and describes the configuration changes that are automatically performed: %STKUNIT0-M:CP %IFMGR-5-IFM_ISCSI_ENABLE: iSCSI has been enab
199. downstream port bandwidth in the same uplink state group This calculation ensures that there is no traffic drops due to insufficient bandwidth on the upstream links to the routers switches By default if all upstream interfaces in an uplink state group go down all downstream interfaces in the same uplink state group are put into a Link Down state Uplink Failure Detection UFD 215 Using UFD you can configure the automatic recovery of downstream ports in an uplink state group when the link status of an upstream port changes The tracking of upstream link status does not have a major impact on central processing unit CPU usage UFD and NIC Teaming To implement a rapid failover solution you can use uplink failure detection on a switch with network adapter teaming on a server For more information refer to Network Interface Controller NIC Teaming For example as shown previously the switch router with UFD detects the uplink failure and automatically disables the associated downstream link port to the server To continue to transmit traffic upstream the server with NIC teaming detects the disabled link and automatically switches over to the backup link in order to continue to transmit traffic upstream Important Points to Remember When you configure UFD the following conditions apply e You can configure up to 16 uplink state groups By default no uplink state groups are created in PMUX mode and uplink state group 1 is cr
200. dress Assignment NONE n i In O DHCP Client ID 1ag1001ec9f10358 T i e R U 12000 bytes IP MTU 11982 bytes LineSpeed 50000 Mbit mbers in this channel Te 1 2 U Te 1 3 U Te 1 4 U Te 1 5 U Te 1 7 U ARP type ARPA ARP Timeout 04 00 00 Last clearing of show interface counters 00 13 56 Queueing strategy fifo Input Statistics 836 packets 108679 bytes 412 64 byte pkts 157 over 64 byte pkts 135 over 127 byte pkts 132 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 836 Multicasts 0 Broadcasts 0 runts 0 giants 0 throttles 0 CRC 0 overrun 0 discarded Output Statistics 9127965 packets 3157378990 bytes 0 underruns 0 64 byte pkts 133 over 64 byte pkts 3980 over 127 byte pkts 9123852 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 4113 Multicasts 9123852 Broadcasts 0 Unicasts 0 throttles 0 discarded 0 collisions 0 wreddrops Rate info interval 299 seconds Input 00 00 Mbits sec 1 packets sec 0 00 of line rate Output 34 00 Mbits sec 12318 packets sec 0 07 of line rate Time since last interface status change 00 13 49 Port channel 128 is up line protocol is up Created by LACP protocol Hardware address is 00 le c9 f1 03 58 Current address is 00 1e c9 f1 03 58 Interface index is 1107755136 Minimum number of links to bring Port channel up is 1 Internet address is not set Mode of IP Address Assignment NONE 104 Interfaces DHCP Client ID 1ag1
201. dware MIB Buffer Statistics 1 3 6 1 4 1 6027 3 16 1 1 4 fpPacketBufferTable View the modular packet buffers details per stack unit and the mode of allocation 1 3 6 1 4 1 6027 3 16 1 1 5 fpStatsPerPortTable View the forwarding plane statistics containing the packet buffer usage per port per stack unit 1 3 6 1 4 1 6027 3 16 1 1 6 fpStatsPerCOSTable View the forwarding plane statistics containing the packet buffer statistics per COS per port Debugging and Diagnostics 297 Buffer Tuning Buffer tuning allows you to modify the way your switch allocates buffers from its available memory and helps prevent packet drops during a temporary burst of traffic The application specific integrated circuit ASICs implement the key functions of queuing feature lookups and forwarding lookups in hardware Forwarding processor FP ASICs provide Ethernet MAC functions queueing and buffering as well as store feature and forwarding tables for hardware based lookup and forwarding decisions 1G and 10G interfaces use different FPs You can tune buffers at three locations 1 CSF Output queues going from the CSF 2 FP Uplink Output queues going from the FP to the CSF IDP links 3 Front End Link Output queues going from the FP to the front end PHY All ports support eight queues four for data traffic and four for control traffic All eight queues are tunable Physical memory is organized into cells of 128 bytes The cells are or
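One of these buffer tables can be polled from a management station with a standard net-snmp client. The following sketch uses placeholder community string, SNMP version, and management address; the OID is the fpPacketBufferTable OID listed above.
snmpwalk -Os -c public -v 2c 10.16.151.151 1.3.6.1.4.1.6027.3.16.1.1.4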
202. e Create the dedicated VLAN for FCoE traffic: interface vlan vlan-id (CONFIGURATION). The range is from 2 to 4094; VLAN 1002 is commonly used to transmit FCoE traffic. Creating an FCoE Map The values for the FCoE VLAN, fabric ID, and FC-MAP must be unique. Apply an FCoE map on downstream (server-facing) Ethernet ports and upstream (fabric-facing) Fibre Channel ports.
• Create an FCoE map: fcoe-map map-name (CONFIGURATION).
• Configure the association between the dedicated VLAN and the fabric where the desired storage arrays are installed: fabric-id fabric-num vlan vlan-id (FCoE MAP). The fabric and VLAN ID numbers must be the same. The range is from 2 to 4094.
• Add a text description: description text (FCoE MAP). The maximum is 32 characters.
• Specify the FC-MAP value used to generate a fabric-provided MAC address: fc-map fc-map-value (FCoE MAP). You must enter a unique MAC address prefix as the FC-MAP value for each fabric. The range is from 0EFC00 to 0EFCFF. The default is None.
• Configure the priority used by a server CNA to select the FCF for a fabric login (FLOGI): fcf-priority priority (FCoE MAP). The range PMUX Mode of the IO Aggregator 255 is from 0 to 128. The default is 128.
• Enable the monitoring of FIP keepalive messages (if it is disabled) to detect if other FCoE devices are reachable: keepalive (FCoE MAP). The default is enabled.
• Configure the time in seconds: fka-adv-period seconds (FCoE MAP) us
203. e TenGigabitEthernet 0 1 802 10Tagged True Vlan membership Q Vlans T 2 5 100 4010 Dell Deleting or Disabling a Port Channel To delete or disable a port channel use the following commands Delete a port channel CONFIGURATION mode no interface portchannel channel number Disable a port channel shutdown v VLT untagged V When you disable a port channel all interfaces within the port channel are operationally down also Link Aggregation 129 Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active You can activate the LAG bundle for uplink interfaces or ports the uplink port channel is LAG 128 on the I O Aggregator only when a minimum number of member interfaces of the LAG bundle is up For example based on your network deployment you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state If you enable this setting the uplink LAG bundle is brought up only when the specified minimum number of links are up and the LAG bundle is moved to the down state when the number of active links in the LAG becomes less than the specified number of interfaces By default the uplink LAG 128 interface is activated when at least one member interface is up To configure the minimum number of member links that must be up for a LAG bunale to be fully up perform the following steps Specify the minimum number of member interfaces of the
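A hedged sketch of requiring at least two active member links before the uplink LAG is brought up, assuming the minimum-links interface command and using port-channel 128 (the uplink LAG) with a placeholder threshold.
Dell(conf)# interface port-channel 128
Dell(conf-if-po-128)# minimum-links 2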
204. e aux 0 console 0 vty number end number Enable AAA login authentication for the specified RADIUS method list LINE mode login authentication method list name default This procedure is mandatory if you are not using default lists e To use the method list CONFIGURATION mode authorization exec methodlist Specifying a RADIUS Server Host When configuring a RADIUS server host you can set different communication parameters such as the UDP port the key password the number of retries and the timeout To specify a RADIUS server host and configure its communication parameters use the following command Enter the host name or IP address of the RADIUS server host CONFIGURATION mode radius server host hostname ip address auth port port number retransmit retries timeout seconds key encryption typel key Configure the optional communication parameters for the specific host Security for M I O Aggregator 161 auth port port number the range is from 0 to 65335 Enter a UDP port number The default is 1812 retransmit retries the range is from 0 to 100 Default is 3 timeout seconds the range is from O to 1000 Default is 5 seconds key encryption typel key enter 0 for plain text or 7 for encrypted text and a string for the key The key can be up to 42 characters long This key must match the key configured on the RADIUS server host If you do not configure these optional parameters
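A hedged sketch of defining a RADIUS host with the optional parameters described here; the address, port, timers, and key are placeholder values.
Dell(conf)# radius-server host 10.10.10.5 auth-port 1812 retransmit 3 timeout 5 key 0 mysecret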
205. e AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 1 IM 154 Link Aggregation Port Te 0 51 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 ST DET Port Te 0 52 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 E Port Te 0 53 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 4 Port Te 0 54 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 3j T Port Te 0 55 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 29 Port Te 0 56 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 1 St
206. e after a save and reload confirm yes no yes Please save and reset unit 0 for the changes to take effect Dell conf Hno stack unit 0 port 37 portmode quad Disabling quad mode on stack unit 0 port 37 will make interface configs Te 0 37 Te 0 38 Te 0 39 Te 0 40 obsolete after a save and reload confirm yes no yes Please save and reset unit 0 for the changes to take effect Dell conf no stack unit 0 port 49 portmode quad Disabling quad mode on stack unit 0 port 49 will make interface configs Te 0 49 Te 0 50 Te 0 51 Te 0 52 obsolete after a save and reload confirm yes no yes Please save and reset unit 0 for the changes to take effect Dell conf no stack unit 0 port 53 portmode quad Disabling quad mode on stack unit 0 port 53 will make interface configs PMUX Mode of the lO Aggregator of of of of 227 Te 0 53 Te 0 54 Te 0 55 Te 0 56 obsolete after a save and reload confirm yes no yes Please save and reset unit 0 for the changes to take effect Dell conf 2 Save the configuration Dell write memory 01 05 48 SSTKUNITO M CP SFILEMGR 5 FILESAVED Copied running config to startup config in flash by default Dell reload Proceed with reload confirm yes no yes 3 Configure the port channel with 40G member ports Dell configure Dell conf interface range fortygige 0 33 fortygige 0 37 Dell conf if range fo 0 33 f0 0 37 no shut Dell conf if range fo 0 33
207. e appears 9 Warning The following users are currently configuring the system User lt username gt on line console0 e Onthe system that is connected over the console this message appears 9 Warning User lt username gt on line vty0 10 11 130 2 is in configuration mode If either of these messages appears Dell Networking recommends coordinating with the users listed in the message so that you do not unintentionally overwrite each other s configuration changes 26 Configuration Fundamentals 4 Data Center Bridging DCB On an I O Aggregator data center bridging DCB features are auto configured in standalone mode You can display information on DCB operation by using show commands K NOTE DCB features are not supported on an Aggregator in stacking mode Ethernet Enhancements in Data Center Bridging DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single robust converged network to support multiple traffic types including local area network LAN server and storage traffic Through network consolidation DCB results in reduced operational cost simplified management and easy scalability by avoiding the need to deploy separate application specific networks For example instead of deploying an Ethernet network for LAN traffic additional storage area networks SANs to ensure lossless fibre channel traffic and a separate InfiniBand network for high performance inter proce
208. e default VLAN ID number the Aggregator Sends a DHCP release to the DHCP server to release the IP address Sends a DHCP request to obtain a new IP address The IP address assigned by the DHCP server is used for the new default management VLAN How DHCP Client is Implemented The Aggregator is enabled by default to receive DHCP server assigned dynamic IP addresses on an interface This setting persists after a switch reboot If you enter the shutdown command on the interface DHCP transactions are stopped and the dynamically acquired IP address is saved Use the show interface type slot port command to display the dynamic IP address and DHCP as the mode of P address assignment If you later enter the no shutdown command and the lease timer for the dynamic IP address has expired the IP address is unconfigured and the interface tries to acquire a new dynamic address from DHCP server f you later enter the no shutdown command and the lease timer for the dynamic IP address has expired the IP address is released When you enter the release dhcp command although the IP address that was dynamically acquired from a DHCP server is released from an interface the ability to acquire a new DHCP server assigned Dynamic Host Configuration Protocol DHCP 65 address remains in the running configuration for the interface To acquire a new IP address enter either the renew dhcp command at the EXEC privilege level or the ip address dhcp command a
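An illustrative use of the release and renew commands mentioned above (the interface is a placeholder):
Dell# release dhcp interface tengigabitethernet 0/1
Dell# renew dhcp interface tengigabitethernet 0/1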
209. e disabled VLT Routing VLT routing is supported on the Aggregator Layer 2 protocols from the ToR to the server are intra rack and inter rack No spanning tree is required but interoperability with spanning trees at the aggregation layer is supported Communication is active active with no blocked links MAC tables are synchronized between VLT nodes for bridging and you can enable IGMP snooping Spanned VLANs Any VLAN configured on both VLT peer nodes is referred to as a Spanned VLAN The VLT Interconnect VLTi port is automatically added as a member of the Spanned VLAN As a result any adjacent router connected to at least one VLT node on a Spanned VLAN subnet is directly reachable from both VLT peer nodes at the routing level PMUX Mode of the IO Aggregator 251 Non VLT ARP Sync In the Dell Networking OS version 9 2 0 0 ARP entries including ND entries learned on other ports are synced with the VLT peer to support station move scenarios Prior to Dell Networking OS version 9 2 0 0 only ARP entries learned on VLT ports were synced between peers Additionally ARP entries resulting from station movements from VLT to non VLT ports or to different non VLT ports are learned on the non VLT port and synced with the peer node The peer node is updated to use the new non VLT port K NOTE ARP entries learned on non VLT non spanned VLANs are not synced with VLT peers Verifying a VLT Configuration To monitor the operation or
210. e feto eet esce fae oe NUR 17 FCoE Connectivity and FIP SNOOPIND ccococncconononononanenanonononononinanonanno nono innn nnne ndisse strain inan 17 ISCSI Operational dapes 17 LEinlcAgdgregation Wow ocio as ores ro sr e icc sue nt iA DEM M M LM Ma n 18 Link Tracking UN tt eie seu rene pe abe ed tid 18 Configuring VbANS cnt netter Me ea tbe te t te tis 18 UDINKLAG e ecu A AOI Um MEL 18 Server Facing AS 25 on dede A ode eoa 19 Where to Go From Heie a ee cred oh I ee PRU PRG e 19 3 Configuration Fundamentals cesses eiereeee etna nnnnann 20 Accessing the Cormand bine oie tube ae aet 20 GEI MOGBS at ta cA dub au ed e A LE a E 20 Navigating CUI MOGdeS xs dre oe MR P I E n TE RR 21 ThedoComtnaand sscns om cbe emt de eco re d o i edere tal Ite sem as 22 Undoing Commands a 52 2 edoceri ero done tia 235 Obtaining Help s EUR E e 23 Entering and Editing Commands ereere eaea ia E aeea E E aA a Aea nnne nennt nennen 24 Command EIStOry a nene a ehe ea hada iii E ve ais dada n tt 25 Filteririg Show Command Outputs ete teure te he Eu A T Len te cete eee 25 Multiple Users in Configuration Mode oooiocccicnnooonococnocncononnconn nono cnn cnn narran narran nent 26 4 Data Center Bridging DCB ccssesccessesesseeeeeeseeseeeeneeneeeeaeeeeeeeeeeeeeeeneeees 27 Ethernet Enhancements in Data Center Bridging sssssssssse eem 27 Priority Based Flow Entra eee ER b e e D e e HE rtg 28
211. e following message displays: Dell conf mon ses 1 Dell conf mon sess 1 source tengig 0/1 destination tengig 0/8 direction both Dell conf mon sess 1 do show monitor session
SessionID Source Destination Direction Mode Type
1 TenGig 0/1 TenGig 0/8 both interface Port-based
154 Port Monitoring
Dell conf mon sess 1 mon ses 2 Dell conf mon sess 2 source tengig 0/1 destination tengig 0/8 direction both % Error: MD port is already being monitored. NOTE There is no limit to the number of monitoring sessions per system, provided that there are only four destination ports per port pipe. If each monitoring session has a unique destination port, the maximum number of sessions is four per port pipe. Port Monitoring The Aggregator supports multiple source-destination statements in a monitor session, but there may only be one destination port in a monitoring session. % Error: Only one MG port is allowed in a session. The number of source ports the Dell Networking OS allows within a port pipe is equal to the number of physical ports in the port pipe. Multiple source ports may have up to four different destination ports. % Error: Exceeding max MG ports for this MD port pipe. In the following examples, ports 0/1, 0/2, 0/3, and 0/4 all belong to the same port pipe. These ports mirror traffic to four different destinations: 0/9, 0/10, 0/11, and 0/12.
212. e types e 10 Gigabit Ethernet enter tengigabitethernet slot port slot port range e 40 Gigabit Ethernet enter fortygigabitethernet slot port slot port range e Port channel enter port channel 1 512 port channel range Where port range and port channel range specify a range of ports separated by a dash and or individual ports port channels in any order for example upstream gigabitethernet 1 1 2 5 9 11 12 downstream port channel 1 3 5 e Acomma is required to separate each port and port range entry To delete an interface from the group use the no upstream downstream interface command Optional Configure the number of downstream links in the uplink state group that will be disabled Oper Down state if one upstream link in the group goes down UPLINK STATE GROUP mode downstream disable links number all e number specifies the number of downstream links to be brought down The range is from 1 to 1024 all brings down all downstream links in the group The default is no downstream links are disabled when an upstream link goes down To revert to the default setting use the no downstream disable links command Optional Enable auto recovery so that UFD disabled downstream ports in the uplink state group come up when a disabled upstream port in the group comes back up UPLINK STATE GROUP mode downstream auto recover Uplink Failure Detection UFD 217 The default is auto recovery of UFD disabled do
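The following hedged sketch pulls the uplink-state-group commands described above into one sequence; it assumes the uplink-state-group command that opens UPLINK STATE GROUP mode, and the group number, port ranges, and link count are illustrative only:
Dell(conf)# uplink-state-group 1
Dell(conf-uplink-state-group-1)# upstream tengigabitethernet 0/41-42
Dell(conf-uplink-state-group-1)# downstream tengigabitethernet 0/1-8
Dell(conf-uplink-state-group-1)# downstream disable links 2
Dell(conf-uplink-state-group-1)# downstream auto-recover
Dell(conf-uplink-state-group-1)# end
With this sketch, if one of the two upstream ports goes down, the two lowest-numbered downstream ports are placed in an Operationally Down state, and they come back up automatically when the disabled upstream port recovers.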
213. e untagged VLAN of a server facing LAG is configured based on the untagged VLAN to which the lowest numbered server facing port in the LAG belongs K NOTE Dell Networking recommends configuring the same VLAN membership on all LAG member ports Where to Go From Here You can customize the Aggregator for use in your data center network as necessary To perform additional switch configuration do one of the following e For remote out of band management enter the OOB management interface IP address into a Telnet or SSH client and log in to the switch using the user ID and password to access the CLI e For local management using the CLI use the attached console connection e For remote in band management from a network management station enter the IP address of the default VLAN and log in to the switch to access the CLI In case of a Dell upgrade you can check to see that an Aggregator is running the latest Dell version by entering the show versioncommand To download Dell version go to http support dell com For detailed information about how to reconfigure specific software settings refer to the appropriate chapter Before You Start 19 Configuration Fundamentals The Dell Networking Operating System OS command line interface CLI is a text based interface you can use to configure interfaces and protocols The CLI is structured in modes for security and management purposes Different sets of commands are available i
214. eat this step to apply an FCoE map to more than one FC port Enable the port for FC no shutdown INTERFACE FIBRE CHANNEL transmission 256 PMUX Mode of the IO Aggregator You can apply a DCB or FCoE map to a range of Ethernet or Fibre Channel interfaces by using the interface range command for example Dell config interface range tengigabitEthernet 1 12 23 tengigabitEthernet 2 24 35 Dell config interface range fibrechannel 0 0 3 fibrechannel 0 8 del Enter the keywords interface range then an interface type and port range The port range must contain spaces before and after the dash Separate each interface type and port range with a space comma and space Sample Configuration 1 Configure a DCB map with PFC and ETS settings Dell config dcb map SAN DCB MAP Dell config dcbx name priority group 0 bandwidth 60 pfc off Dell config dcbx name priority group 1 bandwidth 20 pfc on Dell config dcbx name priority group 2 bandwidth 20 pfc on Dell config dcbx name priority pgid 00012000 2 Apply the DCB map on a downstream server facing Ethernet port Dell config interface tengigabitethernet 1 0 Dell config if te 0 0 dcb map SAN DCB MAP 3 Create the dedicated VLAN to be used for FCoE traffic Dell conf interface vlan 1002 4 Configure an FCoE map to be applied to the downstream server facing Ethernet and upstream core facing FC ports Dell config f
215. eated in Standalone and VLT modes An uplink state group is considered to be operationally up if it has at least one upstream interface in the Link Up state An uplink state group is considered to be operationally down if it has no upstream interfaces in the Link Up state No uplink state tracking is performed when a group is disabled or in an Operationally Down state e You can assign physical port or port channel interfaces to an uplink state group in PMUX mode You can assign an interface to only one uplink state group Configure each interface assigned to an uplink state group as either an upstream or downstream interface but not both You can assign individual member ports of a port channel to the group An uplink state group can contain either the member ports of a port channel or the port channel itself but not both If you assign a port channel as an upstream interface the port channel interface enters a Link Down state when the number of port channel member interfaces in a Link Up state drops below the configured minimum number of members parameter e f one of the upstream interfaces in an uplink state group goes down either a user configurable set of downstream ports or all the downstream ports in the group are put in an Operationally Down state with an UFD Disabled error The order in which downstream ports are disabled is from the lowest numbered port to the highest If one of the upstream interfaces in
216. ect whether peer devices support CEE or not and enable ETS and PFC or link-level flow control accordingly. Interfaces come up with DCB disabled and link-level flow control enabled to control data transmission between the Aggregator and other network devices (see Flow Control Using Ethernet Pause Frames). When DCB is disabled on an interface, PFC and ETS are also disabled. When DCBx protocol packets are received, interfaces automatically enable DCB and disable link-level flow control. DCB is required for PFC, ETS, DCBx, and FCoE initialization protocol (FIP) snooping to operate. NOTE Normally, interfaces do not flap when DCB is automatically enabled. DCB processes VLAN-tagged packets and dot1p priority values. Untagged packets are treated with a dot1p priority of 0. For DCB to operate effectively, ingress traffic is classified according to its dot1p priority so that it maps to different data queues. The dot1p-queue assignments used on an Aggregator are shown in Table 6-1, QoS dot1p Traffic Classification and Queue Assignment, and in the dcb enable auto-detect on-next-reload Command Example. When DCB is Disabled (Default) By default, Aggregator interfaces operate with DCB disabled and link-level flow control enabled. When an interface comes up, it is automatically configured with: Flow control enabled on input interfaces. A DCB-MAP policy applied with PFC disabled. The following example shows a default interface configuration with DC
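Because the example referred to above is cut off in this extract, the following is only a hedged sketch of what a default interface configuration with DCB disabled might look like; the exact defaults on a given unit may differ:
Dell# show running-config interface tengigabitethernet 0/1
!
interface TenGigabitEthernet 0/1
 portmode hybrid
 switchport
 flowcontrol rx on tx off
 no shutdown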
217. ection 220
Sample Configuration: Uplink Failure Detection 222
Uplink Failure Detection (SMUX mode) 223
21 PMUX Mode of the IO Aggregator 224
Introduction 224
I/O Aggregator (IOA) Programmable MUX (PMUX) Mode 224
Configuring and Changing to PMUX Mode 224
Configuring the Commands without a Separate User Account 225
Multiple Uplink LAGs 225
Multiple Uplink LAGs with 10G Member Ports 226
Multiple Uplink LAGs with 40G Member Ports 227
Uplink Failure Detection (UFD) 229
Virtual Link Trunking (VLT) in PMUX Mode 230
Stacking in PMUX Mode 232
Configuring an NPIV Proxy Gateway 233
Enabling Fibre Channel Capability on the Switch 233
Cre
218. ed Sep 03 1993 09 36 52 Start up Config succeeded Sep 03 1993 09 36 52 Latest sync of config Runtime Event Log succeeded Sep 03 1993 09 36 52 Running Config succeeded Sep 03 1993 09 36 52 ACL Mgr succeeded Sep 03 1993 09 36 52 LACP no block sync done STP no block sync done SPAN no block sync done Example of the show hardware stack unit port stack Command Dell show hardware stack unit 1 stack port 53 Input Statistics 7934 packets 1049269 bytes 0 64 byte pkts 7793 over 64 byte pkts 100 over 127 byte pkts 0 over 255 byte pkts 7 over 511 byte pkts 34 over 1023 byte pkts 70 Multicasts 0 Broadcasts 0 runts 0 giants 0 throttles 0 CRC 0 overrun 0 discarded Output Statistics 438 packets 270449 bytes 0 underruns 0 64 byte pkts 57 over 64 byte pkts 181 over 127 byte pkts 54 over 255 byte pkts 0 over 511 byte pkts 146 over 1023 byte pkts 72 Multicasts 0 Broadcasts 221 Unicasts 202 0 throttles 0 discarded 0 collisions 0 wredDrops Rate info interval 45 seconds Input 00 00 Mbits sec 0 packets sec 0 00 of line rate Output 00 00 Mbits sec 0 packets sec 0 00 of line rate Failure Scenarios The following sections describe some of the common fault conditions that can happen in a switch stack and how they are resolved Stack Member Fails e Problem A unit that is not the stack master fails in an operational stack e Resolution If a stack member fails in a daisy chain topology a split stac
219. ed and mode is lacp Port State Bundle Actor Admin State ADEHJLMP Key 128 Priority 32768 Oper State ADEGIKNP Key 128 Priority 32768 Partner Admin State BDFHJLMP Key 0 Priority 0 Oper State ACEGIKNP Key 128 Priority 32768 Port Te 0 45 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State ADEHJLMP Key 128 Priority 32768 Oper State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0 46 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 Ar Ter Port Te 0 47 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present Port Te 0 48 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 Ie HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 oy UR Port Te 0 49 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper State AD Partner is not present HJLMP Key 128 Priority 32768 HJLMP Key 128 Priority 32768 3i p Port Te 0 50 is disabled LACP is disabled and mode is lacp Port State Bundle Actor Admin State AD Oper Stat
220. ed information one of the following actions is taken e If the peer configuration received is compatible with the internally propagated port configuration the link with the DCBx peer is enabled e If the received peer configuration is not compatible with the currently configured port configuration the link with the DCBX peer port is disabled and a syslog message for an incompatible configuration is generated The network administrator must then reconfigure the peer device so that it advertises a compatible DCB configuration The internally propagated configuration is not stored in the switch s running configuration On a DCBX port in an auto downstream role all PFC application priority ETS recommend and ETS configuration TLVs are enabled Data Center Bridging DCB 45 Default DCBX port role Uplink ports are auto configured in an auto upstream role Server facing ports are auto configured in an auto downstream role K NOTE On a DCBx port application priority TLV advertisements are handled as follows e The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port e Onauto upstream and auto downstream ports Ifa configuration source is elected the ports send an application priority TLV based on the application priority TLV received on the configuration source port When an application priority TLV is received on the configuration source port the au
221. ed member of VLAN 2 and a tagged member of VLAN 3. Displaying VLAN Membership To view the configured VLANs, enter the show vlan command in EXEC privilege mode. Dell show vlan Codes Default VLAN G GVRP VLANs R Remote Port Mirroring VLANs P Primary C Community I Isolated Q U Untagged T Tagged x Dot1x untagged X Dot1x tagged G GVRP tagged M Vlan-stack H VSN tagged i Internal untagged I Internal tagged v VLT untagged V VLT tagged NUM Status Description Q Ports 1 Inactive 20 Active U Po32 U Te 0 3 5 13 53 56 1002 Active T Te 0 3 13 55 56 Dell NOTE A VLAN is active only if the VLAN contains interfaces and those interfaces are operationally up. In the above example, VLAN 1 is inactive because it does not contain any interfaces. The other VLANs listed contain enabled interfaces and are active. In a VLAN, the shutdown command stops Layer 3 routed traffic only; Layer 2 traffic continues to pass through the VLAN. If the VLAN is not a routed VLAN (that is, configured with an IP address), the shutdown command has no effect on VLAN traffic. 100 Interfaces Adding an Interface to a Tagged VLAN The following example shows you how to add a tagged interface, Te 1/7, to a VLAN (VLAN 2). Enter the vlan tagged command to add interface Te 1/7 to VLAN 2, as shown below. Enter the show vlan command to verify that interface Te 1/7 is a tagged member of VLAN 2. Dell conf if te 1
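The tagged-VLAN example above is truncated in this extract, so the following hedged sketch shows the same sequence end to end; the interface and VLAN numbers are illustrative:
Dell(conf)# interface tengigabitethernet 1/7
Dell(conf-if-te-1/7)# vlan tagged 2
Dell(conf-if-te-1/7)# end
Dell# show vlan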
222. ed packets on all VLANs are received on a server 288
Software show Commands 289
Offline Diagnostics 291
Important Points to Remember 291
Running Offline Diagnostics
Trace Logs
Auto Save on Crash or Rollover 292
Using the Show Hardware Commands 293
Environmental Monitoring 294
Recognize an Over-Temperature Condition 295
Troubleshoot an Over-Temperature Condition 296
Recognize an Under-Voltage Condition 297
Troubleshoot an Under-Voltage Condition 297
Buffer Tuning
Deciding to Tune Buffers
Sample Buffer Profile Configuration
Troubleshooting Packet Loss
Displaying Drop Counters
Dataplane Statistics
Displaying Stack Port Statistics
Displaying Drop Counters
Restoring the Factory Default Settings
Important Points to Remember
25 Standards Compliance 309
IEEE Compliance 309
RFC and I-D Compliance 309
General Internet Protocols 310
General IPv4 Protocols 310
Network Management
MIB Location 314
About this Guide
This guide describes the supported protocols and software features and provides configuration instructions and examples for the Dell Networking M I/O Aggre
223. ed to transmit FIP keepalive advertisements The range is 8 to 90 seconds The default is 8 seconds Applying a DCB Map on Server Facing Ethernet Ports You can apply a DCB map only on a physical Ethernet interface and can apply only one DCB map per interface Task Command Command Mode Enter Interface Configuration interface CONFIGURATION mode on a server facing portto tengigabitEthernet slot apply a DCB map port fortygigabitEthernet slot port Apply the DCB map on an dcb map name INTERFACE Ethernet port Repeat this step to apply a DCB map to more than one port Applying an FCoE Map on Fabric Facing FC Ports The IOA and MXL switch with the FC Flex IO module FC ports are configured by default to operate in N port mode When you apply an FCoE map on a fabric facing FC port the FC port becomes part of the FCoE fabric Each IOA and MXL switch with the FC Flex IO module FC port is associated with an Ethernet MAC address FCF MAC address When you enable a fabric facing FC port the FCoE map applied to the port starts sending FIP multicast advertisements using the parameters in the FCoE map over server facing Ethernet ports A server sees the FC port with its applied FCoE map as an FCF port Task Command Command Mode Configure a fabric facing FC interface fibrechannel CONFIGURATION port slot port Apply the FCoE and FC fabric fabric map name INTERFACE FIBRE CHANNEL configurations in an FCoE map to the port Rep
224. elds predictable results across switch resets and chassis reloads A physical interface can belong to only one port channel at a time Each port channel must contain interfaces of the same interface type speed Port channels can contain a mix of 1000 or 10000 Mbps Ethernet interfaces The interface speed 100 1000 or 10000 Mbps used by the port channel is determined by the first port channel member that is physically up Dell Networking OS disables the interfaces that do not match the interface speed set by the first channel member That first interface may be the first interface that is physically brought up or was physically operating when interfaces were added to the port channel For example if the first operational interface in the port channel is a TenGigabit Ethernet interface all interfaces at 1000 Mbps are kept up and all 100 1000 10000 interfaces that are not set to 1000 Mbps speed or auto negotiate are disabled 1GbE and 10GbE Interfaces in Port Channels When both Gigabit and TenGigabitEthernet interfaces are added to a port channel the interfaces must share a common speed When interfaces have a configured speed different from the port channel speed the software disables those interfaces The common speed is determined when the port channel is first enabled At that time the software checks the first interface listed in the port channel configuration If that interface is enabled its speed configuration becomes the common speed
225. ell Networking OS version To upgrade your system type follow the procedures in the Dell Networking OS Release Notes Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center You can reach Technical Support e On the web http support dell com e By email Dell Force10_Technical_SupportODell com By phone US and Canada 866 965 5800 International 408 965 5800 286 Upgrade Procedures 24 Debugging and Diagnostics This chapter contains the following sections Debugging Aggregator Operation Software Show Commands Offline Diagnostics Trace Logs Show Hardware Commands Debugging Aggregator Operation This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation All interfaces on the Aggregator are operationally down This section describes how you can troubleshoot the scenario in which all the interfaces are down Symptom All Aggregator interfaces are down Resolution Ensure the port channel 128 is up and that the Aggregator facing port channel on the top of rack switch is correctly configured Steps to Take 1 Verify that uplink port channel 128 is up show interfaces port channel 128 brief command and display the status of member ports show uplink state group 1 detail command Dell show interfaces port channel 128 brief Codes L LACP Port chan
226. ell show ip interface configured Dell show ip interface tengigabitEthernet 1 configured Dell show ip interface brief configured Dell show running config interfaces configured Dell show running config interface tengigabitEthernet 1 configured n EXEC mode show interfaces switchportcommand displays only interfaces in Layer 2 mode and their relevant configuration information The show interfaces switchport command displays the interface whether the interface supports IEEE 802 1Q tagging or not and the VLANs to which the interface belongs show interfaces switchport Command Example Dell show interfaces switchport Name TenGigabitEthernet 13 0 802 10Tagged True Vlan membership Vlan 2 Interfaces 113 Name TenGigabitEthernet 13 1 802 10Tagged True Vlan membership Vlan 2 Name TenGigabitEthernet 13 2 802 10Tagged True Vlan membership Vlan Name 2 TenGigabitEthernet 13 3 802 10Tagged True Vlan membership Vlan 2 rMore Clearing Interface Counters The counters in the show interfaces command are reset by the clear counters command This command does not clear the counters captured by any SNMP program To clear the counters use the following command in EXEC Privilege mode Command Syntax clear counters interface Command Mode EXEC Privilege Purpose Clear the counters used in the show interface commands for all VRRP groups VLANs and phy
227. em does not occur if Ethernet traffic is not involved and only FCoE traffic is transmitted Also if DCB on the TOR switch is disabled traffic disruption does not occur Port Numbering for FC Flex IO Modules Even numbered ports are at the bottom of the I O panel and for modules odd numbered ports are at the top of the I O panel When installed in a PowerEdge M1000e Enclosure the MXL 10 40GbE Switch and Aggregator ports are numbered 33 to 56 from the bottom to the top of the switch The following port numbering convention applies to the FC Flex IO module e In expansion slot O the ports are numbered 41 to 44 e In expansion slot 1 the ports are numbered 49 to 52 Installing the Optics The following optical ports are supported on the FC Flex IO module using one of the supported breakout cables e 4G or 8G Fibre Channel small form factor pluggable plus SFP optics module and LC connectors over a distance of 150 meters e 4Gor 8G Fibre Channel SFP optics module and LC connectors over a distance of 4 km AN CAUTION Electrostatic discharge ESD damage can occur if the components are mishandled Always wear an ESD preventive wrist or heel ground strap when handling the FC Flex IO module and its components WARNING When working with optical fibres follow all the warning labels and always wear eye protection Never look directly into the end of a terminated or unterminated fibre or connector as it may cause eye damage 1 e Pos
228. ement Unit Thermal Sensor Readings deg C Unit Sensor0 Sensorl Sensor2 296 Debugging and Diagnostics Recognize an Under Voltage Condition If the system detects an under voltage condition it sends an alarm To recognize this condition look for the following system message CHMGR 1 CARD SHUTDOWN Major alarm Line card 2 down auto shutdown due to under voltage This message indicates that the specified card is not receiving enough power In response the system first shuts down Power over Ethernet PoE Troubleshoot an Under Voltage Condition To troubleshoot an under voltage condition check that the correct number of power supplies are installed and their Status light emitting diodes LEDs are lit The following table lists information for SNMP traps and OIDs which provide information about environmental monitoring hardware and hardware components Table 26 SNMP Traps and OIDs OID String OID Name Description Receiving Power 1 3 6 1 4 1 6027 3 10 1 2 5 1 6 chSysPortXfpRecvPower OID displays the receiving power of the connected optics Transmitting power 1 5 6 1 4 1 6027 5 10 1 2 5 1 8 chSysPortXfpTxPower OID displays the transmitting power of the connected optics Temperature 1 3 6 1 4 1 6027 3 10 1 2 5 1 7 chSysPortXfpRecvTemp OID displays the temperature of the connected optics K NOTE These OIDs only generate if you enable the enable optic info update interval is enabled command Har
229. eport that contains the multicast address of the group it wants to join the packet is addressed to the same group If multiple hosts want to join the same multicast group only the report from the first host to respond reaches the querier and the remaining hosts suppress their responses for how the delay timer mechanism works refer to IGMP Snooping The querier receives the report for a group and adds the group to the list of multicast groups associated with its outgoing port to the subnet Multicast traffic for the group is then forwarded to that subnet e Sending an Unsolicited IGMP Report A host does not have to wait for a general query to join a group It may send an unsolicited IGMP membership report also called an IGMP Join message to the querier Leaving a Multicast Group Ahost sends a membership report of type 0x17 IGMP Leave message to the all routers multicast address 224 0 0 2 when it no longer cares about multicast traffic for a particular group The querier sends a group specific query to determine whether there are any remaining hosts in the group There must be at least one receiver in a group on a subnet for a router to forward multicast traffic for that group to the subnet e Any remaining hosts respond to the query according to the delay timer mechanism refer to IGMP Snooping If no hosts respond because there are none remaining in the group the querier waits a specified period and sends another quer
230. er which a RADIUS host server is declared dead CONFIGURATION mode radius server deadtime seconds seconds the range is from 0 to 2147483647 The default is O seconds Configure a key for all RADIUS communications between the system and RADIUS server hosts CONFIGURATION mode radius server key encryption type key encryption type enter 7 to encrypt the password Enter O to keep the password as plain text key enter a string The key can be up to 42 characters long You cannot use spaces in the key e Configure the number of times Dell Networking OS retransmits RADIUS requests CONFIGURATION mode radius server retransmit retries 162 Security for M I O Aggregator retries the range is from 0 to 100 Default is 3 retries e Configure the time interval the system waits for a RADIUS server host response CONFIGURATION mode radius server timeout seconds seconds the range is from O to 1000 Default is 5 seconds To view the configuration of RADIUS communication parameters use the show running config command in EXEC Privilege mode Monitoring RADIUS To view information on RADIUS transactions use the following command e View RADIUS transactions to troubleshoot problems EXEC Privilege mode debug radius TACACS Dell Networking OS supports terminal access controller access control system TACACS client including support for login authentication Configuration Task List for TACACS The f
231. erface Ma DHCP RELEASE ay 27 15 55 DHCP RELEASE ay zx So Interface Ma Transitioned ay 27 15 55 Interface Ma dhcp interface managementethernet 0 0 22 SSTKUNITO M CP DHCLIENT 5 DHC 0 0 CMD Received in state BOUND 22 SSTKUNITO M CP SDHCLIENT 5 DHC sent in Interface Ma 0 0 22 SSTKUNITO M CP SDHCLIENT 5 DHC 0 0 to state STOPPED 22 SSTKUNITO M CP SDHCLIENT 5 DHC 0 0 DHCP IP RELEA 64 iT ENT LOG DHCLIENT DBG EVT ENT LOG DHCLIENT DBG PKT ENT LOG DHCLIENT DBG EVT ENT LOG DHCLIENT DBG EVT SED CMD sent to FTOS in state STOPPI Dynamic Host Configuration Protocol DHCP Dell renew dhcp interface tengigabitethernet 0 1 Dell May 27 15 55 28 SSTKUNITO M CP DHCLIENT 5 DHCLIENT LOG DHCLIENT DBG EVT Interface Ma 0 0 DHCP RENEW CMD Received in state STOPPED ay 27 15 55 31 SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHC TIENT DBG EVT Interface Ma 0 0 Transitioned to state SELECTING ay 27 15 55 31 SSTKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHC TIENT DBG PKT DHCP DISCOVER sent in Interface Ma 0 0 ay 27 15 55 31 STKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHC IENT DBG PKT Received DHCPOFFER packet in Interface Ma 0 0 with Lease Ip 10 16 134 250 ask 255 255 0 0 Server Id 10 16 134 249 DHCP Client An Aggregato
232. erface slot port CONFIGURATION 5 Set the local port speed speed 100 1000 10000 auto INTERFACE 6 Optionally set full or half duplex duplex half full INTERFACE 7 Disable auto negotiation on the no negotiation auto INTERFACE port If the speed is set to 1000 you do not need to disable auto negotiation 8 Verify configuration changes show config INTERFACE K NOTE The show interfaces status command displays link status but not administrative status For link and administrative status use the show ip interface interface brief configuration command show interface status Command Example Dell show interfaces status Port Description Status Speed Duplex Vlan Te 0 1 Down Auto Auto Te 0 2 Down Auto Auto Zo Te 0 3 Down Auto Auto Te 0 4 Down Auto Auto Te 0 5 Down Auto Auto zc Te 0 6 Down Auto Auto Te 0 7 Down Auto Auto Te 0 8 Down Auto Auto Te 0 9 Down Auto Auto Te 0 10 Down Auto Auto Te 0 11 Down Auto Auto Te 0 12 Down Auto Auto Te 0 13 Down Auto Auto output omitted In the above example several ports display Auto in the speed field including port 0 1 Now in the below example the speed of port 0 1 is set to 100 Mb and then its auto negotiation is disabled Setting Port Speed Example Dell configure Dell conf interface tengig 0 1 Dell conf if te 0 1 speed 1000 Dell conf if te 0 1 no negotiation auto Dell conf if te 0 1 show config l
233. ering the question mark first lists all available commands including the possible submodes The CLI modes are EXEC Privilege CONFIGURATION INTERFACE 10 GIGABIT ETHERNET INTERFACE RANGE MANAGEMENT ETHERNET LINE CONSOLE VIRTUAL TERMINAL MONITOR SESSION Navigating CLI Modes The Dell prompt changes to indicate the CLI mode The following table lists the CLI mode its prompt and information about how to access and exit the CLI mode Move linearly through the command modes except for the end command which takes you directly to EXEC Privilege mode and the exit command which moves you up one command mode level NOTE Sub CONFIGURATION modes all have the letters conf in the prompt with more modifiers to identify the mode and slot port information Table 1 Dell Command Modes CLI Command Mode Prompt Access Command EXEC Dell Access the router through the console or Telnet EXEC Privilege Dell e From EXEC mode enter the enable command From any other mode use the end command Configuration Fundamentals 21 CLI Command Mode Prompt Access Command CONFIGURATION Dell conf e From EXEC privilege mode enter the configure command From every mode except EXEC and EXEC Privilege enter the exit command NOTE Access all of the following modes from CONFIGURATION mode 10 Gigabit Ethernet Interfa
234. ermediary network device that passes DHCP messages between the client and server when the server is not on the same subnet as the host K NOTE The DHCP server and relay agent features are not supported on an Aggregator Assigning an IP Address using DHCP The following section describes DHCP and the client in a network When a client joins a network 1 The client initially broadcasts a DHCPDISCOVER message on the subnet to discover available DHCP servers This message includes the parameters that the client requires and might include suggested values for those parameters 2 Servers unicast or broadcast a DHCPOFFER message in response to the DHCPDISCOVER that offers to the client values for the requested parameters Multiple servers might respond to a single DHCPDISCOVER the client might wait a period of time and then act on the most preferred offer 3 The client broadcasts a DHCPREQUEST message in response to the offer requesting the offered values 4 After receiving a DHCPREQUEST the server binds the clients unique identifier the hardware address plus IP address to the accepted configuration parameters and stores the data in a database called a binding table The server then broadcasts a DHCPACK message which signals to the client that it may begin using the assigned parameters There are additional messages that are used in case the DHCP negotiation deviates from the process previously described and shown in the illustration
235. es in NPIV mode e With FC Flex IO modules you can connect the IOA in Simple MUX mode to a single fabric e With FC Flex IO modules on an IOA the FC port speed is set to auto The following parameters are automatically configured on the ENode facing and FC ports Description SAN FABRIC e Fabric id 1002 e Fcoe vlan 1002 e Fc map Ox0efc00 e Fcf priority 128 e Fka adv period 8000mSec e Keepalive enable e Vlan priority 5 e Onan IOA the FCoE virtual local area network VLAN is automatically configured e With FC Flex IO modules on an IOA the following DCB maps are applied on all of the ENode facing ports e dcb map SAN DCB MAP e priority group O bandwidth 50 pfc off e priority group 1 bandwidth 30 pfc off e priority group 2 bandwidth 40 pfc on e priority pgid 00021000 262 FC Flex IO Modules e On I O Aggregators uplink failure detection UFD is disabled if FC Flex IO module is present to allow server ports to communicate with the FC fabric even when the Ethernet upstream ports are not operationally up Ensure that the NPIV functionality is enabled on the upstream switches that operate as FC switches or FCoE forwarders FCF before you connect the FC port of the MXL or I O Aggregator to these upstream switches While storage traffic traverses through FC Flex IO modules and the Ethernet uplink port channel status changes with DCB enabled on an adjacent switch FCoE traffic is disrupted This probl
236. esponds to the FIP VLAN discovery request from the host based on the configured FCoE VLANs For every ENode and VN Port that is logged in the FIP application responds to keepalive messages for the virtual channeL If the FC link becomes inactive or a logging off of the switch occurs the FIP engine sends clear virtual link CVL messages to the host The FIP application also responds to solicited advertisements from the end device In addition the FIP application periodically sends advertisement packets to the end devices for each FCF that is part of the NPIV proxy gateway If FC Flex IO modules are installed the I O Aggregator does not perform FIP snooping because the FIP frames are terminated on the switch for NPIV operations However on MXL Switches you can configure the switch to operate in FIP Snooping or NPIV mode If the MXL 10 40GbE Switch functions in the NPIV mode and you attempt to set the uplink port to be an FCF or a bridge port a warning message displays and the settings are not saved On the Aggregator if the FC module is present the uplink ports are not automatically set up as FCF or bridge ports The FC Flex module cannot function as both an NPIV proxy gateway and a FIP snooping bridge at the same time Operation of the NPIV Proxy Gateway The NPIV application on the FC Flex IO module manages the FC functionalities configured in Dell Networking OS After the FC link comes up the gateway sends the initial FLOGI request to the con
237. et to be isolated all the packets of originating from the server ports for that VLAN Isolated Network will be redirected to uplink LAG including the packets destined for the server ports on the same blade ToR applies required ACLs and other necessary actions before sending the packet to destination If the packet is destined to server on the same IOA blade it is routed back on the uplink lag where it was received Traffic that hits at the uplink ports are regularly switched based on the L2 MAC lookup Unknown Unicast and Multicast packets from Uplink Port towards server port on an isolated network enabled VLAN is dropped The isolated network feature is supported only in the standalone mode Isolated network is currently not supported in the following modes e VLT mode e Stacking mode e PMUX mode Es NOTE Isolated Networks is not enabled for FCOE VLANs and on default VLAN It can be managed via CLl or AFM For more information refer to AFM user manual Configuring and Verifying Isolated Network Settings Enable the isolated network functionality for a particular VLAN or a set of VLANs using below command Dell conf io aggregator isolated network vlan lt vlan range gt To disable the isolated network functionality use the no form of command Dell conf no io aggregator isolated network vlan lt vlan range gt To display the VLANs that are configured to be part of an isolated network on the Aggregator use the below command
238. etworking OS software versions on the VLT peers is compatible For more information refer to the Release Notes for this release Verify the VLT LAG ID is configured correctly on both VLT peers Perform a mismatch check after the VLT peer is established PMUX Mode of the IO Aggregator 22 FC Flex IO Modules This part provides a generic broad level description of the operations capabilities and configuration commands of the Fiber Channel FC Flex IO module FC Flex IO Modules This part provides a generic broad level description of the operations capabilities and configuration commands of the Fiber Channel FC Flex IO module Understanding and Working of the FC Flex IO Modules This chapter provides a generic broad level description of the operations and functionality of the Fiber Channel FC Flex IO module and contains the following sections e FCFlexIO Modules Overview FC Flex IO Module Capabilities and Operations Guidelines for Working with FC Flex IO Modules Processing of Data Traffic Installing and Configuring the Switch Interconnectivity of FC Flex IO Modules with Cisco MDS Switches FC Flex IO Modules Overview The Fibre Channel FC Flex IO module is supported on MXL 10 40GbE Switch and M I O Aggregator IOA The MXL or IOA switch installed with the FC Flex IO module functions as a top of rack edge switch that supports converged enhanced Ethernet CEE traffic Fibre Channel over Et
239. f tuser gooduser password abc privilege 10 access class permitall Dell conf user baduser password abc privilege 10 access class denyall Dell conf Dell conf taaa authentication login localmethod local Dell conf Dell conf line vty 0 9 Dell config line vty ilogin authentication localmethod Dell config line vty end VTY Line Remote Authentication and Authorization Dell Networking OS retrieves the access class from the VTY line The Dell Networking OS takes the access class from the VTY line and applies it to ALL users Dell Networking OS does not need to know the identity of the incoming user and can immediately apply the access Class If the authentication method is RADIUS TACACS or line and you have configured an access Class for the VTY line Dell Networking OS immediately applies it If the access class is set to deny all or deny for the incoming subnet Dell Networking OS closes the connection without displaying the login prompt The following example shows how to deny incoming connections from subnet 10 0 0 0 without displaying a login prompt The example uses TACACS as the authentication mechanism 174 Security for M I O Aggregator Example of Configuring VTY Authorization Based on Access Class Retrieved from the Line Per Network Address Dell conf tip access list standard denyl0 Dell conf ext nacl permit 10 0 0 0 8 Dell conf ext nacl deny any Dell conf Dell conf aaa authentication log
240. f vlt domain back up destination 169 254 31 23 Dell conf vlt domain system mac mac address 00 01 09 06 06 06 gt unit id O VLT Primary unit id 1 VLT Secondary Dell conf vlt domain unit id O Dell conf vlt domain end 3 Configure the VLT port channel In the following example the local and remote VLT port channels are the same but you can also use different VLT port channels Dell configure Dell conf int port channel 128 Dell conf if po 128 portmode hybrid Dell conf if po 128 switchport Dell conf if po 128 vlt peer lag port channel 128 Dell conf if po 128 link bundle monitor enable Dell conf if po 128 no shutdown Dell conf if po 128 end 4 Show the VLT peer status Dell show vlt brief VLT Domain Brief Domain ID 1 Role Primary Role Priority 32768 ICL Link Status Up HeartBeat Status Up VLT Peer Status Up Local Unit Id 0 Version 6 2 Local System MAC address 00 01 e8 el el c3 Remote System MAC address 8 5b1 56 0e b1 7 Configured System MAC address 00 01 09 06 06 06 Remote system version 6 2 Delay Restore timer 90 seconds Peer Routing Disabled Peer Routing Timeout timer 0 seconds 250 PMUX Mode of the IO Aggregator Multicast peer routing timeout 150 seconds Dell 5 Configure the secondary VLT NOTE Repeat steps from 1 through 4 on the secondary VLT ensuring you use the different backup destination and unit id
241. f0 0 37 port channel protocol lacp Dell conf if range fo 0 33 f0 0 37 lacp port channel 20 mode active Dell conf Dell conf int fortygige 0 49 Dell conf if fo 0 49 port channel protocol lacp Dell conf if fo 0 49 lacp port channel 21 mode active Dell conf if fo 0 49 lacp Dell conf if fo 0 49 no shut 4 Configure the port mode VLAN and so forth on the port channel Dell configure Dell conf int port channel 20 Dell conf if po 21 end Dell conf if po 21 no shut Dell conf if po 20 portmode hybrid Dell conf if po 20 switchport Dell conf if po 20 no shut Dell conf if po 20 ex Dell conf int port channel 21 Dell conf if po 21 portmode hybrid Dell conf if po 21 switchport Dell 5 Show the port channel status Dell sh int port channel br Codes L LACP Port channel O OpenFlow Controller Port channel LAG Mode Status Uptime Ports L 20 L2 up 00 00 53 Fo 0 33 Up Fo 0 37 Up L 241 L2 up 00 00 02 Fo 0 49 Up Dell Dell conf int port channel 20 Dell conf if po 20 vlan tagged 1000 Dell conf if po 20 Dell conf if po 21 vlan tagged 1000 Error Same VLAN cannot be added to more than one uplink port LAG Dell conf if po 21 vlan tagged 1001 Dell conf if po 21 228 PMUX Mode of the IO Aggregator 6 Show the VLAN status Dell show vlan Codes Default VLAN G GVRP VLANs R Remote Port Mirroring VLAN
242. fabric facing FC ports Enabling Fibre Channel Capability on the Switch Enable the FC Flex IO module on an MXL 10 40GbE Switch or an M I O Aggregator that you want to configure as an NPG for the Fibre Channel protocol When you enable Fibre Channel capability FCoE transit with FIP snooping is automatically enabled on all VLANs on the switch using the default FCoE transit settings Task Command Command Mode Enable an MXL 10 40GbE Switch and M I O feature fc CONFIGURATION Aggregator with the FC Flex IO module for the Fibre Channel protocol Creating a DCB Map Configure the priority based flow control PFC and enhanced traffic selection ETS settings in a DCB map before you apply them on downstream server facing ports on an MXL 10 40GbE Switch or an M I O Aggregator with the FC Flex IO module Step Task Command Command Mode 1 Create a DCB map to specify PFC and ETS dcb map name CONFIGURATION settings for groups of dot1p priorities 2 Configure the PFC setting on or off and the priority group DCB MAP ETS bandwidth percentage allocated to traffic group num bandwidth in each priority group Configure whether the A priority group traffic should be handled with strict priority scheduling The sum of all allocated bandwidth percentages must be 100 percent Strict priority traffic is serviced first Afterward bandwidth allocated to other priority groups is made available and allocated according to the specified percentages If a prior
243. fails the switch disables the downstream links Failures on the downstream links allow downstream devices to recognize the loss of upstream connectivity For example as shown in the following illustration Switches S1 and S2 both have upstream connectivity to Router R1 and downstream connectivity to the server UFD operation is shown in Steps A through C e In Step A the server configuration uses the connection to S1 as the primary path Network traffic flows from the server to S1 and then upstream to R1 e InStep B the upstream link between S1 and R1 fails The server continues to use the link to S1 for its network traffic but the traffic is not successfully switched through S1 because the upstream link is down e InStep C UFD on S1 disables the link to the server The server then stops using the link to S1 and switches to using its link to S2 to send traffic upstream to R1 NOTE In Standalone VLT and Stacking modes the UFD group number is 1 by default and cannot be changed Uplink Failure Detection UFD 213 R1 R1 R1 st y s2 1 x 2 1 x S2 oo Ve ve m el f e AX A Server Server Server A Switches 1 and 2 have upstream and downstream connections to Router1 and Server via primary Links B Upstream link between Switch1 and Router1 fails Downstream link to Server stays up temporarily C Switch1 disables downstream link to Server Server starts to connect with Router1 using backup link to Switch2 Switch2 star
244. fully online e Resolution When the entire stack is reloaded the recovered master switch becomes the master unit of the stack Stack Unit in Card Problem State Due to Incorrect Dell Networking OS Version e Problem A stack unit enters a Card Problem state because the switch has a different Dell Networking OS version than the master unit The switch does not come online as a stack unit e Resolution To restore a stack unit with an incorrect Dell Networking OS version as a member unit disconnect the stacking cables on the switch and install the correct Dell Networking OS version Then add the switch to the stack as described in Adding a Stack Unit To verify that the problem has been resolved and the stacked switch is back online use the show system brief command Dell show system brief Stack MAC 00 le c9 1 00 9b Stack Into Unit UnitType Status ReqTyp CurTyp Version Ports 0 anagement online I O Aggregator 1 0 Aggregator 8 3 17 46 56 1 Standby card problem 1 0 Aggregator unknown 56 2 Member not present 3 Member not present 4 Member not present 5 ember not present Card Problem Resolved Dell show system brief Stack MAC 00 1e c9 1 04 82 Stack Info Unit UnitType Status ReqTyp CurTyp Version Ports 0 anagement online I O Aggregator I O Aggregator 8 3 17 52 56 1 Standby online 1 0 Aggregator I O Aggregator 8 3 17 52 56 2 Member not present 3 Member not present 204 Stacking 4 Member no
245. fying a Stack Configuration The following lists the status of a stacked switch according to the color of the System Status light emitting diodes LEDs on its front panel Blue indicates the switch is operating as the stack master or as a standalone unit e Off indicates the switch is a member or standby unit e Amber indicates the switch is booting or a failure condition has occurred Using Show Commands To display information on the stack configuration use the show commands on the master switch e Displays stacking roles master standby and member units and the stack MAC address how system brief isplays the FlexiO modules currently installed in expansion slots O and 1 on a switch and the xpected module logically provisioned for the slot e oga how inventory optional module isplays the stack groups allocated on a stacked switch The range is from 0 to 5 e DO show system stack unit unit number stack group configured e Displays the port numbers that correspond to the stack groups on a switch The valid stack unit numbers are from O to 5 show system stack unit unit number stack group e Displays the type of stack topology ring or daisy chain with a list of all stacked ports port status link peed and peer stack unit connection 4 show system stack ports status topology Example of the show system brief Command Dell show system brief StStack MAC 00 1e c9 1 00 9b Stack Info
246. ganized into two buffer pools the dedicated buffer and the dynamic buffer e Dedicated buffer this pool is reserved memory that other interfaces cannot use on the same ASIC or by other queues on the same interface This buffer is always allocated and no dynamic re carving takes place based on changes in interface status Dedicated buffers introduce a trade off They provide each interface with a guaranteed minimum buffer to prevent an overused and congested interface from starving all other interfaces However this minimum guarantee means that the buffer manager does not reallocate the buffer to an adjacent congested interface which means that in some cases memory is under used Dynamic buffer this pool is shared memory that is allocated as needed up to a configured limit Using dynamic buffers provides the benefit of statistical buffer sharing An interface requests dynamic buffers when its dedicated buffer pool is exhausted The buffer manager grants the request based on three conditions The number of used and available dynamic buffers The maximum number of cells that an interface can occupy Available packet pointers 2k per interface Each packet is managed in the buffer using a unique packet pointer Thus each interface can manage up to 2k packets You can configure dynamic buffers per port on both 1G and 10G FPs and per queue on CSFs By default the FP dynamic buffer allocation is 10 times oversubscribed
247. gator running Dell Networking OS version 9 4 0 0 The MI O Aggregator is installed in a Dell PowerEdge M I O Aggregator For information about how to install and perform the initial switch configuration refer to the Getting Started Guides on the Dell Support website at http www dell com support manuals Though this guide contains information about protocols it is not intended to be a complete reference This guide is a reference for configuring protocols on Dell Networking systems For complete information about protocols refer to other documentation including IETF requests for comment RFCs The instructions in this guide cite relevant RFCs and Standards Compliance contains a complete list of the supported RFCs and management information base files MIBs K NOTE You can perform some of the configuration tasks described in this document by using either the Dell command line or the chassis management controller CMC graphical interface Tasks supported by the CMC interface are shown with the CMC icon CMC Audience This document is intended for system administrators who are responsible for configuring and maintaining networks and assumes knowledge in Layer 2 and Layer 3 networking technologies Conventions This guide uses the following conventions to describe command syntax Keyword Keywords are in Courier a monospaced font and must be entered in the CLI as listed parameter Parameters are in italics and require a num
248. ge te 0 41 42 lacp fend L L L Dell Dell configure L L L L Dell conf tint tengigabitethernet 0 43 Dell conf if te 0 43 port channel protocol lacp Dell conf if te 0 43 lacp port channel 11 mode active Dell conf if te 0 43 lacp end Dell 3 Show the LAG configurations and operational status Dell show interface port channel brief Codes L LACP Port channel O OpenFlow Controller Port channel LAG Mode Status Uptime Ports L 10 L3 up 00 01 00 Te 0 41 Up Te 0 42 Up L 11 L3 up 00 00 01 Te 0 43 Up Dell 4 Configure the port mode VLAN and so forth on the port channel Dell configure Dell conf int port channel 10 Dell conf if po 10 portmode hybrid Dell conf if po 10 switchport Dell conf if po 10 vlan tagged 1000 Dell conf if po 10 link bundle monitor enable Dell conf int port channel 11 Dell conf if po 11 portmode hybrid Dell conf if po 11 switchport Dell 9 Dell configure conf if po 11 vlan tagged 1000 E E H Error Same VLAN cannot be added to more than one uplink port LAG Dell conf if po 11 vlan tagged 1001 Dell conf if po 11 link bundle monitor enable Dell show vlan Codes Default VLAN G GVRP VLANs R Remote Port irroring VLANs P Primary C Community I Isolated O Openflow Q U Untagged T Tagged 226 PMUX Mode of the IO Aggregator Dotlx untag
249. ge up a Page down T Increase refresh interval t Decrease refresh interval a Quit Maintenance Using TDR The time domain reflectometer TDR is supported on all Dell Networking switch routers TDR is an assistance tool to resolve link issues that helps detect obvious open or short conditions within any of the four copper pairs TDR sends a signal onto the physical cable and examines the reflection of the signal that returns By examining the reflection TDR is able to indicate whether there is a cable fault when the cable is broken becomes unterminated or if a transceiver is unplugged TDR is useful for troubleshooting an interface that is not establishing a link that is when the link is flapping or not coming up Do not use TDR on an interface that is passing traffic When a TDR test is run on a physical cable it is important to shut down the port on the far end of the cable Otherwise it may lead to incorrect test results Es NOTE TDR is an intrusive test Do not run TDR on a link that is up and passing traffic To test the condition of cables on 100 1000 10000 BASE T modules follow the below steps using the tdr cable test command Step Command Syntax Command Mode Usage de tdr cable test tengigabitethernet EXEC Privilege To test for cable faults on the lt slot gt lt port gt TenGigabitEthernet cable e Between two ports you must not start the test on both ends of the cable Enable the interface before starting the
250. ged X Dotlx tagged OpenFlow untagged O OpenFlow tagged GVRP tagged M Vlan stack H VSN tagged i Internal untagged I Internal tagged v VLT untagged V VLT tagged QOX NUM Status Description Q Ports 1 Active U Pol0 Te 0 4 5 U Poll Te 0 6 1000 Active T Pol0 Te 0 4 5 1001 Active T Poll Te 0 6 Dell 5 Show LAG member ports utilization Dell show link bundle distribution Link bundle trigger threshold 60 LAG bundle 10 Utilization In Percent 0 Alarm State Inactive Interface Line Protocol Utilization In Percent Te 0 41 Up 0 Te 0 42 Up 0 LAG bundle 11 Utilization In Percent 0 Alarm State Inactive Interface Line Protocol Utilization In Percent Te 0 6 Up 0 Dell Multiple Uplink LAGs with 40G Member Ports By default in IOA native 40G QSFP optional module ports are used in Quad 4x10G mode to convert Quad mode to Native 40G mode refer to the sample configuration Also note converting between Quad mode and Native mode and vice versa requires that you reload the system for the configuration changes to take effect The following sample commands configure multiple dynamic uplink LAGs with 40G member ports based on LACP 1 Convert the quad mode 4x10G ports to native 40G mode Dell configure Dell conf no stack unit 0 port 33 portmode quad Disabling quad mode on stack unit 0 port 33 will make interface configs Te 0 33 Te 0 34 Te 0 35 Te 0 36 obsolet
251. ggregator is in Standalone mode where all the 4000 VLANs are part of all server side interfaces as well as the single uplink LAG it takes a long time 30 seconds or more for external Z2000000000000000000000000 ANDAADAA AAA AD Qe ee se se se oe se eo o Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Uni Unit ESPE a E Cr eer eer eet TE EE LE ATT OCT S i A AED RD EE a ETeET Unit UN LES Optiona 4 port Unit Unit Unit Unit 40G QSFP 40G QOSFP4 Pore Port 10 Port 11 Port 12 Port 13 Port 14 Port 15 Port 16 Port 17 Port 18 Port 19 Port 20 Port 21 Port 22 Port 23 Port 24 Port 25 Port 26 Port 27 Port 28 Port 29 Port 30 Port 31 Port 32 P port Port 33 P port Port 37 1 module 10GE SFP Port 41 Port 42 Port 43 Port 44 9 10G Level Optional module 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 40G 40G o Level Level Level Level Level Level Level Level Level Level LEVEL Level Level Level Level Level Level Level Level Level Level Level Level XI 10G 10G 10G 10G qM Level Level Level Level Level Level management entities to discover the entire VLAN membership table of all the ports Support for current status OID in the standard VLAN MIB is expected to
252. ging Fibre Channel over Ethernet and NPIV Proxy Gateway features are supported on the FC Flex IO modules For detailed information about these applications and their working see the corresponding chapters for these applications in this manual The following figures illustrate two deployment scenarios of configuring FC Flex IO modules FC Flex IO Modules 267 Comoelienti M1900e chassis Ad ag Sera cm FC linia Figure 31 Case 1 Deployment Scenario of Configuring FC Flex IO Modules Emerner Unis Ethernet iras M1000e chassis a PENAS Figure 32 Case 2 Deployment Scenario of Configuring FC Flex IO Modules Fibre Channel over Ethernet for FC Flex IO Modules FCOE provides a converged Ethernet network that allows the combination of storage area network SAN and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames The Fibre Channel FC Flex IO module is supported on Dell Networking Operating System OS MXL 10 40GbE Switch and M I O Aggregator IOA The MXL and IOA switch installed with the FC Flex IO module functions as a top of rack edge switch that supports converged enhanced Ethernet CEE traffic Fibre channel over Ethernet FCOE for storage Interprocess Communication IPC for servers and 268 FC Flex IO Modules Ethernet local area network LAN IP cloud for data as well as FC links to one or more storage area network SAN fabrics FCoE works with the Ethernet enha
253. ging and Diag mode ion Operating System Software 1 0 ication Software Version E8 3 17 24 nostics onfiguration on an Aggregator in stacking 289 Copyright c 1999 2014 by Dell Inc All Rights Reserved Build Time Thu Jul 5 11 20 28 PDT 2012 Build Path sites sjc work build buildSpaces build05 E8 3 17 SW SRC Cp src Tacacs st sjc m1000e 3 72 uptime is 17 hour s 1 minute s System Type 1 O Aggregator Control Processor MIPS RMI XLP with 2147483648 bytes of memory 256M bytes of boot flash memory 1 34 port GE TE XL 56 Ten GigabitEthernet IEEE 802 3 interface s Dell show system stack unit 0 Command Example Dell show system stack unit 0 no more Unit 0 Unit Type Management Unit Status online Next Boot online Required Type I O Aggregator 34 port GE TE XL Current Type I O Aggregator 34 port GE TE XL aster priority 0 Hardware Rev Num Ports 56 Up Time 17 hr 8 min FTOS Version 8 3 17 15 Jumbo Capable yes POE Capable no Boot Flash A 4 0 1 0 booted B 4 0 1 0bt Boot Selector 4 0 0 0 emory Size 2147483648 bytes Temperature 64C Voltage ok Switch Power GOOD Product Name I O Aggregator fg By DELL fg Date 2012 05 01 Serial Number TW282921F00038 Part Number ONVH81 Piece Part ID TW 0NVH81 28292 1F0 0038 PPID Revision Service Tag N A Expr Svc Code N A PSOC FW Rev Oxb ICT Test D
254. gured as auto-downstream ports. On the Aggregator, PFC and ETS use DCBX to exchange link-level configuration with DCBX peer devices. [Figure 4 DCBx Sample Topology — SAN (storage network) with FC or iSCSI storage arrays, a ToR switch (FCF), 10GbE backplane connections, MXL switches installed in the M1000e chassis, and servers installed in the M1000e chassis.] DCBX Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: DCBX requires LLDP in both send (TX) and receive (RX) mode to be enabled on a port interface; if multiple DCBX peer ports are detected on a local DCBX interface, LLDP is shut down. The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion management (BCN), logical link down (LLD), and network interface virtualization (NIV). DCBX Error Messages The following syslog messages appear when an error in DCBx operation occurs: LLDP_MULTIPLE_PEER_DETECTED — DCBx is operationally disabled after detecting more than one DCBx peer on the port interface. LLDP_PEER_AGE_OUT — DCBx is disabled as a result of LLDP timing out on a DCBx peer interface. DSM_DCBx_PEER_VERSION_CONFLICT — A local port expected to receive the IEEE CIN CEE ver
255. h independent control and data planes for devices attached on non VLT ports One chassis in the VLT domain is assigned a primary role the other chassis takes the secondary role The primary and secondary roles are required for scenarios when connectivity between the chassis is lost VLT assigns the primary chassis role according to the lowest MAC address You can configure the primary role In a VLT domain the peer switches must run the same Dell Networking OS software version Separately configure each VLT peer switch with the same VLT domain ID and the VLT version If the system detects mismatches between VLT peer switches in the VLT domain ID or VLT version the VLT Interconnect VLTi does not activate To find the reason for the VLTi being down use the show vlt statistics command to verify that there are mismatch errors then use the show vlt brief command on each VLT peer to view the VLT version on the peer switch If the VLT version is more than one release different from the current version in use the VLTi does not activate The chassis members in a VLT domain support connection to orphan hosts and switches that are not connected to both switches in the VLT core e VLT interconnect VLTi 248 The VLT interconnect must consist of either 10G or 40G ports A maximum of eight 10G or four 40G ports is supported A combination of 10G and 40G ports is not supported A VLT interconnect over 1G ports is not supported The port channel m
256. h through the FC Flex IO module N port Server fabric login FLOGI requests are converted into fabric discovery FDISC requests before being forwarded by the FC Flex IO module to the FC core switch Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway FCOE transit with FIP snooping is automatically enabled and configured on the M1000e gateway to prevent unauthorized access and data transmission to the SAN network see FCoE Transit FIP is used by server CNAs to discover an FCoE switch operating as an FCoE forwarder FCF The NPIV proxy gateway aggregates multiple locally connected server CNA ports into one or more upstream N port links conserving the number of ports required on an upstream FC core switch while providing an FCoE to FC bridging functionality The upstream N ports on an M1000e can connect to the same or multiple fabrics Using an FCoE map applied to downstream server facing Ethernet ports and upstream fabric facing FC ports you can configure the association between a SAN fabric and the FCoE VLAN that connects servers over the NPIV proxy gateway to FC switches in the fabric An FCoE map virtualizes the upstream SAN fabric as an FCF to downstream CNA ports on FCoE enabled servers as follows e Assoon as an FC N port comes online no shutdown command the NPG starts sending FIP multicast advertisements which contain the fabric name derived from the 64 bit worldwide name WWN of
257. he FCoE map containing the FCoE FC configuration parameters for the server CNA fabric connection Worldwide port name of the server CNA port Worldwide node name of the server CNA Fabric provided MAC address FPMA The FPMA consists of the FC MAP value in the FCoE map and the FC ID provided by the fabric after a successful FLOGI In the FPMA the most significant bytes are the FC MAP the least significant bytes are the FC ID FC port ID provided by the fabric Method used by the server CNA to log in to the fabric for example FLOGI or FDISC Number of seconds that the fabric connection is up Status of the fabric connection logged in show fc switch Command Example Dell show fc switch Switch Mode NPG Switch WWN 10 00 5c f9 dd ef 10 c0 Dell Table 24 show fc switch Command Description Field Description Switch Mode Fibre Channel mode o f operation of an MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module Default NPG configured as an NPIV proxy gateway Switch WWN Factory assigned worldwide node WWN name of the MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module The MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module WWN name is not user configurable FC Flex IO Modules 285 23 Upgrade Procedures To find the upgrade procedures go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired D
258. hernet FCoE for storage Interprocess Communication IPC for servers and Ethernet local area network LAN IP cloud for data as well as FC links to one or more storage area network SAN fabrics Although the MXL 10 40GbE Switch and the I O Aggregator can act as a FIP snooping bridge FSB to provide FCoE transit switch capabilities the salient and significant advantage of deploying the FC Flex IO module is to enable more streamlined and cohesive FCoE N port identifier virtualization NPIV proxy gateway functionalities The NPIV proxy gateway NPG provides FCoE FC bridging behavior The FC Flex IO module offers a rich comprehensive set of FCoE functionalities on the M1000e chassis by splitting the Ethernet and Fibre Channel FC traffic at the edge of the chassis The FC switches that are connected directly to the FC Flex IO module provide Fibre Channel capabilities because the FC Flex IO module does not support full fabric functionalities With the separation of Ethernet and FC packets performed at the edge of the chassis itself you can use the MXL 10 40GbE Switch or the Aggregator that contains an FC Flex IO module to connect to a SAN environment without the need for a separate TOR switch to operate as NPIV proxy gateways The MXL 10 40GbE Switch or the I O Aggregator can function in NPIV proxy gateway mode when an FC Flex IO module is present or in the FIP snooping bridge FSB mode when all the ports are Ethernet ports The FC Flex IO
259. hernet 0 3 4 Dell show running config uplink state group uplink state group 3 description Testing UFD feature downstream disable links 2 downstream TenGigabitEthernet 0 1 2 5 9 11 12 upstream TenGigabitEthernet 0 3 4 Dell show uplink state group 3 Uplink State Group 3 Status Enabled Up Dell show uplink state group detail Up Interface up Dwn Interface down Dis Interface disabled Uplink State Group 3 Status Enabled Up 222 Uplink Failure Detection UFD Upstream Interfaces Te 0 3 Up Te 0 4 Up Downstream Interfaces Te 0 1 Up Te 0 2 Up Te 0 5 Up Te 0 9 Up Te 0 11 Up Te 0 12 Up lt After a single uplink port fails gt Dell show uplink state group detail Up Interface up Dwn Interface down Dis Interface disabled Uplink State Group 3 Status Enabled Up Upstream Interfaces Te 0 3 Dwn Te 0 4 Up Downstream Interfaces Te 0 1 Dis Te 0 2 Dis Te 0 5 Up Te 0 9 Up Te 0 11 Up Te 0 12 Up Uplink Failure Detection SMUX mode In Standalone or VLT modes by default all the server facing ports are tracked by the operational status of the uplink LAG If the uplink LAG goes down the aggregator loses its connectivity and is no longer operational All the server facing ports are brought down after the specified defer timer interval which is 10 seconds by default If you have configured VLAN you can reduce the defer time by changing the defer
260. hronized with the standby stack unit The FCoE database is maintained by snooping FIP keep alive messages e n case of a failover the new master switch starts the required timers for the FCoE database tables Timers run only on the master stack unit K NOTE While technically possible to run FIP snooping and stacking concurrently Dell Networking recommends a SAN design utilizes two redundant FCoE network path versus stacking This avoids a single point of failure to the SAN and provides a guaranteed latency The overall latency could easily rise above desired SAN limits if a link level failure redirects traffic over the stacking backplane How FIP Snooping is Implemented As soon as the Aggregator is activated in an M1000e chassis as a switch bridge existing VLAN specific and FIP snooping auto configurations are applied The Aggregator snoops FIP packets on VLANs enabled for FIP snooping and allows legitimate sessions By default all FCoE and FIP frames are dropped unless specifically permitted by existing FIP snooping generated ACLs FIP Snooping on VLANs FIP snooping is enabled globally on an Aggregator on all VLANs e FIP frames are allowed to pass through the switch on the enabled VLANs and are processed to generate FIP snooping ACLs e FCoE traffic is allowed on VLANs only after a successful virtual link initialization fabric login FLOGI between an ENode and an FCF All other FCoE traffic is dropped e Atleast one interface is
261.
Configuration Source Election
Propagation of DCB Information
Auto-Detection of the DCBx Version
DCBx Example
DCBX Prerequisites and Restrictions
DCBX Error Messages
Debugging DCBX on an Interface
Verifying the DCB Configuration
Hierarchical Scheduling in ETS Output Policies
5 Dynamic Host Configuration Protocol (DHCP).....61
Assigning an IP Address using DHCP.....61
Debugging DHCP Client Operation.....63
DHCP Client.....65
How DHCP Client is Implemented.....65
DHCP Client on a Management Interface.....66
DHCP Client on a VLAN
DHCP Packet Format and Options
Releasing and Renewing DHCP-based IP Addresses.....69
Viewing DHCP Statistics and Lease Information.....69
6 FIP Snooping.....71
Fibre Channel over Ethernet.....71
Ensuring Robustness in a Converged Ethernet Network.....71
FIP Snooping on Ethernet Bridges.....72
FIP Snooping in a Switch Stack.....75
How FIP Snooping is Implemented.....
262. ic to which FCoE storage traffic is sent Using an FCoE map an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPG operates as an FCoE FC bridge between an FC SAN and FCoE network by providing FCoE enabled servers and switches with the necessary parameters to log in to a SAN fabric An FCoE map applies the following parameters on server facing Ethernet and fabric facing FC ports on the MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module e The dedicated FCoE VLAN used to transport FCoE storage traffic The FC MAP value used to generate a fabric provided MAC address The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed Each Fibre Channel fabric serves as an isolated SAN topology within the same physical network e The priority used by a server to select an upstream FCoE forwarder FCF priority e FIP keepalive FKA advertisement timeout NOTE In each FCoE map the fabric 1D FC MAP value and FCoE VLAN must be unique Use one FCoE map to access one SAN fabric You cannot use the same FCoE map to access different fabrics When you configure an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module as an NPG FCoE transit with FIP snooping is automatically enabled and configured using the parameters in the FCoE map applied to server facing Ethernet and fabric facing FC interfaces see FIP Snooping on an NPIV Proxy Gateway Afte
263. ilable after strict priority groups are serviced If a priority group does not use its allocated bandwidth the unused bandwidth is made available to 32 Data Center Bridging DCB other priority groups so that the sum of the bandwidth use is 100 If priority group bandwidth use exceeds 100 all configured priority group bandwidth is decremented based on the configured percentage ratio until all priority group bandwidth use is 100 If priority group bandwidth usage is less than or equal to 100 and any default priority groups exist a minimum of 1 bandwidth use is assigned by decreasing 1 of bandwidth from the other priority groups until priority group bandwidth use is 100 e For ETS traffic selection an algorithm is applied to priority groups using Strict priority shaping ETS shaping Credit based shaping is not supported e ETS uses the DCB MIB IEEE 802 1azd2 5 Configuring Enhanced Transmission Selection ETS provides a way to optimize bandwidth allocation to outbound 802 1p classes of converged Ethernet traffic Different traffic types have different service needs Using ETS you can create groups within an 802 1p priority class to configure different treatment for traffic with different bandwidth latency and best effort needs For example storage traffic is sensitive to frame loss interprocess communication IPC traffic is latency sensitive ETS allows different traffic types to coexist without interruption in
264. in tacacsmethod tacacs Dell conf tacacs server host 256 1 1 2 key Force10 Dell conf Dell conf line vty 0 9 Dell config line vty login authentication tacacsmethod Dell config line vty Dell config line vty access class deny10 Dell config line vty end The same applies for RADIUS and line authentication. VTY MAC SA Filter Support Dell Networking OS supports MAC access lists, which permit or deny users based on their source MAC address. With this approach, you can implement a security policy based on the source MAC address. To apply a MAC ACL on a VTY line, use the same access class command as for IP ACLs. The following example shows how to deny incoming connections from subnet 10 0 0 0 without displaying a login prompt. Example of Configuring VTY Authorization Based on MAC ACL for the Line (Per MAC Address) Dell conf mac access list standard sourcemac Dell config std mac permit 00 00 5e 00 01 01 Dell config std mac deny any Dell conf Dell conf line vty 0 9 Dell config line vty access class sourcemac Dell config line vty end 16 Simple Network Management Protocol SNMP Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management inf
265. ing If the switch receives a multicast packet that has an IP address of a group it has not learned unregistered frame the switch floods that packet out of all ports on the VLAN To disable multicast flooding on all VLAN ports enter the no ip igmp snooping flood command in global configuration mode When multicast flooding is disabled unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs If there is no multicast router port in a VLAN unregistered multicast data traffic is dropped Displaying IGMP Information Use the show commands from the below table to display information on IGMP If you specify a group address or interface Enter a group address in dotted decimal format for example 225 0 0 0 e Enter an interface in one of the following formats tengigabitethernet slot port port channel port channel number Orvlan vlan number Displaying IGMP Information Command Output show ip igmp groups group adaress detail Displays information on IGMP groups detail interface group adaress detail show ip igmp interface interface Displays IGMP information on IGMP enabled interfaces show ip igmp snooping mrouter vlan vlan Displays information on IGMP enabled multicast router number mrouter interfaces clear ip igmp groups group adaress Clears IGMP information for group addresses and IGMP interface enabled interfaces show ip igmp groups Command Example
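As a sketch of how the commands in the table above can be invoked, assume IGMP snooping is enabled and a receiver on VLAN 10 has joined group 225.0.0.1 (the group address, VLAN ID, and interface below are illustrative values, not taken from this manual):
Dell# show ip igmp groups 225.0.0.1 detail
Dell# show ip igmp interface tengigabitethernet 0/4
Dell# show ip igmp snooping mrouter vlan 10
Dell# clear ip igmp groups 225.0.0.1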
266. ing the vlan tagged orvlan untagged commands in INTERFACE configuration mode See VLAN Membership for more information Managing the MAC Address Table On an Aggregator in VLT and PMUX modes you can manage the MAC address table by e Clearing the MAC Address Entries Displaying the MAC Address Table In the Standalone mode use the show cam mac stack unit 0 port set 0 command to view the mac addresses The Aggregator auto configures with support for Network Interface Controller NIC Teaming Clearing the MAC Address Entries Learned MAC addresses are entered in the table as dynamic entries which means that they are subject to aging For any dynamic entry if no packet arrives on the switch with the MAC address as the source or destination address within the timer period the address is removed from the table The default aging time is 1800 seconds in PMUX mode and 300 seconds in Standalone and VLT modes You can manually clear the MAC address table of dynamic entries To clear a MAC address table use the following command 1 Clear a MAC address table of dynamic entries EXEC Privilege mode clear mac address table dynamic address all interfaces vlan address deletes the specified entry e all deletes all dynamic entries e interface deletes all entries for the specified interface e vlan deletes all entries for the specified VLAN Layer 2 137 Displaying the MAC Address Table To display the MAC address
267. ink Failure Detection UFD 221 Dell conf uplink state group 16 show configuration uplink state group 16 no enable description test downstream disable links all downstream TengigabitEthernet 0 40 upstream TengigabitEthernet 0 41 upstream Port channel 8 Sample Configuration Uplink Failure Detection The following example shows a sample configuration of UFD on a switch router in which you configure as follows e Configure uplink state group 3 e Add downstream links Gigabitethernet 0 1 0 2 0 5 0 9 0 11 and 0 12 e Configure two downstream links to be disabled if an upstream link fails e Add upstream links Gigabitethernet 0 3 and 0 4 e Add a text description for the group e Verify the configuration with various show commands Example of Configuring UFD Dell conf uplink state group 3 Dell conf uplink state group 3 00 23 52 SSTKUNITO M CP SIFMGR 5 ASTATE UP Changed uplink state group Admin state to up Group 3 Dell conf uplink state group 3 downstream tengigabitethernet 0 1 2 5 9 11 12 Dell conf uplink state group 3 downstream disable links 2 Dell conf uplink state group 3 upstream tengigabitethernet 0 3 4 Dell conf uplink state group 3 description Testing UFD feature Dell conf uplink state group 3 show config uplink state group 3 description Testing UFD feature downstream disable links 2 downstream TenGigabitEthernet 0 1 2 5 9 11 12 upstream TenGigabitEt
268. interface vlan vlan id interface port type port slot interface port channel port channel number show fip snooping system show fip snooping vlan Output Displays information on FIP snooped sessions on all VLANs or a specified VLAN including the ENode interface and MAC address the FCF interface and MAC address VLAN ID FCoE MAC address and FCoE session ID number FC ID worldwide node name WWNN and the worldwide port name WWPN Information on NPIV sessions is also displayed Displays the FIP snooping status and configured FC MAP values Displays information on the ENodes in FIP snooped sessions including the ENode interface and MAC address FCF MAC address VLAN ID and FC ID Displays information on the FCFs in FIP snooped sessions including the FCF interface and MAC address FCF interface VLAN ID FC MAP value FKA advertisement period and number of ENodes connected Clears FIP snooping information on a VLAN for a specified FCoE MAC address ENode MAC adaress or FCF MAC address and removes the corresponding ACLs generated by FIP snooping Displays statistics on the FIP packets snooped on all interfaces including VLANs physical ports and port channels Clears the statistics on the FIP packets snooped on all VLANs a specified VLAN or a specified port interface Display information on the status of FIP snooping on the switch enabled or disabled including the number of FCoE VLANs FCFs ENodes and cu
269. ip address p address mask INTERFACE Configure an IP address and mask on the interface e ip address mask enter an address in dotted decimal format A B C D the mask must be in prefix format Ux ip address dhcp INTERFACE Acquire an IP address from the DHCP server To access the management interface from another LAN you must configure the management route command to point to the management interface There is only one management interface for the whole stack To display the routing table for a given port use the show ip route command from EXEC Privilege mode Configuring a Static Route for a Management Interface When an IP address used by a protocol and a static management route exists for the sample prefix the protocol route takes precedence over the static management route To configure a static route for the management port use the following command in CONFIGURATION mode Command Syntax Command Mode Purpose management route p address CONFIGURATION Assign a static route to point to the mask forwarding router address management interface or forwarding router ManagementEthernet slot port To view the configured static routes for the management port use the show ip management route command in EXEC privilege mode Dell show ip management route all Destination Gateway State 1 1 1 0 24 Lee se 29 0 Active 172 16 1 0 24 WAZ She 125 0 Active 172 31 1 0 24 ManagementEthernet 1 0 Connected Dell
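For instance, a minimal sketch of pointing two remote management subnets at the out-of-band interface and at a next-hop router, then verifying the result (the prefixes and the gateway address are illustrative, not taken from this manual):
Dell(conf)# management route 172.31.1.0/24 managementethernet 0/0
Dell(conf)# management route 172.16.1.0/24 10.1.1.254
Dell(conf)# end
Dell# show ip management route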
270. is capability an SSH client or server can use a VRF instance name to look up the correct routing table and establish a connection Enabling SSH Authentication by Password Authenticate an SSH client by prompting for a password when attempting to connect to the Dell Networking system This setup is the simplest method of authentication and uses SSH version 1 To enable SSH password authentication use the following command e Enable SSH password authentication CONFIGURATION mode ip ssh password authentication enable Example of Enabling SSH Password Authentication To view your SSH configuration use the show ip ssh command from EXEC Privilege mode Dell conf ip ssh server enable Please wait while SSH Daemon initializes done Dell conf tip ssh password authentication enable 170 Security for M I O Aggregator Dell sh ip ssh SSH server enabled Password Authentication enabled Hostbased Authentication disabled RSA Authentication disabled Using RSA Authentication of SSH The following procedure authenticates an SSH client based on an RSA key using RSA authentication This method uses SSH version 2 1 Onthe SSH client Unix machine generate an RSA key as shown in the following example Copy the public key id rsa pub to the Dell Networking system 3 Disable password authentication if enabled CONFIGURATION mode no ip ssh password authentication enable 4 Bind the public keys to RSA authentication
271. is 5 15 pm offset OPTIONAL Enter the number of minutes to add during the summer time period The range is from 1 to1440 The default is 60 minutes System Time and Date 211 Example of the clock summer time recurring Command Dell conf clock summer time pacific recurring Mar 14 2012 00 00 Nov 7 2012 00 00 Dell conf NOTE If you enter lt CR gt after entering the recurring command parameter and you have already set a one time daylight saving time date the system uses that time and date as the recurring setting Example of Clock Summer Time Recurring Parameters Dell conf clock summer time pacific recurring lt 1 4 gt Week number to start first Week number to start last Week number to start Eom Dell conf clock summer time pacific recurring Dell conf 4 212 System Time and Date 20 Uplink Failure Detection UFD Feature Description UFD provides detection of the loss of upstream connectivity and if used with network interface controller NIC teaming automatic recovery from a failed link A switch provides upstream connectivity for devices such as servers If a switch loses its upstream connectivity downstream devices also lose their connectivity However the devices do not receive a direct indication that upstream connectivity is lost because connectivity to the switch is still operational UFD allows a switch to associate downstream interfaces with upstream interfaces When upstream connectivity
272. itch type and the new unit a mismatch error message is displayed Resetting a Unit on a Stack Use the following reset commands to reload any of the member units or the standby in a stack If you try to reset the stack master the following error message is displayed Error Reset of master unit is not allowed To rest a unit on a stack use the following commands e Reload a stack unit from the master switch 196 Stacking EXEC Privilege mode reset stack unit unit number e Reset a stack unit when the unit is in a problem state EXEC Privilege mode reset stack unitunit number hard Removing an Aggregator from a Stack and Restoring Quad Mode To remove an Aggregator from a stack and return the 40GbE stacking ports to 4x10GbE quad mode follow the below steps 1 Disconnect the stacking cables from the unit The unit can be powered on or off and can be online or offline 2 Logon to the CLI and enter Global Configuration mode Login username Password FTOS enable FTOS configure 3 Configure the Aggregator to operate in standalone mode Stack unit 0 iom mode standalone CONFIGURATION 4 Logon to the CLI and reboot each switch one after another in as short a time as possible reload EXEC PRIVILEGE When the reload completes the base module ports comes up in 4x10GbE quad mode The switch functions in standalone mode but retains the running and startup configuration that was last synchronized b
273. ition the optic so it is in the correct position The optic has a key that prevents it from being inserted incorrectly e Insert the optic into the port until it gently snaps into place K NOTE 1 When you cable the ports be sure not to interfere with the airflow from the small vent holes above and below the ports Processing of Data Traffic The Dell Networking OS determines the module type that is plugged into the slot Based on the module type the software performs the appropriate tasks The FC Flex IO module encapsulates and decapsulates FC Flex IO Modules 263 the FCoE frames The module directly switches any non FCoE or non FIP traffic and only FCoE frames are processed and transmitted out of the Ethernet network When the external device sends FCoE data frames to the switch that contains the FC Flex IO module the destination MAC address represents one of the Ethernet MAC addresses assigned to FC ports Based on the destination address the FCoE header is removed from the incoming packet and the FC frame is transmitted out of the FC port The flow control mechanism is performed using per priority flow control to ensure that frame loss does not occur owing to congestion of frames Operation of the FIP Application The NPIV proxy gateway terminates the FIP sessions and responses to FIP messages The FIP packets are intercepted by the FC Flex IO module and sent to the Dell Networking OS for further analysis The FIP application r
274. ity assignment while flexible bandwidth allocation and the configured queue scheduling for a priority group is supported The following figure shows how ETS allows you to allocate bandwidth when different traffic types are classed according to 802 1p priority and mapped to priority groups Ingress Traffic Types Bandwidth Allocation by dotip Priorities in Egress Queues 7 6 LAN p 5 LAN E gt 4 SAN gt 3 SAN IPC e a 1 IPC A ETS Priority Groups LAN dotip 0 1 2 5 6 7 SAN dotip 3 IPC dotip 4 Figure 2 Enhanced Transmission Selection The following table lists the traffic groupings ETS uses to select multiprotocol traffic for transmission Table 2 ETS Traffic Groupings Traffic Groupings Description Priority group A group of 802 1p priorities used for bandwidth allocation and queue scheduling All 802 1p priority traffic in a group must have the same traffic handling requirements for latency and frame loss Group ID A 4 bit identifier assigned to each priority group The range is from 0 to 7 Group bandwidth Percentage of available bandwidth allocated to a priority group Group transmission selection algorithm TSA Type of queue scheduling a priority group uses In the Dell Networking OS ETS is implemented as follows e ETS supports groups of 802 1p priorities that have PFC enabled or disabled No bandwidth limit or no ETS processing e Bandwidth allocated by the ETS algorithm is made ava
275. ity group does not use its allocated bandwidth, the unused bandwidth is made available to other priority groups. percentage | strict-priority pfc on | off Restriction: You can enable PFC on a maximum of two priority queues. Repeat this step to configure PFC and ETS traffic handling for each priority group, for example: priority group 0 bandwidth 60 pfc off, priority group 1 bandwidth 20 pfc on, priority group 2 bandwidth 20 pfc on, priority group 4 strict priority pfc off Step Task Specify the priority group ID number to handle VLAN traffic for each dot1p class of service, 0 through 7. Leave a space between each priority group number. For example, priority pgid 0 0 0 1 2 4 4 4, where dot1p priorities 0, 1, and 2 are mapped to priority group 0; dot1p priority 3 is mapped to priority group 1; dot1p priority 4 is mapped to priority group 2; dot1p priorities 5, 6, and 7 are mapped to priority group 4. All priorities that map to the same egress queue must be in the same priority group. Important Points to Remember Command priority pgid dot1p0_group_num dot1p1_group_num dot1p2_group_num dot1p3_group_num dot1p4_group_num dot1p5_group_num dot1p6_group_num dot1p7_group_num Command Mode DCB MAP
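Putting the steps above together yields a DCB map along the following lines. This is a sketch only: the map name SAN_DCB_MAP is illustrative, and the hyphenated keywords (dcb-map, priority-group, strict-priority, priority-pgid) are assumed spellings of the commands described in the steps, using the same example bandwidth, PFC, and priority-to-group values given above.
Dell(conf)# dcb-map SAN_DCB_MAP
 priority-group 0 bandwidth 60 pfc off
 priority-group 1 bandwidth 20 pfc on
 priority-group 2 bandwidth 20 pfc on
 priority-group 4 strict-priority pfc off
 priority-pgid 0 0 0 1 2 4 4 4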
276. ization of the network for better storage traffic throughput SCSI optimization provides a means of monitoring iSCSI sessions and applying QoS policies on iSCSI traffic When enabled iSCSI optimization allows a switch to monitor snoop the establishment and termination of iSCSI connections The switch uses the snooped information to detect iSCSI sessions and connections established through the switch iSCSI optimization allows you to reduce deployment time and management complexity in data centers In a data center network Dell EqualLogic and Compellent iSCSI storage arrays are connected to a converged Ethernet network using the data center bridging exchange protocol DCBx through stacked and or non stacked Ethernet switches iSCSI session monitoring over virtual link trunking VLT synchronizes the iSCSI session information between the VLT peers allowing session information to be available in both VLT peers iSCSI optimization functions as follows e Auto detection of EqualLogic storage arrays the switch detects any active EqualLogic array directly attached to its ports e Manual configuration to detect Compellent storage arrays where auto detection is not supported e Automatic configuration of switch ports after detection of storage arrays e f you configured flow control iSCSI uses the current configuration If you did not configure flow control iSCSI auto configures flow control e iSCSI monitoring sessions the switch
277. k occurs If a member unit fails in a ring topology traffic is re routed over existing stack links The following syslog messages are generated when a member unit fails Dell May 31 01 46 17 SSTKUNIT3 M CP IPC 2 STATUS target stack unit 4 not responding ay 31 01 46 17 STKUNIT3 M CP CHMGR 2 STACKUNIT DOWN Major alarm Stack unit 4 down IPC timeout Dell May 31 01 46 17 SSTKUNIT3 M CP SIFMGR 1 DEL PORT Removed port Te 4 1 32 41 48 Fo 4 49 53 Dell May 31 01 46 18 SSTKUNIT5 S CP SIFMGR 1 DEL PORT Removed port Te 4 1 32 41 48 Fo 4 49 53 Unplugged Stacking Cable e Problem A stacking cable is unplugged from a member switch The stack loses half of its bandwidth from the disconnected switch Resolution Intra stack traffic is re routed on a another link using the redundant stacking port on the switch A recalculation of control plane and data plane connections is performed Master Switch Fails Problem The master switch fails due to a hardware fault software crash or power loss e Resolution A failover procedure begins 1 Keep alive messages from the Aggregator master switch time out after 60 seconds and the switch is removed from the stack 2 The standby switch takes the master role Data traffic on the new master switch is uninterrupted Protocol traffic is managed by the control plane 3 Amember switch is elected as the new standby Data traffic on the new stand
278. l SNMPv2 SMI mib 2 47 L 2 15 STRING Unit 0 Port 12 10G Level SNMPv2 SMI mib 2 47 1 2 16 STRING Unit 0 Port 13 10G Level SNMPv2 SMI mib 2 47 L 2 17 STRING Unit 0 Port 14 10G Level SNMPv2 SMI mib 2 47 1 2 18 STRING Unit 0 Port 15 10G Level SNMPv2 SMI mib 2 47 1 2 19 STRING Unit 0 Port 16 10G Level SNMPv2 SMI mib 2 47 L 2 20 STRING Unit 0 Port 17 10G Level SNMPv2 SMI mib 2 47 1 2 21 STRING Unit 0 Port 18 10G Level SNMPv2 SMI mib 2 47 L 2 22 STRING Unit 0 Port 19 10G Level SNMPv2 SMI mib 2 47 1 2 23 STRING Unit 0 Port 20 10G Level SNMPv2 SMI mib 2 47 L 2 24 STRING Unit 0 Port 21 10G Level SNMPv2 SMI mib 2 47 1 2 25 STRING Unit 0 Port 22 10G Level SNMPv2 SMI mib 2 47 1 2 26 STRING Unit 0 Port 23 10G Level SNMPv2 SMI mib 2 47 L 2 27 STRING Unit 0 Port 24 10G Level SNMPv2 SMI mib 2 47 1 2 28 STRING Unit 0 Port 25 10G Level SNMPv2 SMI mib 2 47 L 2 29 STRING Unit 0 Port 26 10G Level SNMPv2 SMI mib 2 47 L 2 30 STRING Unit 0 Port 27 10G Level SNMPv2 SMI mib 2 47 L 2 31 STRING Unit 0 Port 28 10G Level SNMPv2 SMI mib 2 47 1 2 32 STRING Unit 0 Port 29 10G Level SNMPv2 SMI mib 2 47 1 2 33 STRING Unit 0 Port 30 10G Level SNMPv2 SMI mib 2 47 L 2 34 STRING Unit 0 Port 31 10G Level SNMPv2 SMI mib 2 47 1 2 35 STRING Unit 0 Port 32 10G Level SNMPv2 SMI mib 2 47 1 2 36 STRING 40G QS
279. l conf lldp no hello Dell conf 11dp show config protocol lldp Dell conf 11dp Configuring a Time to Live The information received from a neighbor expires after a specific amount of time measured in seconds called a time to live TTL The TTL is the product of the LLDPDU transmit interval hello and an integer called a multiplier The default multiplier is 4 which results in a default TTL of 120 seconds e Adjust the TTL value CONFIGURATION mode or INTERFACE mode multiplier e Return to the default multiplier value CONFIGURATION mode or INTERFACE mode no multiplier Example of the multiplier Command to Configure Time to Live R1 conf lldp fshow config l protocol lldp advertise dotl tlv port protocol vlan id port vlan id advertise dot3 tlv max frame size advertise management tlv system capabilities system description PMUX Mode of the IO Aggregator 243 no disable R1 conf 11dp fmultiplier lt 2 10 gt Multiplier default 4 R1 conf 11dp multiplier 5 R1 conf 11dp show config protocol lldp advertise dotl tlv port protocol vlan id port vlan id advertise dot3 tlv max frame size advertise management tlv system capabilities system description multiplier 5 no disable R1 conf 11dp no multiplier R1 conf lldp 4show config protocol lldp advertise dotl tlv port protocol vlan id port vlan id advertise dot3 tlv max frame size advertise management tlv system
280. l other VLANs. NOTE: The table contains none of the other information provided by the show vlan command, such as port speed or whether the ports are tagged or untagged. NOTE: The 802.1q Q-BRIDGE MIB defines VLANs with regard to 802.1d, as 802.1d itself does not define them. As a switchport must belong to a VLAN (the default VLAN or a configured VLAN), all MAC addresses learned on a switchport are associated with a VLAN. For this reason, the Q-BRIDGE MIB is used for MAC address queries. Moreover, specific to MAC address queries, the MAC address indexes dot1dTpFdbTable only for a single forwarding database, while dot1qTpFdbTable has two indices (VLAN ID and MAC address) to allow for multiple forwarding databases and to account for the same MAC address being learned on multiple VLANs. The VLAN ID is added as the first index so that MAC addresses are read by VLAN, sorted lexicographically. The MAC address is part of the OID instance, so in this case lexicographic order is according to the most significant octet. Table 14 MIB Objects for Fetching Dynamic MAC Entries in the Forwarding Database MIB Object OID MIB Description dot1dTpFdbTable 1.3.6.1.2.1.17.4.3 Q-BRIDGE MIB List the learned unicast MAC addresses on the default VLAN dot1qTpFdbTable 1.3.6.1.2.1.17.7.1.2.2 Q-BRIDGE MIB List the learned unicast MAC addresses on non-default VLANs dot3aCurAggFdbTable 1.3.6.1.4.1.6027.3.2.1.1.5 F10-LINK-AGGREGATION-MIB List the learned MAC
281. lan Fabric Intf FC Flex IO Modules Description Number of downstream ENodes connected to a fabric over the MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module NPG MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module Ethernet interface slot port to which a server CNA is connected Worldwide port name WWPN of a server CNA port VLAN ID of the dedicated VLAN used to transmit FCOE traffic to and from the fabric Fabric facing Fibre Channel port slot port on which FC traffic is transmitted to the specified fabric 283 Field Description Fabric Map Name of the FCoE map containing the FCoE FC configuration parameters for the server CNA fabric connection Login Method Method used by the server CNA to log in to the fabric for example FLOGI ENode logged in using a fabric login FLOGI FDISC ENode logged in using a fabric discovery FDISC Status Operational status of the link between a server CNA port and a SAN fabric Logged In Server has logged in to the fabric and is able to transmit FCoE traffic show npiv devices Command Example Dell show npiv devices ENode 0 ENode MAC OO TOs Ue 1 294 21 ENode Intf Te 0 12 FCF MAC 5c f9 dd ef 10 c8 Fabric Intf Fc 0 5 FCoE Vlan 1003 Fabric Map fid 1003 ENode WWPN 20 01 00 10 18 f1 94 20 ENode WWNN 20 00 00 10 18 1 94 21 FCoE MAC 0e fc 03 01 02 01 FC ID 01 02 01 LoginMeth
282. larm is generated if over utilization of the port channel occurred The value Active is displayed for this field Interface Slot and port number and the type of the member interface of the port channel Line Protocol Indicates whether the interface is administratively up or down Utilization In Percent Traffic usage in percentage of the packets processed by the particular member interface You can also use the show running configuration interface port channel command in EXEC Privilege mode to view whether the mechanism to evaluate the utilization of the member interfaces of the LAG bundle is enabled The following sample output illustrates the portion of this show command Dell show running config int port channel 1 interface Port channel 1 mtu 12000 portmode hybrid switchport vlt peer lag port channel 1 no shutdown link bundle monitor enable 132 Link Aggregation Verifying LACP Operation and LAG Configuration To verify the operational status and configuration of a dynamically created LAG and LACP operation on a LAG on an Aggregator enter the show interfaces port channel port channel number and show lacp port channel number commands The show outputs in this section for uplink LAG 128 and server facing LAG 1 refer to the LACP Operation on an Aggregator figure show interfaces port channel 128Command Example Dell show interfaces port channel 128 Port channel 128 is up line protocol is up Created by LACP proto
283. led causing flow control to be enabled on all interfaces EQL detection and enabling iscsi profile compellent on an interface may cause some automatic configurations to occur like jumbo frames on all ports and no storm control and spanning tree port fast on the port of detection Displaying iSCSI Optimization Information To display information on iSCSI optimization use the show commands detailed in the below table Table 7 Displaying iSCSI Optimization Information Command Output show iscsi Displays the currently configured SCSI settings show iscsi sessions Displays information on active iSCSI sessions on the switch that have been established since the last reload show iscsi sessions detailed session isid Displays detailed information on active iSCSI sessions on the switch To display detailed information on specified iSCSi session enter the session s iSCSi ID show run iscsi Displays all globally configured non default SCSI settings in the current Dell Networking OS session show iscsi Command Example Dell show iscsi iSCSI is enabled iSCSI session monitoring is enabled iSCSI COS dotlp is 4 no remark Session aging time 10 Maximum number of connections is 256 TCP Port Target IP Address 3260 860 show iscsi sessions Command Example Dell show iscsi sessions Session 0 Target iqn 2001 05 com equallogic 0 8a0906 0e70c2002 10a0018426a48c94 iom010 Initiator iqn 1991 05 com microsoft win x918v27yajg
284. lidation of Interface Ranges You can avoid specifying spaces between the range of interfaces separated by commas that you configure by using the interface range command For example if you enter a list of interface ranges such as interface range fo 2 0 1 te 10 0 gi 3 0 fa 0 0 this configuration is considered valid The comma separated list is not required to be separated by spaces in between the ranges You can associate multicast MAC or hardware addresses to an interface range and VLANs by using the mac address table static multicast mac address vlan vlan id output range interface command Interfaces 115 9 SCSI Optimization An Aggregator enables internet small computer system interface iSCSI optimization with default SCSI parameter settings Default iSCSI Optimization Values and is auto provisioned to support Detection and Auto configuration for Dell EqualLogic Arrays SCSI Optimization Operation To display information on iSCSI configuration and sessions use show commands SCSI optimization enables quality of service QoS treatment for SCSI traffic SCSI Optimization Overview iSCSI is a TCP IP based protocol for establishing and managing connections between IP based storage devices and initiators in a storage area network SAN SCSI optimization enables the network switch to auto detect Dell s iSCSI storage arrays and triggers self configuration of several key network configurations that enables optim
285. lied By default PFC is not applied on specific 802 1p priorities ETS assigns equal bandwidth to each 802 1p priority As a result PFC and lossless port queues are disabled on 802 1p priorities and all priorities are mapped to the same priority queue and equally share port bandwidth e To change the ETS bandwidth allocation configured for a priority group in a DCB map do not modify the existing DCB map configuration Instead create a new DCB map with the desired PFC and ETS settings and apply the new map to the interfaces to override the previous DCB map settings Then delete the original dot1p priority to priority group mapping If you delete the dot1p priority to priority group mapping no priority pgid command before you apply the new DCB map the default PFC and ETS parameters are applied on the interfaces This change may create a DCB mismatch with peer DCB devices and interrupt the network operation Applying a DCB Map on Server facing Ethernet Ports You can apply a DCB map only on a physical Ethernet interface and can apply only one DCB map per interface Step 1 Task Enter CONFIGURATION mode on a server facing port or port channel to apply a DCB map You cannot apply a DCB map on a port channel However you can apply a DCB map on the ports that are members of the port channel Apply the DCB map on an Ethernet port or port channel The port is configured with the PFC and ETS settings in the DCB map for example
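A minimal sketch of applying a map to a server-facing port follows; the interface number, the map name SAN_DCB_MAP, and the interface-level dcb-map keyword are illustrative assumptions rather than text reproduced from this manual:
interface TenGigabitEthernet 0/1
 dcb-map SAN_DCB_MAP
 no shutdown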
286. list Dell Networking OS does not apply the next method list Configuring AAA Authentication Login Methods To configure an authentication method and method list use the following commands Dell Networking OS Behavior If you use a method list on the console port in which RADIUS or TACACS is the last authentication method and the server is not reachable Dell Networking OS allows access even though the username and password credentials cannot be verified Only the console port behaves this Security for M I O Aggregator 157 way and does so to ensure that users are not locked out of the system if network wide issue prevents access to these servers 1 Define an authentication method list method list name or specify the default CONFIGURATION mode aaa authentication login method list name default methodl method4 The default method list is applied to all terminal lines Possible methods are e enable use the password you defined using the enable secret or enable password command in CONFIGURATION mode line use the password you defined using the password command in LINE mode e local use the username password database defined in the local configuration e none no authentication e radius use the RADIUS servers configured with the radius server host command e tacacs use the TACACS servers configured with the tacacs server host command 2 Enter LINE mode CONFIGURATION mode line aux 0 console 0 vty
287. ll Networking OS version use the show version command To download a Dell Networking OS version go to http support dell com Installation Site Preparation Before installing the switch or switches make sure that the chosen installation location meets the following site requirements e Clearance There is adequate front and rear clearance for operator access Allow clearance for cabling power connections and ventilation e Cabling The cabling is routed to avoid sources of electrical noise such as radio transmitters broadcast amplifiers power lines and fluorescent lighting fixtures e Ambient Temperature The ambient switch operating temperature range is 10 to 35 C 50 to 95 F 1 Decrease the maximum temperature by 1 C 1 8 F per 300 m 985 ft above 900 m 2955 ft 2 Relative Humidity The operating relative humidity is 8 percent to 85 percent non condensing with a maximum humidity gradation of 10 percent per hour Unpacking the Switch Package Contents When unpacking each switch make sure that the following items are included e One Dell Networking MXL 10 40GbE Switch IO Module e One USB type A to DB 9 female cable e Getting Started Guide e Safety and Regulatory Information e Warranty and Support Information e Software License Agreement Unpacking Steps Before unpacking the switch inspect the container and immediately report any evidence of damage Place the container on a clean fla
288. location to dotip priority traffic in all ETS priority groups is 100 dot1p priority traffic on the switch is scheduled according to the default dot1p queue mapping dotip priorities within the same queue should have the same traffic properties and scheduling method A priority group consists of 802 1p priority values that are grouped together for similar bandwidth allocation and scheduling and that share the same latency and loss requirements All 802 1p priorities mapped to the same queue should be in the same priority group By default All 802 1p priorities are grouped in priority group 0 100 of the port bandwidth is assigned to priority group 0 The complete bandwidth is equally assigned to each priority class so that each class has 12 to 13 The maximum number of priority groups supported in ETS output policies on an interface is equal to the number of data queues 4 on the port The 802 1p priorities in a priority group can map to multiple queues A DCB output policy is created to associate a priority group with an ETS output policy with scheduling and bandwidth configuration and applied on egress ports The ETS configuration associated with 802 1p priority traffic in a DCB output policy is used in DCBx negotiation with ETS peers When an ETS output policy is applied to an interface ETS configured scheduling and bandwidth allocation take precedence over any auto configured settings in the QoS output policie
289. loyment needs To enable a new operational mode reload the switch Standalone mode stack unit unit iom mode standalone This is the default mode for IOA It is a fully automated zero touch mode that allows you to configure VLAN memberships Supported in CMC Stacking mode stack unit unit iom mode stacking Select this mode to stack up to six IOA stack units as a single logical switch The stack units can be in the same or on different chassis This is a low touch mode where all configuration except VLAN membership is automated To enable VLAN you must configure it In this operational mode base module links are dedicated to stacking VLT mode stack unit unit iom mode vlt Select this mode to multi home server interfaces to different IOA modules This is a low touch mode where all configuration except VLAN membership is automated To enable VLAN you must configure it In this mode port 9 links are dedicated to VLT interconnect Programmable MUX mode stack unit unit iom mode programmable mux Select this mode to configure PMUX mode CLI commands Before You Start 15 Default Settings The I O Aggregator provides zero touch configuration with the following default configuration settings default user name root password calvin VLAN vlan1 and IP address for in band management DHCP IP address for out of band OOB management DHCP read only SNMP community name public broadcast storm control enabled in Standalone
290. lt VLAN To fetch the MAC addresses learned on non default VLANs use the object dotlqTpFdbTable The instance number is the VLAN number concatenated with the decimal conversion of the MAC address SY SEMA SSH ss SSS Ss SS Sa Stes sro Tas Dell show mac address table Vlanld Mac Address Type Interface State 1000 00 01 e8 06 95 ac Dynamic Tengig 0 7 Active A A Query from Management Station gt snmpwalk v 2c c techpubs 10 11 131 162 1 3 6 1 2 1 17 7 1 2 2 1 Example of Fetching MAC Addresses Learned on a Port Channel Use dot3aCurAggFdbTable to fetch the learned MAC address of a port channel The instance number is the decimal conversion of the MAC address concatenated with the port channel number SY St em 2 2 5 E AA Dell conf do show mac address table Vlanld ac Address Type Interface State 1000 00 01 e8 06 95 ac Dynamic Po 1 Active aii Query from Management Station gt snmpwalk v 2c c techpubs 10 11 131 162 1 3 6 1 4 1 6027 3 2 1 1 5 SNMPv2 SMI enterprises 6027 3 2 1 1 5 1 1 1000 0 1 232 6 149 172 1 INTEGER 1000 SNMPv2 SMI enterprises 6027 3 2 1 1 5 1 2 1000 0 1 232 6 149 172 1 Hex STRING 00 01 E8 06 95 AC SNMPv2 SMI enterprises 6027 3 2 1 1 5 1 3 1000 0 1 232 6 149 172 1 INTEGER 1 SNMPv2 SMI enterprises 6027 3 2 1 1 5 1 4 1000 0 1 232 6 149 172 1 INTEGER 1 Deriving Interface Indices The Dell Ne
291. ly configured IP route using the no ip route command the management route is reinstalled Manually delete management routes added by the DHCP client e To reinstall management routes added by the DHCP client that is removed or replaced by the same statically configured management routes release the DHCP IP address and renew it on the management interface e Management routes added by the DHCP client have higher precedence over the same statically configured management route Static routes are not removed from the running configuration if a dynamically acquired management route added by the DHCP client overwrites a static management route e Management routes added by the DHCP client are not added to the running configuration NOTE Management routes added by the DHCP client include the specific routes to reach a DHCP server in a different subnet and the management route DHCP Client on a VLAN The following conditions apply on a VLAN that operates as a DHCP client The default VLAN 1 with all ports auto configured as members is the only L5 interface on the Aggregator e When the default management VLAN has a DHCP assigned address and you reconfigure the default VLAN ID number the Aggregator Sends a DHCP release to the DHCP server to release the IP address Sends a DHCP request to obtain a new IP address The IP address assigned by the DHCP server is used for the new default management VLAN 66 Dynamic Host Configuration Protoc
292. m control multicast packets per second in command from INTERFACE mode 3 To configure the percentage of unknown unicast traffic allowed on an interface use the storm control unknown unicast packets per second in command from INTERFACE mode 208 Broadcast Storm Control 19 System Time and Date The Aggregator auto configures the hardware and software clocks with the current time and date If necessary you can manually set and maintain the system time and date using the CLl commands described in this chapter e Setting the Time for the Software Clock Setting the Time Zone Setting Daylight Savings Time Setting the Time for the Software Clock You can change the order of the month and day parameters to enter the time and date as time day month year You cannot delete the software clock The software clock runs only when the software is up The clock restarts based on the hardware clock when the switch reboots To set the software clock use the following command Set the system software clock to the current time and date EXEC Privilege mode clock set time month day year time Enter the time in hours minutes seconds For the hour variable use the 24 hour format for example 17 15 00 is 5 15 pm month Enter the name of one of the 12 months in English You can enter the name of a day to change the order of the display to time day month year day Enter the number of the day The range is from
293. m the running config enter the no command then the original command For example to delete an IP address configured on an interface use the no ip address ip address command K NOTE Use the help or command as described in Obtaining Help Example of Viewing Disabled Commands Dell conf interface managementethernet 0 0 Dell conf if ma 0 0 ip address 192 168 5 6 16 Dell conf if ma 0 0 Dell conf if ma 0 0 Dell conf if ma 0 0 interface ManagementEthernet 0 0 ip address 192 168 5 6 16 no shutdown Dell conf if ma 0 0 Dell conf if ma 0 0 Dell conf if ma 0 0 Dell conf if ma 0 0 interface ManagementEthernet 0 0 no ip address no shutdown Dell conf if ma 0 0 show config no ip address show config Obtaining Help Obtain a list of keywords and a brief functional description of those keywords at any CLI mode using the or help command e To list the keywords available in the current mode enter at the prompt or after a keyword e Enter after a prompt lists all of the available keywords The output of this command is the same for the help command Dell start Start Shell capture Capture Packet cd Change current directory clear Reset functions clock Manage the system clock configure Configuring from terminal copy Copy from one file to another More e Enter after a partial keyword lists all of the keywords that begin with the specified letters Dell conf cl
294. mber 4 Assign the method list to the terminal line LINE mode login authentication method list name default Example of a Failed Authentication To view the configuration use the show config in LINE mode or the show running config tacacs command in EXEC Privilege mode If authentication fails using the primary method Dell Networking OS employs the second method or third method if necessary automatically For example if the TACACS server is reachable but the server key is invalid Dell Networking OS proceeds to the next authentication method In the following example the TACACS is incorrect but the user is still authenticated by the secondary method First bold line Server key purposely changed to incorrect value Second bold line User authenticated using the secondary method Dell conf Dell conf tdo show run aaa aaa authentication enable default tacacs enable aaa authentication enable LOCAL enable tacacs aaa authentication login default tacacs local aaa authentication login LOCAL local tacacs aaa authorization exec default tacacs none aaa authorization commands 1 default tacacs none aaa authorization commands 15 default tacacs none aaa accounting exec default start stop tacacs aaa accounting commands 1 default start stop tacacs aaa accounting commands 15 default start stop tacacs Dell conf Dell conf tdo show run tacacs l tacacs server key 7 d05206c308f4d35b tacacs server host 10 10 10
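As a sketch of step 4, the named method list (LOCAL in the example above) is applied to the virtual terminal lines; the line range shown is illustrative:
Dell(conf)#line vty 0 9
Dell(config-line-vty)#login authentication LOCAL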
295. med list of accounting methods and then applying that list to various virtual terminal line (VTY) lines.
Enabling AAA Accounting
The aaa accounting command allows you to create a record for any or all of the accounting functions monitored.
To enable AAA accounting, use the following command:
• Enable AAA accounting and create a record for monitoring the accounting function. CONFIGURATION mode. aaa accounting {system | exec | command level} {default | name} {start-stop | wait-start | stop-only} tacacs+
The variables are:
• system: sends accounting information of any other AAA configuration.
• exec: sends accounting information when a user has logged in to EXEC mode.
• command level: sends accounting of commands executed at the specified privilege level.
• default | name: enter the name of a list of accounting methods.
• start-stop: use for more accounting information; sends a start-accounting notice at the beginning of the requested event and a stop-accounting notice at the end.
• wait-start: ensures that the TACACS+ security server acknowledges the start notice before granting the user's process request.
• stop-only: use for minimal accounting; instructs the TACACS+ server to send a stop-record accounting notice at the end of the requested user process.
• tacacs+: designates the security service. Currently, Dell Networking OS supports only TACACS+.
Suppressing AAA Accounting for Null Username Sessions
When you a
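For example, a configuration that records start and stop notices for EXEC sessions and for privilege-level 15 commands could look like the following sketch; adjust the list names and levels to your environment:
Dell(conf)#aaa accounting exec default start-stop tacacs+
Dell(conf)#aaa accounting commands 15 default start-stop tacacs+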
296. Troubleshooting a Switch Stack..............................................201
Upgrading a Switch Stack.......................................................205
Upgrading a Single Stack Unit..................................................206
18 Broadcast Storm Control.....................................................208
Disabling Broadcast Storm Control..............................................208
Displaying Broadcast Storm Control Status......................................208
Configuring Storm Control......................................................208
19 System Time and Date........................................................209
Setting the Time for the Software Clock........................................209
Setting the Time Zone..........................................................209
Setting Daylight Savings Time..................................................210
Setting Daylight Saving Time Once..............................................210
Setting Recurring Daylight Saving Time.........................................211
20 Uplink Failure Detection (UFD)..............................................213
Feature Description............................................................213
How Uplink Failure Detection Works.............................................214
UFD and NIC Teaming............................................................216
Important Points to Remember...................................................216
Configuring Uplink Failure Detection (PMUX mode)...............................217
Clearing a UFD-Disabled Interface (in PMUX mode)...............................218
Displaying Uplink Failure Detection
297. monitored port MD and the destination port is the monitoring port MO Configuring Port Monitoring To configure port monitoring use the following commands 1 Verify that the intended monitoring port has no configuration other than no shutdown as shown in the following example EXEC Privilege mode show interface 2 Create a monitoring session using the command monitor session from CONFIGURATION mode as shown in the following example CONFIGURATION mode monitor session 3 Specify the source and destination port and direction of traffic as shown in the following example MONITOR SESSION mode source K NOTE By default all uplink ports are assigned to port channel LAG 128 and the destination port in a port monitoring session must be an uplink port When you configure the destination port using the source command the destination port is removed from LAG 128 To display the uplink ports currently assigned to LAG 128 enter the show lag 128 command Example of Viewing Port Monitoring Configuration To display information on currently configured port monitoring sessions use the show monitor session command from EXEC Privilege mode Dell conf monitor session 0 Dell conf mon sess 0 source tengig 1 1 dest tengig 1 42 direction rx Dell conf mon sess 0 exit Dell conf do show monitor session 0 SessionID Source Destination Direction Mode Type 0 TenGig 1 1 TenGig 1 42 rx interface Port based Dell conf
298. mum of two priority queues on an interface. Enabling PFC for dot1p priorities makes the corresponding port queue lossless. The sum of all allocated bandwidth percentages in all groups in the DCB map must be 100%. Strict-priority traffic is serviced first. Afterwards, bandwidth allocated to other priority groups is made available and allocated according to the specified percentages. If a priority group does not use its allocated bandwidth, the unused bandwidth is made available to other priority groups.
Example:
priority-group 0 bandwidth 60 pfc off
priority-group 1 bandwidth 20 pfc on
priority-group 2 bandwidth 20 pfc on
priority-group 4 strict-priority pfc off
(These settings are entered in DCB MAP mode, which you reach by creating the map with the dcb-map name command in CONFIGURATION mode.)
Repeat this step to configure PFC and ETS traffic handling for each priority group. Command: priority-group group_num {bandwidth percentage | strict-priority} pfc {on | off}. Command Mode: DCB MAP.
Specify the dot1p priority-to-priority group mapping for each priority. Priority group range: 0 to 7. All priorities that map to the same queue must be in the same priority group. Leave a space between each priority group number. For example, priority-pgid 0 0 0 1 2 4 4 4, in which priority group 0 maps to dot1p priorities 0, 1, and 2; priority group 1 maps to dot1p priority 3; priority group 2 maps to dot1p priority 4; and priority group 4 maps to dot1p priorities 5, 6, and 7. Command: priority-pgid dot1p0_group_num ... dot1p7_group_num. Command Mode: DCB MAP.
Important Points to Remember
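Putting these steps together, a DCB map reflecting the example values above could be entered as follows; the map name and prompt string are illustrative:
Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 1 bandwidth 20 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 2 bandwidth 20 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 4 strict-priority pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-pgid 0 0 0 1 2 4 4 4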
299. n the following software features are supported on VLTi: link layer discovery protocol (LLDP), flow control, port monitoring, jumbo frames, and data center bridging (DCB).
When you enable the VLTi link, the link between the VLT peer switches is established only if the following configured information is true on both peer switches:
• the VLT system MAC address matches, and
• the VLT unit-id is not identical.
NOTE: If you configure the VLT system MAC address or VLT unit-id on only one of the VLT peer switches, the link between the VLT peer switches is not established. Each VLT peer switch must be correctly configured to establish the link between the peers.
If the link between the VLT peer switches is established, changing the VLT system MAC address or the VLT unit-id causes the link between the VLT peer switches to become disabled. However, removing the VLT system MAC address or the VLT unit-id may disable the VLT ports if you happen to configure the unit-id or system MAC address on only one VLT peer at any time.
If the link between VLT peer switches is established, any change to the VLT system MAC address or unit-id fails if the change creates a mismatch, by causing the VLT unit-id to be the same on both peers and/or the VLT system MAC address to differ between the peers.
If you replace a VLT peer node, preconfigure the switch with the VLT system MAC address, unit-id, and other VLT parameters before
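As a sketch, the matching system MAC address and distinct unit-ids are configured under the VLT domain on each peer; the domain number and MAC value here are illustrative:
Dell_Peer1(conf)#vlt domain 1
Dell_Peer1(conf-vlt-domain)#system-mac mac-address 00:01:e8:00:00:01
Dell_Peer1(conf-vlt-domain)#unit-id 0
On the second peer, configure the same system-mac value but unit-id 1.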
300. n Timeouts Session failures due to Hardware Config Vlan Requests Vlan Notifications ulticast Discovery Solicits nicast Discovery Solicits LOGI DISC LOGO node Keep Alive N Port Keep Alive ulticast Discovery Advertisement Unicast Discovery Advertisement FLOGI Accepts FLOGI Rejects FDISC Accepts F F F SW on np np c DISC Rejects LOGO Accepts LOGO Rejects CVL FCF Discovery Timeouts S VN Port Session Timeouts ession failures due to Hardware Config show fip snooping statistics port channel Command Example Vlan Requests Vlan Notifications ulticast Discovery Solicits nicast Discovery Solicits LOGI DISC LOGO node Keep Alive N Port Keep Alive ulticast Discovery Advertisement Unicast Discovery Advertisement FLOGI Accepts FLOGI Rejects FDISC Accepts F F F lt Ega DISC Rejects LOGO Accepts LOGO Rejects CVL FCF Discovery Timeouts S VN Port Session Timeouts ession failures due to Hardware Config 3349 4437 N N 4000 00 0 01 FO or Dell show fip snooping statistics int tengigabitethernet 0 11 0 0 0 0 0 0 0 0 00 0 0000000 roNnNN O0o0o0oo0oo0o0o0onNo Dell show fip snooping statistics interface port channel 22 FIP Snooping show fip snooping statistics Command Description Field Number of V Number of V lan Requests LAN Notifications Number of Multicast Discovery Solicits Number of U Solicits Number of
301. n each mode and you can limit user access to modes using privilege levels In Dell Networking OS after you enable a command it is entered into the running configuration file You can view the current configuration for the whole system or for a particular CLI mode To save the current configuration copy the running configuration to another location For more information refer to Save the Running Configuration K NOTE You can use the chassis management controller CMC out of band management interface to access and manage an Aggregator using the Dell Networking OS command line reference For more information about how to access the CMC to configure an Aggregator refer to the Dell Chassis Management Controller CMC User s Guide on the Dell Support website at http support dell com support edocs systems pem en index htm Accessing the Command Line Access the command line through a serial console port or a Telnet session Logging into the System using Telnet When the system successfully boots enter the command line in EXEC mode Logging into the System using Telnet telnet 172 31 1 53 Trying 172 31 1 53 Connected to 172 31 1 53 Escape character is Login username Password Dell CLI Modes Different sets of commands are available in each mode A command found in one mode cannot be executed from another mode except for EXEC mode commands with a preceding do command refer to the do Command section The
302. n on FC Flex IO Modules The Fibre Channel FC Flex IO module is supported on the MXL 10 40GbE Switch and M I O Aggregator IOA The MXL and IOA switches installed with the FC Flex IO module function as a top of rack edge switch that supports Converged Enhanced Ethernet CEE traffic Fibre Channel over Ethernet FCoE for storage Interprocess Communication IPC for servers and Ethernet local area network LAN IP cloud for data as well as FC links to one or more storage area network SAN fabrics The N port identifier virtualization NPIV proxy gateway NPG provides FCoE FC bridging capability on the MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module This chapter describes how to configure and use an NPIV proxy gateway on an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module in a SAN NPIV Proxy Gateway Operations and Capabilities Benefits of an NPIV Proxy Gateway The MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module functions as a top of rack edge switch that supports Converged Enhanced Ethernet CEE traffic FCoE for storage Interprocess Communication IPC for servers and Ethernet LAN IP cloud for data as well as Fibre Channel FC links to one or more SAN fabrics Using an NPIV proxy gateway NPG helps resolve the following problems in a storage area network Fibre Channel storage networks typically consist of servers connected to edge switches which
303. n the unit(s) will be power-cycled immediately.
* Proceed with caution!
***************************************************************************
Proceed with factory settings? Confirm [yes/no]: yes
-Restore status-
Unit    Nvram Config
0       Success
Debugging and Diagnostics 307
Power-cycling the unit(s).
308 Debugging and Diagnostics
25 Standards Compliance
This chapter describes standards compliance for Dell Networking products.
NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org website. Click "Browse and search IETF documents", enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
IEEE Compliance
The following is a list of IEEE compliance:
802.1AB LLDP
802.1D Bridging
802.1p L2 Prioritization
802.1Q VLAN Tagging, Double VLAN Tagging, GVRP
802.3ac Frame Extensions for VLAN Tagging
802.3ad Link Aggregation with LACP
802.3ae 10 Gigabit Ethernet (10GBASE-W, 10GBASE-X)
802.3ak 10 Gigabit Ethernet (10GBASE-CX4)
802.3i Ethernet (10BASE-T)
802.3u Fast Ethernet (100BASE-FX, 100BASE-TX)
802.3x Flow Control
802.1Qaz Enhanced Transmission Selection
802.1Qbb Priority-based Flow Control
ANSI/TIA-1057 LLDP-MED
MTU 12,000 bytes
R
304. nal Lines To enable AAA accounting with a named method list for a specific terminal line where com15 and execAcct are the method list names use the following commands e Configure AAA accounting for terminal lines CONFIG LINE VTY mode accounting commands 15 com15 accounting exec execAcct Example of Enabling AAA Accounting with a Named Method List Dell config line vty accounting commands 15 coml5 Dell config line vty accounting exec execAcct Monitoring AAA Accounting Dell Networking OS does not support periodic interim accounting because the periodic command can cause heavy congestion when many users are logged in to the network No specific show command exists for TACACS accounting To obtain accounting records displaying information about users currently logged in use the following command e Step through all active sessions and print all the accounting records for the actively accounted functions CONFIGURATION mode or EXEC Privilege mode show accounting Example of the show accounting Command for AAA Accounting Dell show accounting Active accounted actions on tty2 User admin Priv 1 166 Security for M I O Aggregator Task ID 1 EXEC Accounting record 00 00 39 Elapsed service shell Active accounted actions on tty3 User admin Priv 1 Task ID 2 EXEC Accounting record 00 00 26 Elapsed service shell Dell Monitoring TACACS To view information on TACACS transaction
305. nary 1 represents slot and port O In S4810 the first interface is 0 0 but in the Aggregator the first interface is 0 1 Hence in the Aggregator 0 Os Ifindex is unused and Ifindex creation logic is not changed Because Zero is reserved for logical interfaces it starts from 1 For the first interface port number is set to 1 Adding it causes an increment by 1 for the next interfaces so it only starts from 2 Therefore the port number is set to 42 for 0 41 Example of Deriving the Interface Index Number Dell show interface tengig 1 21 TenGigabitEthernet 1 21 is up line protocol is up Hardware is Dell ForcelOEth address is 00 01 e8 0d b7 4e Current address is 00 01 e8 0d b7 4e Interface index is 72925242 output omitted Monitor Port Channels To check the status of a Layer 2 port channel use fLOLinkAggMib 1 3 6 1 4 1 6027 3 2 In the following example Po 1 is a switchport and Po 2 is in Layer 3 mode NOTE The interface index does not change if the interface reloads or fails over If the unit is renumbered for any reason the interface index changes during a reload Example of SNMP Trap for Monitored Port Channels senthilnathan lithium snmpwalk v 2c c public 10 11 1 1 1 3 6 1 4 1 6027 3 2 1 1 SNMPv2 SMI enterprises 6027 3 2 1 1 1 1 INTEGER 1 SNMPv2 SMI enterprises 6027 3 2 1 1 2 INTEGER 2 SNMPv2 SMI enterprises 6
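Assuming the interface index derived in the example above (72925242 for TenGigabitEthernet 1/21), a management station could poll that interface's operational status directly with a standard net-snmp tool; the community string and agent address are illustrative:
snmpget -v 2c -c public 10.11.1.1 1.3.6.1.2.1.2.2.1.8.72925242
The OID 1.3.6.1.2.1.2.2.1.8 is ifOperStatus from the standard interfaces MIB, indexed by the derived ifIndex value.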
306. ncements provided in Data Center Bridging DCB to support lossless no drop SAN and LAN traffic In addition DCB provides flexible bandwidth sharing for different traffic types such as LAN and SAN according to 802 1p priority classes of service DCBx should be enabled on the system before the FIP snooping feature is enabled All of the commands that are supported for FCoE on the MXL and I O Aggregator apply to the FC Flex IO modules Similarly all of the configuration procedures and the settings that are applicable for FCoE on the MXL and I O Aggregator are valid for the FC Flex IO modules Legacy FIP snooping related commands are supported on 10G optional modules when the Fiber channel capability is disabled by using the no feature fc command If the FC capability is enabled only the fip snooping max sessions per enodemac command is supported and the remaining FIP snooping commands are not supported NPIV Proxy Gateway for FC Flex IO Modules The N port identifier virtualization NPIV Proxy Gateway NPG feature provides FCoE FC bridging capability on the M I O Aggregator and MXL 10 40GbE Switch with the FC Flex IO module switch allowing server CNAs to communicate with SAN fabrics over the M I O Aggregator and MXL 10 40GbE Switch with the FC Flex IO module To configure the M I O Aggregator and MXL 10 40GbE Switch with the FC Flex IO module to operate as an NPIV proxy gateway use the following commands NPIV Proxy Gateway Configuratio
307. nected Switch using the switch and port WWN methods After a successful login the NPIV gateway sends a notification to inform the CNA that the FCF available to log in The source address of the FIP advertisement and FIP discovery advertisement response contain the MAC address of the FC Flex IO module port Depending on the number of login sessions on a particular FCF the NPIV gateway can load balance the login sessions from ENodes The NPIV application performs the FLOGI to FDISC conversion and sends the new FC frame on the associated FC ports After the external switch responds to the FLOGI request the NPIV gateway establishes the NPIV session and sends the frame to the FIP application The FIP application establishes virtual links to convert FCoE FLOGI accept messages into FIP FLOGI accept messages The corresponding ACL for the accept message is then applied If a FIP timeout from ENode or VN PORT occurs the NPIV application performs the FC fabric logout to the external FC switch The NPIV application manages the sessions between the FCoE and the FC domain Installing and Configuring the Switch After you unpack the MXL 10 40GbE Switch refer to the flow chart in the following figure for an overview of the steps you must follow to install the blade and perform the initial configuration 264 FC Flex IO Modules Installing and Configuring Flowchart for FC Flex IO Modules FC Flex IO Modules 265 To see if a switch is running the latest De
308. nel LAG Mode Status Uptime Ports L 128 1213 up 17 36 24 Te 0 33 Up Te 0 35 Up Te 0 36 Up Te 0 39 Up Te 0 51 Up Te 0 53 Up Te 0 54 Up Te 0 56 Up Dell show uplink state group 1 detail Up Interface up Dwn Interface down Dis Interface disabled Uplink State Group FERE Status Enabled Up Defer Timer 10 sec Upstream Interfaces Po 128 Up Downstream Interfaces Te 0 1 Up Te 0 2 Up Te 0 3 Dwn Te 0 4 Dwn Te Debugging and Diagnostics 287 0 5 Up Te 0 6 Dwn Te 0 7 Dwn Te 0 8 Up Te 0 9 Up Te 0 10 Up Te 0 11 Dwn Te 0 12 Dwn Te 0 13 Up Te 0 14 Dwn Te 0 15 Up Te 0 16 Up Te 0 17 Dwn Te 0 18 Dwn Te 0 19 Dwn Te 0 20 Dwn Te 0 21 Dwn Te 0 22 Dwn Te 0 23 Dwn Te 0 24 Dwn Te 0 25 Dwn e 0 26 Dwn Te 0 27 Dwn Te 0 28 Dwn Te 0 29 Dwn Te 0 30 Dwn Te 0 31 Dwn Te 0 32 Dwn 2 Verify that the downstream port channel in the top of rack switch that connect to the Aggregator is configured correctly Broadcast unknown multicast and DLF packets switched at a very low rate Symptom Broadcast unknown multicast and DLF packets are switched at a very low rate By default broadcast storm control is enabled on an Aggregator and rate limits the transmission of broadcast unknown multicast and DLF packets to 1Gbps This default behavior is designed to avoid unnecessarily flooding these packets on all 4094 VLANs on all Aggregator i
309. nel Capability on the Switch You must first enable an IOA and MXL switch with the FC Flex IO module that you want to configure as an NPG for the FC protocol Task Command Command Mode Enable an IOA and MXL switch feature fc CONFIGURATION with the FC Flex IO module for the FC protocol PMUX Mode of the lO Aggregator 233 Creating a DCB Map Configure the priority based flow control PFC and enhanced traffic selection ETS settings in a DCB map before you can apply them on downstream server facing ports Task Command Command Mode Create a DCB map to specify the dcb map name CONFIGURATION PFC and ETS settings for groups of dotip priorities Configure the PFC setting on or priority group group num DCB MAP off and the ETS bandwidth bandwidth percentage percentage allocated to traffic in strict priority pfc on each priority group or whether off priority group traffic should be handled with strict priority scheduling Specify the priority group ID priority pgid DCB MAP number to handle VLAN traffic dotlp0 group num for each dotip class of service dotlpl group num O through 7 Leave a space dotlp2 group num between each priority group dotlp3 group num number dot1p4 group num dotipS group num dotlp6 group num dotlp7 group num Important Points to Remember e f you remove a dotlp priority to priority group mapping from a DCB map the no priority pgid command the PFC and ETS parameters revert to their defa
310. net leave latency after the last host leaves the group. In version 1, hosts quietly leave groups, and the router waits for a query response timer (several times the value of the query interval) to expire before it stops forwarding traffic.
To receive multicast traffic from a particular source, a host must join the multicast group to which the source is sending traffic. A host that is a member of a group is called a receiver. A host may join many groups, and may join or leave any group at any time. A host joins and leaves a multicast group by sending an IGMP message to its IGMP querier. The querier is the router that surveys a subnet for multicast receivers and processes survey responses to populate the multicast routing table. IGMP messages are encapsulated in IP packets, as illustrated below.
Internet Group Management Protocol (IGMP) 85
Figure 10. IGMP Version 2 Packet Format
Joining a Multicast Group
There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier.
Responding to an IGMP Query
One router on a subnet is elected as the querier. The querier periodically multicasts (to the all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet. A host that wants to join a multicast group responds with an IGMP membership r
311. nfigured PFC priorities Port state for current operational PFC configuration Init Local PFC configuration parameters were exchanged with peer e Recommend Remote PFC configuration parameters were received from peer e Internally propagated PFC configuration parameters were received from configuration source Operational status for exchange of PFC configuration on local port match up or mismatch down Type of state machine used for DCBx exchanges of PFC parameters e Feature for legacy DCBx versions e Symmetric for an IEEE version Status of PFC TLV advertisements enabled or disabled Link delay in quanta used to pause specified priority traffic Status of FCoE advertisements in application priority TLVs from local DCBx port enabled or disabled Status of ISCSI advertisements in application priority TLVs from local DCBx port enabled or disabled Priority bitmap used by local DCBx port in FCoE advertisements in application priority TLVs Priority bitmap used by local DCBx port in ISCSI advertisements in application priority TLVs Data Center Bridging DCB Fields Description Application Priority TLV Remote FCOE Priority Priority bitmap received from the remote DCBX Map port in FCoE advertisements in application priority TLVs Application Priority TLV Remote ISCSI Priority Map Priority bitmap received from the remote DCBX port in iSCSI advertisements in application priority
312. nfigured for non default untagged VLAN membership FIP Snooping Restrictions The following restrictions apply to FIP snooping on an Aggregator e The maximum number of FCoE VLANs supported on the Aggregator is eight e The maximum number of FIP snooping sessions supported per ENode server is 32 To increase the maximum number of sessions to 64 use the fip snooping max sessions per enodemac command This is configurable only in PMUX mode e Ina full FCoE N port ID virtualization NPIV configuration 16 sessions one FLOGI 15 NPIV sessions are supported per ENode In an FCoE NPV confguration only one session is supported per ENode e The maximum number of FCFs supported per FIP snooping enabled VLAN is 12 e Links to other FIP snooping bridges on a FIP snooping enabled port bridge to bridge links are not supported on the Aggregator 76 FIP Snooping Displaying FIP Snooping Information Use the show commands from the table below to display information on FIP snooping Command show fip snooping sessions interface vlan vlan id show fip snooping config show fip snooping enode enode mac address show fip snooping fcf fcf mac address clear fip snooping database interface vlan vlan id fcoe mac address enode mac address fcf mac address show fip snooping statistics interface vlan vlan id interface port type port slot interface port channel port channel number clear fip snooping statistics
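For example, in PMUX mode the per-ENode session limit can be raised and the resulting sessions displayed; the hyphenated command forms below follow the running-configuration style and should be verified against your release:
Dell(conf)#fip-snooping max-sessions-per-enodemac 64
Dell#show fip-snooping sessions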
313. ng command This method uses SSH version 1 or version 2 If the SSH port is a non default value use the ip ssh server port number command to change the default port number You may only change the port number when SSH is disabled Then use the p option with the ssh command e SSH from the chassis to the SSH client ssh ip address Example of Client Based SSH Authentication Dell ssh 10 16 127 201 1 User name option 172 Security for M I O Aggregator p SSH server port option default 22 V SSH protocol version Troubleshooting SSH To troubleshoot SSH use the following information You may not bind id rsa pub to RSA authentication while logged in via the console In this case this message displays Error No username set for this term Enable host based authentication on the server Dell Networking system and the client Unix machine The following message appears if you attempt to log in via SSH and host based is disabled on the client In this case verify that host based authentication is set to Yes in the file ssh config root permission is required to edit this file permission denied host based If the IP address in the RSA key does not match the IP address from which you attempt to log in the following message appears In this case verify that the name and IP address of the client is contained in the file etc hosts RSA Authentication Error Telnet To use Telnet with SSH first enable SSH as previ
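A possible sequence for moving the SSH server to a non-default port and then connecting to it; the port number and address are illustrative:
Dell(conf)#no ip ssh server enable
Dell(conf)#ip ssh server port 2260
Dell(conf)#ip ssh server enable
Dell#ssh 10.16.127.201 -p 2260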
314. nique device Switches forward multicast frames out of all ports in a VLAN by default even if there are only a small number of interested hosts resulting in a waste of bandwidth IGMP snooping enables switches to use information in IGMP packets to generate a forwarding table that associate ports with multicast groups so that the received multicast frames are forwarded only to interested receivers How IGMP Snooping is Implemented on an Aggregator e IGMP snooping is enabled by default on the switch e Dell Networking OS supports version 1 version 2 and version 3 hosts Dell Networking OS IGMP snooping is based on the IP multicast address not on the Layer 2 multicast MAC address IGMP snooping entries are stored in the Layer 5 flow table instead of in the Layer 2 forwarding information base FIB e Dell Networking OS IGMP snooping is based on draft ietf magma snoop 10 e IGMP snooping is supported on all M I O Aggregator stack members Amaximum of 2k groups and 4k virtual local area networks VLAN are supported e IGMP snooping is not supported on the default VLAN interface e Flooding of unregistered multicast traffic is enabled by default Queries are not accepted from the server side ports and are only accepted from the uplink LAG e Reports and Leaves are flooded by default to the uplink LAG irrespective of whether it is an mrouter port or not Internet Group Management Protocol IGMP 89 Disabling Multicast Flood
315. nit to activate the new Dell Networking OS version CONFIGURATION mode reload Example of Upgrading all Stacked Switches The following example shows how to upgrade all switches in a stack including the master switch Dell upgrade system ftp A Address or name of remote host 10 11 200 241 Source file name FTOS XL 8 3 17 0 bin User name to login remote host ftp Password to login remote host rra eee eee eee eee p p b p p b EE E E EE eee eet Stacking 205 31972272 bytes successfully copied System image upgrade completed successfully Upgrade system image for all stack units yes no yes ELE IE EEEEE IF ILE DL T I E E p p p p rp p p p E E EE E E E rra Image upgraded to all Dell configure Dell conf boot system stack unit all primary system A Dell conf end Dell write memory Jan 3 14 01 48 SSTKUNITO M CP SFILEMGR 5 FILESAVED Copied running config to startup config in flash by default Synchronizing data to peer Stack unit LILI Dell reload Proceed with reload confirm yes no yes Upgrading a Single Stack Unit Upgrading a single stacked switch is necessary when the unit was disabled due to an incorrect Dell Networking OS version This procedure upgrades the image in the boot partition of the member unit from the corresponding partition in the master unit To upgrade an individual stack unit with a new Dell Networking OS version follow the below steps 1 D
316. nits An Aggregator supports LACP for auto configuring dynamic LAGs Use CLI commands to display LACP information clear port channel counters and debug LACP operation for auto configured LAG on an Aggregator The Dell Networking OS implementation of LACP is based on the standards specified in the IEEE 802 3 Carrier sense multiple access with collision detection CSMA CD access method and physical layer specifications LACP functions by constantly exchanging custom MAC protocol data units PDUs across local area network LAN Ethernet links The protocol packets are only exchanged between ports that you configure as LACP capable K NOTE You can configure a maximum of up to 128 port channels with eight members per channel 122 Link Aggregation Uplink LAG When the Aggregator power is on all uplink ports are configured in a single LAG LAG 128 Server Facing LAGs Server facing ports are configured as individual ports by default lf you configure a server NIC in standalone stacking or VLT mode for LACP based NIC teaming server facing ports are automatically configured as part of dynamic LAGs The LAG range 1 to 127 is reserved for server facing LAGs After the Aggregator receives LACPDU from server facing ports the information embedded in the LACPDU remote system ID and port key is used to form a server facing LAG The LAG port channel number is assigned based on the first available number in the range from 1 to 127 For ea
317. no shutdown link bundle monitor enable Dell conf if po 128 Reassigning an Interface to a New Port Channel An interface can be a member of only one port channel If the interface is a member of a port channel remove it from the first port channel and then add it to the second port channel Each time you add or remove a channel member from a port channel Dell Networking OS recalculates the hash algorithm for the port channel Link Aggregation 127 To reassign an interface to a new port channel use the following commands 1 Remove the interface from the first port channel INTERFACE PORT CHANNEL mode no channel member interface 2 Change to the second port channel INTERFACE mode INTERFACE PORT CHANNEL mode interface port channel id number 3 Add the interface to the second port channel INTERFACE PORT CHANNEL mode channel member interfac Example of Moving an Interface to a New Port Channel The following example shows moving the TenGigabitEthernet 0 8 interface from port channel 4 to port channel 3 Dell conf if po 4 show config l interface Port channel 4 no ip address channel member TenGigabitEthernet 0 8 no shutdown Dell conf if po 4 no chann tengi 0 8 Dell conf if po 4 int port 3 Dell conf if po 3 channel tengi 0 8 Dell conf if po 3 sho conf interface Port channel 3 no ip address channel member TenGigabitEthernet 0 8 shutdown Dell conf if po 3 Configuring the Minimum Oper
318. nooping bridge that provides security for FCoE traffic using ACLs see FCoE Transit chapter e FCoE gateway that provides FCoE to FC bridging N port virtualization using FCoE maps exposes upstream F ports as FCF ports to downstream server facing ENode ports on the NPG see FCoE Maps NPIV Proxy Gateway Terms and Definitions The following table describes the terms used in an NPG configuration on the MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module Table 17 MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPIV Proxy Gateway Terms and Definitions Term Description FC port Fibre Channel port on an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module FC module that operates in autosensing 2 4 or 8 Gigabit mode On an NPIV proxy gateway an FC port can be used as a downlink for a server connection and an uplink for a fabric connection F port Port mode of an FC port connected to an end node N port on an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPIV proxy gateway N port Port mode of an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module FC port that connects to an F port on an FC switch in a SAN fabric On an MXL 10 40GbE Switch and M I O Aggregator with the FC Flex IO module NPIV proxy gateway an N port also functions as a proxy for multiple server CNA port connections ENode port Port mode of a server facing MXL 10 40GbE Switch and M
319. nterfaces default configuration Resolution Disable broadcast storm control globally on the Aggregator Steps to Take 1 Display the current status of broadcast storm control on the Aggregator show io aggregator broadcast storm control status command Dell show io aggregator broadcast storm control status Storm Control Enabled Broadcast Traffic limited to 1000 Mbps 2 Disable broadcast storm control no io aggregator broadcast storm control command and re display its status Dell config terminal Dell conf Hno io aggregator broadcast storm control Dell conf end Dell show io aggregator broadcast storm control status Storm Control Disabled Flooded packets on all VLANs are received on a server Symptom All packets flooded on all VLANs on an Aggregator are received on a server even if the server is configured as a member of only a subset of VLANs This behavior happens because all Aggregator ports are by default members of all 4094 VLANs Resolution Configure a port that is connected to the server with restricted VLAN membership Steps to Take 1 Display the current port mode for Aggregator L2 interfaces show interfaces switchport interface commana Dell show interfaces switchport tengigabitethernet 0 1 Codes U Untagged T Tagged 288 Debugging and Diagnostics x Dotlx untagged X Dotlx tagged G GVRP tagged M Trunk H VSN tagged i VLT tagged TenG Tagg Name 8
320. nterfaces can belong to the default VLAN By default VLAN 1 is the default VLAN To change the default VLAN ID use the default vlan id lt 1 4094 gt command in CONFIGURATION mode You cannot delete the default VLAN Port Based VLANs Port based VLANs are a broadcast domain defined by different ports or interfaces In Dell Networking OS a port based VLAN can contain interfaces from different stack units within the chassis Dell Networking OS supports 4094 port based VLANs Port based VLANs offer increased security for traffic conserve bandwidth and allow switch segmentation Interfaces in different VLANs do not communicate with each other adding some security to the traffic on those interfaces Different VLANs can communicate between each other by means of IP routing Because traffic is only broadcast or flooded to the interfaces within a VLAN the VLAN conserves bandwidth Finally you can have multiple VLANs configured on one switch thus segmenting the device Interfaces within a port based VLAN must be in Layer 2 mode and can be tagged or untagged in the VLAN ID 98 Interfaces VLANs and Port Tagging To add an interface to a VLAN it must be in Layer 2 mode After you place an interface in Layer 2 mode it is automatically placed in the default VLAN Dell Networking OS supports IEEE 802 1Q tagging at the interface level to filter traffic When you enable tagging a tag header is added to the frame after the destination and source MAC a
321. number end number 3 Assign a method list name or the default list to the terminal line LINE mode login authentication method list name default To view the configuration use the show config command in LINE mode or the show running config in EXEC Privilege mode NOTE Dell Networking recommends using the none method only as a backup This method does not authenticate users The none and enable methods do not work with secure shell SSH You can create multiple method lists and assign them to different terminal lines Enabling AAA Authentication To enable AAA authentication use the following command Enable AAA authentication CONFIGURATION mode aaa authentication enable method list name default methodl method4 default uses the listed authentication methods that follow this argument as the default list of methods when a user logs in method list name character string used to name the list of enable authentication methods activated when a user logs in methodi method4 any ofthe following RADIUS TACACS enable line none If you do not set the default list only the local enable is checked This setting has the same effect as issuing an aaa authentication enable default enable command 158 Security for M I O Aggregator Enabling AAA Authentication RADIUS To enable authentication from the RADIUS server and use TACACS as a backup use the following commands 1 Enable RA
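For instance, a default enable-authentication list that tries a RADIUS server first and falls back to the enable password could be defined as follows; the method order is illustrative:
Dell(conf)#aaa authentication enable default radius enable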
322. number 4 Clears statistics on the specified stack unit The valid stack unit numbers are from O to 5 clear hardware stack unit unit number counters 5 Displays the current operational mode of the Aggregator standalone or stacking and the mode in which the Aggregator will operate at the next reload show system stack unit unit number iom mode Example of the show system stack ports Command Dell show system stack ports Topology Ring Interface Connection Link Speed Admin Link Trunk Gb s Status Status Group 0 33 1 33 40 up up 0 37 1 37 40 up up 1 33 0 33 40 up up 1 37 0 37 40 up up Stacking 201 Example of the show redundancy Command Dell show redundancy Stack unit Status Mgmt ID 0 Stack unit ID 0 Stack unit Redundancy Role Primary Stack unit State Active Indicates Master Unit Stack unit SW Version E8 3 16 46 Link to Peer Up PEER Stack unit Status Stack unit State Standby Indicates Standby Unit Peer stack unit ID 1 Stack unit SW Version E8 3 16 46 Primary Stack unit mgmt id 0 Auto Data Sync Full Failover Type Hot Failover Failover type with redundancy force failover Auto reboot Stack unit Enabled Auto failover limit 3 times in 60 minutes Stack unit Failover Record Failover Count 0 Last failover timestamp None Last failover Reason None Last failover type None Last Data Block Sync Record Stack Unit Config succeed
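For example, to clear the hardware counters on stack unit 0 and then confirm its operational mode (the unit number is illustrative):
Dell#clear hardware stack-unit 0 counters
Dell#show system stack-unit 0 iom-mode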
323. o contact targets When you enable iSCSI optimization by default the switch identifies IP packets to or from these ports as iSCSI traffic You can configure the switch to monitor traffic for additional port numbers or a combination of port number and target IP address and you can remove the well known port numbers from monitoring SCSI Optimization 117 Information Monitored in iSCSI Traffic Flows SCSI optimization examines the following data in packets and uses the data to track the session and create the classifier entries that enable QoS treatment e Initiator s IP Address e Target s IP Address ISID Initiator defined session identifier e Initiator s ION SCSI qualified name e Target s ION e Initiator s TCP Port e Target s TCP Port If no iSCSI traffic is detected for a session during a user configurable aging period the session data clears Detection and Auto configuration for Dell EqualLogic Arrays The iSCSI optimization feature includes auto provisioning support with the ability to detect directly connected Dell EqualLogic storage arrays and automatically reconfigure the switch to enhance storage traffic flows An Aggregator uses the link layer discovery protocol LLDP to discover Dell EqualLogic devices on the network LLDP is enabled by default For more information about LLDP refer to Link Layer Discovery Protocol LLDP The following message displays the first time a Dell EqualLogic array is
324. o the fabric You can associate an FC MAP with only one FCoE VLAN and conversely associate an FCoE VLAN with only one FC MAP e FCF priority the priority used by a server CNA to select an upstream FCoE forwarder FCF e FIP keepalive FKA advertisement timeout The values for the FCoE VLAN fabric ID and FC MAP must be unique Apply an FCoE map on downstream server facing Ethernet ports and upstream fabric facing Fibre Channel ports Step Task Command Command Mode 1 Create an FCoE map that contains parameters fcoe map map name CONFIGURATION used in the communication between servers and a SAN fabric 2 Configure the association between the fabric id fabric num FCoE MAP dedicated VLAN and the fabric where the vlan vlan id desired storage arrays are installed The fabric 276 FC Flex IO Modules and VLAN ID numbers must be the same Fabric and VLAN ID range 2 4094 For example fabric id 10 vlan 10 Add a text description of the settings in the description text FCoE MAP FCoE map Maximum 32 characters Specify the FC MAP value used to generatea fc map fc map value FCoE MAP fabric provided MAC address which is required to send FCoE traffic from a server on the FCoE VLAN to the FC fabric specified in Step 2 Enter a unique MAC address prefix as the FC MAP value for each fabric Range OEFCOO OEFCFF Default None Configure the priority used by a server CNA to fcf priority FCoE MAP select the FCF for a fabric login FL
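Assembled into one FCoE map, the parameters described above might look like the following sketch; the map name, prompt string, fabric/VLAN ID, and FC-MAP value are illustrative:
Dell(conf)#fcoe-map SAN_FABRIC_A
Dell(conf-fcoe-SAN_FABRIC_A)#fabric-id 1003 vlan 1003
Dell(conf-fcoe-SAN_FABRIC_A)#description "SAN Fabric A"
Dell(conf-fcoe-SAN_FABRIC_A)#fc-map 0efc03
Dell(conf-fcoe-SAN_FABRIC_A)#fcf-priority 128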
325. o versions is that version 2 supports two additional protocol operations informs operation and snmpgetbulk query and one additional object counter64 object Creating a Community For SNMPv1 and SNMPv2 create a community to enable the community based security in the Dell Networking OS The management station generates requests to either retrieve or alter the value of a management object and is called the SNMP manager A network element that processes SNMP requests is called an SNMP agent An SNMP community is a group of SNMP agents and managers that are allowed to interact Communities are necessary to secure communication between SNMP managers and agents SNMP agents do not respond to requests from management stations that are not part of the community The Dell Networking OS enables SNMP automatically when you create an SNMP community and displays the following message You must specify whether members of the community may retrieve values in Read Only mode Read write access is not supported 22 31 23 SRPM1 P CP SSNMP 6 SNMP_ WARM START Agent Initialized SNMP WARM START To create an SNMP community e Choose a name for the community CONFIGURATION mode snmp server community name Yo Example of Creating an SNMP Community To view your SNMP configuration use the show running config snmp command from EXEC Privilege mode Dell conf snmp server community my snmp community ro 22 31 23 RPM1 P CP SNMP 6 SNMP WARM START Agent
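Once the read-only community exists, a management station can query the agent with a standard net-snmp tool; the community string and agent address below are illustrative:
snmpget -v 2c -c my_snmp_community 10.11.1.1 1.3.6.1.2.1.1.3.0
The OID 1.3.6.1.2.1.1.3.0 is sysUpTime, a simple object that confirms the community-based access is working.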
326. od FLOGI Secs 5593 Status OGGED IN ENode 1 ENode MAC E 00 10 18 1 94 22 ENode Intf Te 0 13 FCF MAC 5c f9 dd ef 10 c9 Fabric Intf Fc 0 0 FCoE Vlan 1003 Fabric Map E fid 1003 ENode WWPN E 10 00 00 00 c9 d9 9c cb ENode WWNN 10 00 00 00 c9 d9 9c cd FCoE MAC 0e fc 03 01 02 02 FC ID E 01 02 01 LoginMethod FDISC Secs j 5593 Status LOGGED IN Table 23 show npiv devices Field Descriptions Field Description ENode number Server CNA that has successfully logged in to a fabric over an MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module Ethernet port in ENode mode Enode MAC MAC address of a server CNA port Enode Intf Port number of a server facing Ethernet port operating in ENode mode 284 FC Flex IO Modules Field FCF MAC Fabric Intf FCOE VLAN Fabric Map Enode WWPN Enode WWNN FCoE MAC FC ID LoginMethod Secs State Description Fibre Channel forwarder MAC MAC address of MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module FCF interface Fabric facing MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module Fibre Channel port slot port on which FCoE traffic is transmitted to the specified fabric D of the dedicated VLAN used to transmit FCoE traffic from a server CNA to a fabric and configured on both the server facing MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module port and server CNA port Name of t
327. of the port channel If the other interfaces configured in that port channel are configured with a different speed Dell Networking OS disables them For example if four interfaces TenGig 0 1 0 2 0 3 and 0 4 in which TenGig 0 land TenGig 0 2 are set to speed 1000 Mb s and the TenGig 0 3 and TenGigO 4 are set to 10000 Mb s with all interfaces enabled and you add them to a port channel by entering channel member tengigabitethernet 0 1 4 while in port channel interface mode and the Dell Networking OS determines if the first interface specified TenGig 0 0 is up After it is up the common speed of the port channel is 1000 Mb s Dell Networking OS disables those interfaces configured with speed 10000 Mb s or whose speed is 10000 Mb s as a result of auto negotiation In this example you can change the common speed of the port channel by changing its configuration so the first enabled interface referenced in the configuration is a 1000 Mb s speed interface You can also change the common speed of the port channel by setting the speed of the TenGig 0 1 interface to 1000 Mb s Uplink Port Channel VLAN Membership The tagged VLAN membership of the uplink LAG is automatically configured based on the VLAN configuration of all server facing ports ports 1 to 32 The untagged VLAN used for the uplink LAG is always the default VLAN 1 Server Facing Port Channel VLAN Membership The tagged VLAN membership of a server facing LAG is automatically
328. ol DHCP DHCP Packet Format and Options DHCP uses the user datagram protocol UDP as its transport protocol The server listens on port 67 and transmits to port 68 the client listens on port 68 and transmits to port 67 The configuration parameters are carried as options in the DHCP packet in Type Length Value TLV format many options are specified in RFC 2132 To limit the number of parameters that servers must provide hosts specify the parameters that they require and the server sends only those parameters Some common options are shown in the following illustration Code Length Value Figure 6 DHCP packet Format The following table lists common DHCP options Option Subnet Mask Router Domain Name Server Domain Name IP Address Lease Time DHCP Message Type Number and Description Option 1 Specifies the client s subnet mask Option 3 Specifies the router IP addresses that may serve as the client s default gateway Option 6 Specifies the domain name servers DNSs that are available to the client Option 15 Specifies the domain name that clients should use when resolving hostnames via DNS Option 51 Specifies the amount of time that the client is allowed to use an assigned IP address Option 53 1 DHCPDISCOVER 2 DHCPOFFER e 3 DHCPREQUEST e 4 DHCPDECLINE Dynamic Host Configuration Protocol DHCP 67 Option Number and Description e 5 DHCPACK 6 DHCPNACK e 7 DHCPRELEA
329. ol errors 0 runts 0 giants 0 throttles 42 CRC O IP Checksum 0 overrun 0 discarded 2456590833 packets output 203958235255 bytes 0 underruns Output 1640 Multicasts 56612 Broadcasts 2456532581 Unicasts 2456590654 IP Packets 0 Vlans 0 MPLS O throttles 0 discarded Rate info interval 5 minutes Input 00 01Mbits sec 2 packets sec Output 81 60Mbits sec 133658 packets sec Time since last interface status change 04 31 57 Dell When more than one interface is added to a Layer 2 port channel Dell Networking OS selects one of the active interfaces in the port channel to be the primary port The primary port replies to flooding and sends protocol data units PDUs An asterisk in the show interfaces port channel brief command indicates the primary port As soon as a physical interface is added to a port channel the properties of the port channel determine the properties of the physical interface The configuration and status of the port channel are also applied to the physical interfaces within the port channel For example if the port channel is in Layer 2 mode you cannot add an IP address or a static MAC address to an interface that is part of that port channel Example of Error Due to an Attempt to Configure an Interface that is Part of a Port Channel Dell conf tint port channel 128 Dell conf if po 128 show config l interface Port channel 128 mtu 12000 portmode hybrid switchport fip snooping port mode fcf
330. ollowing list includes the configuration task for TACACS functions Choosing TACACS as the Authentication Method e Monitoring TACACS e TACACS Remote Authentication e Specifying a TACACS Server Host For a complete listing of all commands related to TACACS refer to the Security chapter in the Dell Networking OS Command Reference Guide Choosing TACACS as the Authentication Method One of the login authentication methods available is TACACS and the user s name and password are sent for authentication to the TACACS hosts specified To use TACACS to authenticate users specify at least one TACACS server for the system to communicate with and configure TACACS as one of your authentication methods To select TACACS as the login authentication method use the following commands 1 Configure a TACACS server host CONFIGURATION mode tacacs server host ip address host Enter the IP address or host name of the TACACS server Security for M I O Aggregator 163 Use this command multiple times to configure multiple TACACS server hosts 2 Enter a text string up to 16 characters long as the name of the method list you wish to use with the TACAS authentication method CONFIGURATION mode aaa authentication login method list name default tacacs method3 The TACACS method must not be the last method specified 3 Enter LINE mode CONFIGURATION mode line aux 0 console 0 vty number end nu
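A compact sketch tying these tasks together; the server address, key, method-list name, and line range are illustrative:
Dell(conf)#tacacs-server host 10.10.10.10 key mysecret
Dell(conf)#aaa authentication login TACUSERS tacacs+ local
Dell(conf)#line vty 0 9
Dell(config-line-vty)#login authentication TACUSERS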
331. or SAN packets and the Ethernet or LAN packets must be split within the chassis or by using a TOR switch to perform this splitting If you want to segregate the LAN and SAN traffic within the chassis you can employ switches such as the Dell M8428 k Converged 10GbE Switch or FC only switches such as the Dell M5424 switch module You can also use the 5000 Switch as a TOR switch to separate the LAN and SAN traffic at the ToR By using the FC Flex IO module you can optimally and effectively split the LAN and SAN traffic at the edge of the blade chassis itself You can deploy the FC Flex IO module can be deployed in the enterprise and data center switching networks to leverage and derive the advantages of a converged Ethernet network The FC Flex IO module is not an FCF switch but it offers FCoE capabilities from the server to the MXL or I O Aggregator switches and native FC capability in the uplink direction to the SAN switches Although the FC Flex IO module does not support all of the FCF characteristics such as full blown name services or zone parameters it presents the most flexible solution in interoperating with third party switches that enable the splitting of LAN and SAN traffic With the MXL 10 40GbE Switch and I O Aggregator being well established systems in the switch domain you can install the FC Flex IO module to enhance and increase the converged Ethernet network performance and behavior With the FC Flex IO module the MXL 10 40GbE Switch
332. ority Config 128 128 281 Fcf Priority 128 Config State ACTIVE Oper State UP Members Fc 0 0 Te 0 14 Te 0 16 Table 20 show fcoe map Field Descriptions Field Description Fabric Name Name of a SAN fabric Fabric ID The ID number of the SAN fabric to which FC traffic is forwarded VLAN ID The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG The configured VLAN ID must be the same as the fabric ID VLAN priority FCoE traffic uses VLAN priority 3 This setting is not user configurable FC MAP FCoE MAC address prefix value The unique 24 bit MAC address prefix that identifies a fabric FKA ADV period Time interval in seconds used to transmit FIP keepalive advertisements FCF Priority The priority used by a server to select an upstream FCoE forwarder Config State Indicates whether the configured FCoE and FC parameters in the FCoE map are valid Active all mandatory FCoE and FC parameters are correctly configured or Incomplete either the FC MAP value fabric ID or VLAN ID are not correctly configured Oper State Operational status of the link to the fabric up link is up and transmitting FC traffic down link is down and not transmitting FC traffic link wait link is up and waiting for FLOGI to complete on peer FC port or removed port has been shut down Members MXL 10 40GbE Switch or M I O Aggregator with the FC Flex IO module Ethernet and FC ports
333. ormation base MIB MIBs are hierarchically structured and use object identifiers to address managed objects but managed objects also have a textual name called an object descriptor NOTE An I O Aggregator supports standard and private SNMP MIBs including Get operations in supported MIBs Implementation Information The Dell Networking OS supports SNMP version 1 as defined by RFC 1155 1157 and 1212 SNMP version 2c as defined by RFC 1901 Configuring the Simple Network Management Protocol K NOTE The configurations in this chapter use a UNIX environment with net snmp version 5 4 This is only one of many RFC compliant SNMP utilities you can use to manage the Aggregator using SNMP Also these configurations use SNMP version 2c Configuring SNMP version 1 or version 2 requires only a single step 1 Create a community K NOTE IOA supports only Read only mode Important Points to Remember e Typically 5 second timeout and 3 second retry values on an SNMP server are sufficient for both local area network LAN and wide area network WAN applications If you experience a timeout with these values increase the timeout value to greater than 3 seconds and increase the retry value to greater than 2 seconds on your SNMP server 176 Simple Network Management Protocol SNMP Setting up SNMP Dell Networking OS supports SNMP version 1 and version 2 which are community based security models The primary difference between the tw
334. ort Viewing DHCP Statistics and Lease Information To display DHCP client information enter the following show commands e Display statistics about DHCP client interfaces EXEC Privilege show ip dhcp client statistics interface type slot port e Clear DHCP client statistics on a specified or on all interfaces EXEC Privilege clear ip dhcp client statistics all interface type slot port e Display lease information about the dynamic IP address currently assigned to a DHCP client interface EXEC Privilege show ip dhcp lease interface type slot port View the statistics about DHCP client interfaces with the show ip dhcp client statistics command and the lease information about the dynamic IP address currently assigned to a DHCP client interface with the show ip dhcp lease command Example of the show ip dhcp client statistics Command Dell show ip dhcp client statistics interface tengigabitethernet 0 0 Interface Name Ma 0 0 Message Received DHCPOFFER 0 DHCPACK 0 DHCPNAK 0 Message Sent DHCPDISCOVER 13 Dynamic Host Configuration Protocol DHCP 69 DHCPREQUEST DHCPDECLINE EL EB DHCPRELEASE DHCPREBIND DHCPRENEW DHCPINFORM oooooo Example of the show ip dhcp lease Command Dell show ip dhcp Interfac Lease IP Def Router ServerId State Lease Obtnd At Lease Expires At Ma 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 INIT NA NA Vl1 10 1 1 254 24
335. orts 1 to 32 are internal server-facing interfaces. Ports 33 to 56 are external ports, numbered from the bottom to the top of the Aggregator.
Interface Types
The following interface types are supported on an Aggregator.
Physical: supported modes L2; default mode 10GbE uplink; requires creation No; default state No Shutdown (enabled).
Management: supported modes L3; default mode L3; requires creation No; default state No Shutdown (enabled).
Port Channel: supported modes L2; default mode L2; requires creation No; default state L2 No Shutdown (enabled).
Default VLAN (VLAN 1): supported modes L2 and L3; default mode L2 and L3; requires creation No; default state L2 No Shutdown (enabled), L3 No Shutdown (enabled).
Non-default VLANs (VLANs 2-4094): supported modes L2 and L3; default mode L2; requires creation Yes; default state L2 No Shutdown (enabled), L3 No Shutdown (enabled).
Viewing Interface Information
To view interface status and auto-configured parameters, use show commands. The show interfaces command in EXEC mode lists all configurable interfaces on the chassis and has options to display the interface status, IP and MAC addresses, and multiple counters for the amount and type of traffic passing through the interface. If you configure a port-channel interface, the show interfaces command lists the interfaces configured in the port channel.
NOTE: To end output from the system, such as the output from the show interfaces command, enter CTRL-C and the Dell Networking Operating System (OS) returns to the command prompt.
NOTE: The CLI output may be incorrectly displayed as 0 (zero) for the Rx/Tx power values. Perform a simple network management protocol (SNMP) que
336. ot port information 2 Double check that the interface was added to the port channel INTERFACE PORT CHANNEL mode show config To view the port channel s status and channel members in a tabular format use the show interfaces port channel brief command in EXEC Privilege mode as shown in the following example Example of the show interfaces port channel brief Command Dell show int port brief LAG Mode Status Uptime Ports 1 12 up 00 06 03 Te 0 7 Up Te 0 8 Up 2 12 up 00 06 03 Te 0 9 Up Te 0 10 Up Te 0 11 Up Dell The following example shows the port channel s mode L2 for Layer 2 and L3 for Layer 3 and L2L3 for a Layer 2 port channel assigned to a routed VLAN the status and the number of interfaces belonging to the port channel Example of the show interface port channel Command Dell gt show interface port channel 20 Port channel 20 is up line protocol is up 126 Link Aggregation Hardware address is 00 01 e8 01 46 fa Internet address is 1 1 120 1 24 MTU 1554 bytes IP MTU 1500 bytes LineSpeed 2000 Mbit Members in this channel Gi 9 10 Gi 9 17 ARP type ARPA ARP timeout 04 00 00 Last clearing of show interface counters 00 00 00 Queueing strategy fifo 1212627 packets input 1539872850 bytes Input 1212448 IP Packets 0 Vlans 0 MPLS 4857 64 byte pkts 17570 over 64 byte pkts 35209 over 127 byte pkts 69164 over 255 byte pkts 143346 over 511 byte pkts 942523 over 1023 byte pkts Received 0 input symb
337. otocol using SMlv2 SNMPv2 Management Information Base for the Transmission Control Protocol using SMIv2 SNMPv2 Management Information Base for the User Datagram Protocol using SMIv2 Definitions of Managed Objects for Data Link Switching using SMIv2 IP Forwarding Table MIB Introduction and Applicability Statements for Internet Standard Management Framework 511 RFC 2571 2572 2574 2575 2576 2578 2579 2580 2618 3635 2674 2787 2819 2863 2865 3273 3416 312 Full Name An Architecture for Describing Simple Network Management Protocol SNMP Management Frameworks Message Processing and Dispatching for the Simple Network Management Protocol SNMP User based Security Model USM for version 3 of the Simple Network Management Protocol SNMPv3 View based Access Control Model VACM for the Simple Network Management Protocol SNMP Coexistence Between Version 1 Version 2 and Version 3 of the Internet standard Network Management Framework Structure of Management Information Version 2 SMIv2 Textual Conventions for SMlv2 Conformance Statements for SMlv2 RADIUS Authentication Client MIB except the following four counters radiusAuthClientInvalidServerAddresses radiusAuthClientMalformedAccessResponses radiusAuthClientUnknownTypes radiusAuthClientPacketsDropped Definitions of Managed Objects for the Ethernet like Interface Types Definitions of Managed Objects for Bridges
338. ously described. By default, the Telnet daemon is enabled. If you want to disable the Telnet daemon, use the following command, or disable Telnet in the startup config. To enable or disable the Telnet daemon, use the [no] ip telnet server enable command.
The Telnet server or client is VRF-aware. You can enable a Telnet server or client to listen to a specific VRF by using the vrf vrf-instance-name parameter in the telnet command. This capability enables a Telnet server or client to look up the correct routing table and establish a connection.
Example of Using Telnet for Remote Login
Dell(conf)# ip telnet server enable
Dell(conf)# no ip telnet server enable
VTY Line and Access Class Configuration
Various methods are available to restrict VTY access in Dell Networking OS. These depend on which authentication scheme you use: line, local, or remote.
Table 13. VTY Access
Line: VTY access-class support YES; Username access-class support NO; Remote authorization support NO.
Local: VTY access-class support NO; Username access-class support YES; Remote authorization support NO.
TACACS+: VTY access-class support YES; Username access-class support NO; Remote authorization support YES (with Dell Networking OS version 5.2.1.0 and later).
RADIUS: VTY access-class support YES; Username access-class support NO; Remote authorization support YES (with Dell Networking OS version 6.1.1.0 and later).
Dell Networking OS provides several ways to configure access classes for VTY lines, including:
• VTY Line Local Authentication and Authorization
• VTY Line Remote Authentication and Authorization
VTY Line Local Authentication
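The following sketch shows one way to combine these methods by restricting VTY access with an access class on the VTY lines. The ACL name myvtyacl is hypothetical and must already exist as a standard IP ACL, and the configuration-mode prompts are abbreviated:
Dell(conf)# line vty 0 9
Dell(config-line-vty)# access-class myvtyacl
Dell(config-line-vty)# end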
339. ownload the Dell Networking OS image from the master s boot partition to the member unit and upgrade the relevant boot partition in the single stack member unit EXEC Privilege mode upgrade system stack unit unit number partition 2 Reboot the stack unit from the master switch to load the Dell Networking OS image from the same partition CONFIGURATION mode boot system stack unit unit number primary system partition 3 Save the configuration EXEC Privilege mode write memory 4 Reset the stack unit to activate the new Dell Networking OS version EXEC Privilege mode power cycle stack unit unit number Example of Upgrading a Single Stack Unit The following example shows how to upgrade an individual stack unit Dell upgrade system stack unit 1 A PEEEEP EEE PEEP Peer Peer eee E E E E eee p p p p p p p p p p p p p p p p p p p E E E E E E B B GB GB G eet 206 Stacking Image upgraded to Stack unit 1 Dell configure Dell conf boot system stack unit 1 primary system A Dell conf end Dell Jan 3 14 27 00 SSTKUNITO M CP SYS 5 CONFIG I Configured from console Dell write memory Jan 3 14 27 10 SSTKUNITO M CP SFILEMGR 5 FILESAVED Copied running config to startup config in flash by default Synchronizing data to peer Stack unit LILI Dell power cycle stack unit 1 Proceed with power cycle Confirm yes no yes Stacking 207 18 Broadcast Storm Control On the Aggregator the broadcast storm
340. p link ensures that node failure conditions are correctly detected and are not confused with failures of the VLT interconnect VLT ensures that local traffic on a chassis does not traverse the VLTi and takes the shortest path to the destination via directly attached links Configure Virtual Link Trunking VLT requires that you enable the feature and then configure the same VLT domain backup link and VLT interconnect on both peer switches Important Points to Remember e VLT port channel interfaces must be switch ports Dell Networking strongly recommends that the VLTi VLT interconnect be a static LAG and that you disable LACP on the VLTi e If the Lacp ungroup feature is not supported on the ToR reboot the VLT peers one at a time After rebooting verify that VLTi ICL is active before attempting DHCP connectivity Configuration Notes When you configure VLT the following conditions apply e VLT domain AVLT domain supports two chassis members which appear as a single logical device to network access devices connected to VLT ports through a port channel AVLT domain consists of the two core chassis the interconnect trunk backup link and the LAG members connected to attached devices Each VLT domain has a unique MAC address that you create or VLT creates automatically PMUX Mode of the IO Aggregator 247 ARP tables are synchronized between the VLT peer nodes VLT peer switches operate as separate chassis wit
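A minimal VLT domain sketch follows, assuming port-channel 128 is the statically configured VLTi and 10.11.206.28 is the peer address used for the backup link; all values are placeholders and the same domain must be configured on both peers:
Dell(conf)# vlt domain 1
Dell(conf-vlt-domain)# peer-link port-channel 128
Dell(conf-vlt-domain)# back-up destination 10.11.206.28
Dell(conf-vlt-domain)# end
After both peers are configured, confirm the domain and ICL status with the show vlt brief command.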
341. packet in Interface a 0 0 with Lease IP 10 16 134 250 Mask 255 255 0 0 DHCP REQUEST sent in Interface Ma 0 0 1w2d23h SSTKUNITO M CP DHCLIENT 5 DHCLIENT LOG DHCLIENT DBG EVT Interface a 0 0 Transitioned to state BOUND Dell conf if ma 0 0 4 no ip address Dell conf if ma 0 0 41w2d23h STKUNITO M CP SDHCLIENT 5 DHCLIENT LOG DHCLIENT DBG EVT Interface a 0 0 DHCP DISABLE CMD Received in state SELECTING No Dynamic Host Configuration Protocol DHCP 63 1w2d23h SSTKUNITO M CP DHC Ma 0 0 Trans to state START 1w2d23h SSTKUNITO M CP SDHC Ma 0 0 DHCP CMD sent to FTOS Dellf release Del1f1w2d23h Interface Ma F EASE D Received w2d23h STK ELEASE sent a 0 0 DROJ a 0 0 Trans to state STOP 1w2d23h SSTK a 0 0 DHCP RELEASED CMD sent to EF Dell 1lw2d23h Interface Ma RENEW CMD Received 1w2d23h SSTK Ma 0 0 Trans to state SELE DISCOV Ma 0 0 1w2d23h SSTK DHCPOFFER pac Interface Ma 10 16 134 249 R sent Dell renew dhcp int Ma 0 0 SSTKUNITO M CP DHCLIENT 5 DHCLIENT itioned DISABLED in state START dhcp int Ma 0
342. Link Aggregation Control Protocol (LACP)
Configuration Tasks for Port Channel Interfaces
Creating a Port Channel
Adding a Physical Interface to a Port Channel
Reassigning an Interface to a New Port Channel
Configuring the Minimum Oper Up Links in a Port Channel
Deleting or Disabling a Port Channel
Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active  130
Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode  131
Preserving LAG and Port Channel Settings in Nonvolatile Storage  131
Enabling the Verification of Member Links Utilization in a LAG Bundle  131
Monitoring the Member Links of a LAG Bundle  132
Verifying LACP Operation and LAG Configuration
Managing the MAC Address Table
Clearing the MAC Address Entries
Displaying the MAC Address Table
Network Interface Controller (NIC) Teaming  138
MAC Address Station Move  139
MAC Move Optimization
343. pkt out of Gi 1 2 1widl9r jump 1wid19h 0180 c2 00 00 Oe 00 01 e8 Od b7 3b 8100 00 00 1widi9h 1widi9h 1widi9h 1widi9h LLDP fra 1w1d19h Started Figure 23 The debug lldp detail Command LLDPDU Packet Dissection 146 Link Layer Discovery Protocol LLDP Relevant Management Objects Dell Networkings OS supports all IEEE 802 1AB MIB objects The following tables list the objects associated with received and transmitted TLVs e the LLDP configuration on the local agent e EEE 802 1AB Organizationally Specific TLVs e received and transmitted LLDP MED TLVs Table 9 LLDP Configuration MIB Objects MIB Object LLDP Variable Category LLDP adminStatus Configuration msgTxHold msgTxlnterval rxInfoTTL txInfoTTL Basic TLV mibBasicTLVsTxEnable Selection mibMgmtAdarInstanceTxEn able LLDP statsAgeoutsTotal Statistics statsFramesDiscardedTotal statsFramesInErrorsTotal Link Layer Discovery Protocol LLDP LLDP MIB Object ldpPortConfigAdminStatus ldpMessageTxHoldMultiplie r ldpMessageTxiInterval ldpRxinfoTTL ldpTxiInfoTTL ldpPortConfigTLVsTxEnabl e ldpManAddrPortsTxEnable lldpStatsRxPortAgeoutsTotal lldpStatsRxPortFramesDisca rdedTotal ldpStatsRxPortFramesErrors Description Whether you enable the local LLDP agent for transmit receive or both Multiplier value Transmit Interval value Time to live for received TLVs Time to live for transmitted TLV
344. play the public part of the SSH host keys show ip ssh client pub keys display the client public keys used in host based authentication show ip ssh rsa authentication display the authorized keys for the RSA authentication e ssh peer rpm open an SSH connection to the peer RPM The following example shows the use of SCP and SSH to copy a software image from one switch running SSH server on UDP port 99 to the local switch Dell copy scp flash Address or name of remote host 10 10 10 1 Port number of the server 22 99 Source file name test cfg User name to login remote host admin Password to login remote host Secure Shell Authentication Secure Shell SSH is disabled by default Enable SSH using the ip ssh server enable command SSH supports three methods of authentication e Enabling SSH Authentication by Password e Using RSA Authentication of SSH e Configuring Host Based SSH Authentication Important Points to Remember e If you enable more than one method the order in which the methods are preferred is based on the ssh config file on the Unix machine When you enable all the three authentication methods password authentication is the backup method when the RSA method fails e The files known hosts and known hosts2 are generated when a user tries to SSH using version 1 or version 2 respectively The SSH server and client are enhanced to support the VRF awareness functionality Using th
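A minimal sketch of enabling the SSH server with password authentication as the fallback method is shown below; exact keyword availability may vary by release, so verify against your CLI reference:
Dell(conf)# ip ssh server enable
Dell(conf)# ip ssh password-authentication enable
Dell(conf)# end
Dell# show ip ssh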
345. queuel 3 queue2 3 queue3 3 queue4 3 queued 3 queue6 3 queue7 3 buffer dynamic 1256 buffer profile fp fsqueue hig buffer dedicated queue0 3 queuel 3 queue2 3 queue3 3 queue4 3 queue5 3 queue6 3 queue7 3 buffer dynamic 1256 302 Debugging and Diagnostics buffer fp uplink stack unit 0 port set 0 buffer policy fsqueue hig buffer fp uplink stack unit 0 port set 1 buffer policy fsqueue hig Interface range gi 0 1 48 buffer policy fsqueue fp Dellfshow run int Tengig 0 10 interface TenGigabitEthernet 0 10 no ip address Troubleshooting Packet Loss To troubleshoot packet loss use the following commands how hardware stack unit cpu data plane statistics how hardware stack unit cpu party bus statistics drops unit 0 0 port 1 56 stack port 33 56 unit 0 0 counters details tion table dump hardware stack unit 0 5 0 5 how 0 5 0 5 how hardware stack unit how hardware stack unit 0 5 detail register ipmc replica how eg hardware layer3 qos stack unit hardware layer2 layer3 acl how 0 5 port set 0 0 ack unit 0 5 port set 0 1 vn u Um uu HDHD N 0 how hardware system flow layer2 st The show hardware stack unit command is intended primarily to troubleshoot packet loss port stats lin acl stack unit 0 5 port set 0 0 counters clear clear clear clear clear hardware stack unit hardware stack unit 0 5 coun 0 5 unit ters 0 0 counters hardw
346. r The Aggregator operates as a lossless FIP snooping bridge to transparently forward FCoE frames between the ENode servers and the FCF switch FIP Snooping 73 SAN Network Fibre Channel Storage Traffic Ethernet LAN Traffic Aggregators Installed in M1000e Chassis Servers Installed in M1000e Chassis Figure 8 FIP Snooping on an Aggregator The following sections describes how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions e Performs FIP snooping allowing and parsing FIP frames globally on all VLANs or on a per VLAN basis e Set the FCoE MAC address prefix FC MAP value used by an FCF to assign a MAC address to an ECoE end device server ENode or storage device after a server successfully logs in e Set the FCF mode to provide additional port security on ports that are directly connected to an FCF Check FIP snooping enabled VLANs to ensure that they are operationally active 74 FIP Snooping e Process FIP VLAN discovery requests and responses advertisements solicitations FLOGI FDISC requests and responses FLOGO requests and responses keep alive packets and clear virtual link messages FIP Snooping in a Switch Stack FIP snooping supports switch stacking as follows e A switch stack configuration is synchronized with the standby stack unit e Dynamic population of the FCoE database ENode Session and FCF tables is sync
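A minimal FIP snooping sketch for a bridge topology like the one described above, assuming VLAN 100 carries FCoE and port 0/50 connects directly to the FCF (the VLAN number, port number, and FC-MAP value are placeholders):
Dell(conf)# feature fip-snooping
Dell(conf)# fip-snooping fc-map 0x0efc00
Dell(conf)# interface vlan 100
Dell(conf-if-vl-100)# fip-snooping enable
Dell(conf-if-vl-100)# exit
Dell(conf)# interface tengigabitethernet 0/50
Dell(conf-if-te-0/50)# fip-snooping port-mode fcf
Dell(conf-if-te-0/50)# end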
347. r c Aged Drops 0 Egress MAC counters Egress FCS Drops 0 Egress FORWARD PROCESSOR Drops IPv4 L3UC Aged amp Drops i0 TTL Threshold Drops E INVALID VLAN CNTR Drops L2MC Drops PKT Drops of ANY Conditions Hg MacUnderflow TX Err PKT Counter o0oo00o0ooo Dataplane Statistics The show hardware stack unit cpu data plane statistics command provides insight into the packet types coming to the CPU The command output in the following example has been augmented providing detailed RX TX packet statistics on a per queue basis The objective is to see whether CPU bound traffic is internal so called party bus or IPC traffic or network control traffic which the CPU must process Example of Viewing Dataplane Statistics Dell show hardware stack unit 2 cpu data plane statistics bc pci driver statistics for device rxHandle 0 noMhdr 0 304 Debugging and Diagnostics noMbuf 0 noClus 0 recvd 0 dropped 0 recvToNet 0 rxError 0 rxDatapathErr 0 rxPkt COSO 0 rxPkt COS1 0 rxPkt COS2 0 rxPkt COS3 0 rxPkt COS4 0 rxPkt COS5 0 rxPkt COS6 0 rxPkt COS7 0 rxPkt UNITO 0 rxPkt UNIT1 xO rxPkt UNIT2 z0 rxPkt UNIT3 0 transmitted 0 txRequested 0 noTxDesc 0 txError 20 txReqTooLarge 20 txInternalError 0 txDatapathErr 20 txPkt COSO 0 txPkt COS1 0 txPkt COS2 0 txPkt COS3 0 txPkt COS4 0 txPkt COS5 20 txPkt CO
348. r configuring protocols or assigning access control lists Adding a Physical Interface to a Port Channel The physical interfaces in a port channel can be on any line card in the chassis but must be the same physical type K NOTE Port channels can contain a mix of Gigabit Ethernet and 10 100 1000 Ethernet interfaces but Dell Networking OS disables the interfaces that are not the same speed of the first channel member in the port channel Link Aggregation 125 You can add any physical interface to a port channel if the interface configuration is minimal You can configure only the following commands on an interface if it is a member of a port channel e description e shutdown no shutdown e mtu e ip mtu if the interface is on a Jumbo enabled by default K NOTE A logical port channel interface cannot have flow control Flow control can only be present on the physical interfaces if they are part of a port channel To view the interface s configuration enter INTERFACE mode for that interface and use the show config command or from EXEC Privilege mode use the show running config interface interface command When an interface is added to a port channel Dell Networking OS recalculates the hash algorithm To add a physical interface to a port use the following commands 1 Add the interface to a port channel INTERFACE PORT CHANNEL mode channel member interface The interface variable is the physical interface type and sl
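A minimal sketch of adding a member and confirming it, using the channel-member and show config commands described above (the port-channel and interface numbers are placeholders):
Dell(conf)# interface port-channel 1
Dell(conf-if-po-1)# channel-member tengigabitethernet 0/7
Dell(conf-if-po-1)# no shutdown
Dell(conf-if-po-1)# show config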
349. r is auto configured to operate as a DHCP client The DHCP client functionality is enabled only on the default VLAN and the management interface A DHCP client is a network device that requests an IP address and configuration parameters from a DHCP server On an Aggregator the DHCP client functionality is implemented as follows The public out of band management OOB interface and default VLAN 1 are configured by default as a DHCP client to acquire a dynamic IP address from a DHCP server You can override the DHCP assigned address on the OOB management interface by manually configuring an IP address using the CLI or CMC interface If no user configured IP address exists for the OOB interface exists and if the OOB IP address is not in the startup configuration the Aggregator will automatically obtain it using DHCP You can also manually configure an IP address for the VLAN 1 default management interface using the CLI If no user configured IP address exists for the default VLAN management interface exists and if the default VLAN IP address is not in the startup configuration the Aggregator will automatically obtain it using DHCP e The default VLAN 1 with all ports configured as members is the only L3 interface on the Aggregator When the default management VLAN has a DHCP assigned address and you reconfigure th
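If you need to override the DHCP-assigned address on the OOB management interface as described above, a minimal sketch (with a placeholder address and abbreviated prompts) looks like this:
Dell(conf)# interface managementethernet 0/0
Dell(conf-if-ma-0/0)# ip address 10.16.134.250/16
Dell(conf-if-ma-0/0)# no shutdown
Dell(conf-if-ma-0/0)# end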
350. r the stacking topology shown previously in this chapter P Set up a connection to the CLI on an Aggregator as described in Accessing the CLI 2 Logon to the CLI and enter Global Configuration mode Login username Password Dell enable Dell configure 3 Configure the Aggregator to operate in stacking mode CONFIGURATION mode stack unit 0 iom mode stack 4 Repeat Steps 1 to 3 on the second Aggregator in the stack 5 Logon to the CLI and reboot each switch one after another in as short a time as possible EXEC PRIVILEGE mode reload K NOTE If the stacked switches all reboot at approximately the same time the switch with the highest MAC address is automatically elected as the master switch The switch with the next highest MAC address is elected as standby As each switch joins the stack it is assigned the lowest available stack unit number from 0 to 5 The default configuration of each stacked switch is stored in the running configuration of the stack The stack unit ID numbers are retained after future stack reloads To verify the stack unit number assigned to each switch in the stack use the show system brief command Stacking 195 Adding a Stack Unit You can add a new unit to an existing stack both when the unit has no stacking ports stack groups configured and when the unit already has stacking ports configured If the units to be added to the stack have been previously used they are assigned
351. r you apply an FCoE map on an FC port when you enable the port no shutdown the NPG starts sending FIP multicast advertisements on behalf of the FC port to downstream servers in order to advertise the availability of a new FCF port on the FCoE VLAN The FIP advertisement also contains a keepalive message to maintain connectivity between a SAN fabric and downstream servers Configuring an NPIV Proxy Gateway Prerequisite Before you configure an NPIV proxy gateway NPG with the FC Flex IO module on an MXL 10 40GbE Switch or an M I O Aggregator ensure that the following features are enabled e DCB is enabled by default with the FC Flex IO module on the MXL 10 40GbE Switch or M I O Aggregator e Autonegotiated DCBx is enabled for converged traffic by default with the FC Flex IO module Ethernet ports on all MXL 10 40GbE Switches or M I O Aggregators e FCoE transit with FIP snooping is automatically enabled when you configure Fibre Channel with the FC Flex IO module on the MXL 10 40GbE Switch or M I O Aggregator To configure an NPG operation with the FC Flex IO module on an MXL 10 40GbE Switch or an M I O Aggregator follow these general configuration steps 1 Enabling Fibre Channel Capability on the Switch 2 Creating a DCB map 3 Applying a DCB map on server facing Ethernet ports FC Flex IO Modules 273 Creating an FCoE VLAN Creating an FCoE map Applying an FCoE map on server facing Ethernet ports SN mos Applying an FCoE Map on
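The FCoE map portion of those steps might look like the following sketch; the map name, fabric/VLAN ID, and FC-MAP value are placeholders that must match your SAN fabric, and the configuration-mode prompts are approximate:
Dell(conf)# fcoe-map SAN_FABRIC_A
Dell(conf-fcoe-SAN_FABRIC_A)# fabric-id 1002 vlan 1002
Dell(conf-fcoe-SAN_FABRIC_A)# fc-map 0efc02
Dell(conf-fcoe-SAN_FABRIC_A)# end
After the map exists, apply it to the server-facing Ethernet ports and the FC port, then no shutdown the FC port so the NPG begins sending the FIP advertisements described above.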
352. rature is [value] C approaching shutdown threshold of [value] C
To view the programmed alarm threshold levels, including the shutdown value, use the show alarms threshold command.
NOTE: When the ingress air temperature exceeds 61 C, the Status LED turns Amber and a major alarm is triggered.
Example of the show alarms threshold Command
Dell# show alarms threshold
Temperature Limits (deg C)
       Ingress-Air Off  Ingress-Air  Major Off  Major  Shutdown
Unit0  58               61           84         86     90
Dell#
Troubleshoot an Over-Temperature Condition
To troubleshoot an over-temperature condition, use the following information.
1 Use the show environment commands to monitor the temperature levels.
2 Check air flow through the system. Ensure that the air ducts are clean and that all fans are working correctly.
3 After the software has determined that the temperature levels are within normal limits, you can re-power the card safely. To bring back the line card online, use the power-on command in EXEC mode.
In addition, Dell Networking requires that you install blanks in all slots without a line card to control airflow for adequate system cooling.
NOTE: Exercise care when removing a card; if it has exceeded the major or shutdown thresholds, the card could be hot to the touch.
Example of the show environment Command
Dell# show environment
Unit Environment Status
Unit  Status  Temp  Voltage  TempStatus
0     online  59C   ok       2
Manag
353. rd authenticationorno ip ssh rsa authentication 6 Enable host based authentication CONFIGURATION mode ip ssh hostbased authentication enable 7 Bind shosts and rhosts to host based authentication CONFIGURATION mode ip ssh pub key file flash filenameor ip ssh rhostsfile flash filename Examples of Creating shosts and rhosts The following example shows creating shosts admin Unix client cd etc ssh admin Unix client ls moduli sshd_config ssh host dsa key pub ssh host key pub ssh host rsa key pub ssh config ssh host dsa key ssh host key ssh host rsa key admin Unix client cat ssh host rsa key pub ssh rsa AAAAB3NzaClyc2EAAAABIwAAAIEA8KT7jLZRVfjgHJzUOmXxuIlbZx AyWhVgJDOh39k8v3e8eQOvLnHBIsqIL8jVylOHhUeb7GaDl1JVEDAMz30myqQbJgXBBRTWgBpLWwL doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hLl1by8cYZP2kYS2l1nSyOWk admin Unix client ls id_rsa id_rsa pub shosts admin Unix client cat shosts 10 16 127 201 ssh rsa AAAAB3NzaClyc2EAAAABIwAAAIEA8K7jLZRVfjgHJzUOmXxuIlbZx AyW hVgJDOh39k8v3e8eQvLnHBIsqIL8jVylOHhUeb7GaDl1JVEDAMz30myqQbJgXBBRTWgBpLWwL doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hLl1by8cYZP2kYS2l1nSyOWk The following example shows creating rhosts admin Unix client ls id_rsa id_rsa pub rhosts shosts admin Unix client cat rhosts 10 16 127 201 admin Using Client Based SSH Authentication To SSH from the chassis to the SSH client use the followi
354. refer to the Security chapter in the Dell Networking OS Command Reference Guide Understanding Banner Settings This functionality is supported on the M I O Aggregator A banner is a note that is displayed when you log in to the system depending on the privilege level and the command mode into which the you log in You can specify different banners to be displayed as the message of the day MOTD as the opening quote in EXEC mode or as the beginning message in EXEC Privilege mode Setting up a banner enables you to display any important information or group level notification that needs to be communicated to all the users of the system A login banner message is displayed only in EXEC Privilege mode after entering the enable command followed by the password These banners are not displayed to users in EXEC mode When you connect to a system the message of the day MOTD banner is displayed first followed by the login banner and prompts After you log in to the system with valid authentication credentials the EXEC banner is shown You can use the MOTD banner to alert users of critical upcoming events so that they can plan and schedule their accessibility to the device You can modify the banner messages depending on the requirements or conditions Accessing the I O Aggregator Using the CMC Console Only This functionality is supported on the M I O Aggregator You can enable the option to access and administer an I O Aggregator only using
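A minimal sketch of defining an MOTD and an EXEC banner follows; the delimiter character ^ and the message text are arbitrary examples, not values from this guide:
Dell(conf)# banner motd ^
Scheduled maintenance: Saturday 02:00-04:00 UTC
^
Dell(conf)# banner exec ^
Authorized administrators only.
^
Dell(conf)# end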
355. rent address is 00 01 e8 cc cc ce Interface index is 1107787786 Internet address is not set MTU 1554 bytes IP MTU 1500 bytes LineSpeed auto ARP type ARPA ARP Timeout 04 00 00 178 Simple Network Management Protocol SNMP To display the ports in a VLAN send an snmpget request for the object dotiqStaticEgressPorts using the interface index as the instance number as shown in the following example Example of Viewing the Ports in a VLAN in SNMP snmpget v2c c mycommunity 10 11 131 185 1 3 6 1 2 1 17 7 1 4 3 1 2 1107787786 SNMPv2 SMI mib 2 17 7 1 4 3 1 2 1107787786 Hex STRING 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 The table that the Dell Networking system sends in response to the snmpget request is a table that contains hexadecimal hex pairs each pair representing a group of eight ports Seven hex pairs represent a stack unit Seven pairs accommodate the greatest number of ports available on an Aggregator 12 ports The last stack unit is assigned eight pairs the eight pair is unused The first hex pair 00 in the previous example represents ports 1 to 7 in Stack Unit O The next pair to the right represents ports 8 to 15 To resolve the hex pair into a representation of the individual ports convert the hex pair to binary Consider the first hex pair 00 which resolves to 0000 0000 in binary e Each position
356. ress to a client spoofing the same MAC address on a different relay agent e assign IP addresses according to the relay agent This prevents generating DHCP offers in response to requests from an unauthorized relay agent The server echoes the option back to the relay agent in its response and the relay agent can use the information in the option to forward a reply out the interface on which the request was received rather than flooding it on the entire VLAN The relay agent strips Option 82 from DHCP responses before forwarding them to the client To insert Option 82 into DHCP packets follow this step 68 Dynamic Host Configuration Protocol DHCP Insert Option 82 into DHCP packets CONFIGURATION mode int ma 0 0 ip add dhcp relay information option remote id For routers between the relay agent and the DHCP server enter the trust downstream option Releasing and Renewing DHCP based IP Addresses On an Aggregator configured as a DHCP client you can release a dynamically assigned IP address without removing the DHCP client operation on the interface To manually acquire a new IP address from the DHCP server use the following command e Release a dynamically acquired IP address while retaining the DHCP client configuration on the interface EXEC Privilege mode release dhcp interface type slot port Acquire a new IP address with renewed lease time from a DHCP server EXEC Privilege mode renew dhcp interface type slot p
357. rface TenGigabitEthernet 0 3 Priority Rx XOFF Frames Rx Total Frames Tx Total Frames 50 Data Center Bridging DCB Example of the show interfaces pfc summary Command Dell show interfaces tengigabitethernet 0 4 pfc summary Interface TenGigabitEthernet 0 4 Admin mode is on Admin is enabled Remote is enabled Priority list is 4 Remote Willing Status is enabled Local is enabled Oper status is Recommended PFC DCBx Oper status is Up State Machine Type is Featur TLV Tx Status is enabled PFC Link Delay 45556 pause quantams Application Priority TLV Parameters FCOE TLV Tx Status is disabled ISCSI TLV Tx Status is disabled Local FCOE PriorityMap is 0x8 Local ISCSI PriorityMap is 0x10 Remote FCOE PriorityMap is 0x8 Remote ISCSI PriorityMap is 0x8 Dell show interfaces tengigabitethernet 0 4 pfc detail Interface TenGigabitEthernet 0 4 Admin mode is on Admin is enabled Remote is enabled Remote Willing Status is enabled Local is enabled Oper status is recommended PFC DCBx Oper status is Up State Machine Type is Featur TLV Tx Status is enabled PFC Link Delay 45556 pause quanta Application Priority TLV Parameters FCOE TLV Tx Status is disabled ISCSI TLV Tx Status is disabled Local FCOE PriorityMap is 0x8 Local ISCSI PriorityMap is 0x10 Remote FCOE PriorityMap is 0x8 Remote ISCSI PriorityMap is 0x8 0 Input TLV pkts 1 Output TLV pkts 0 Error pkts 0 Pause T
358. rface level configuration overrides the global configuration To advertise TLVs use the following commands 1 Enter LLDP mode CONFIGURATION or INTERFACE mode protocol lldp 2 Advertise one or more TLVs PROTOCOL LLDP mode advertise dcbx appln tlv dcbx tlv dot3 tlv interface port desc management tlv med Include the keyword for each TLV you want to advertise e For management TLVs system capabilities system description e For 802 1 TLVs port protocol vlan id port vlan id For 802 5 TLVs max frame size For TIA 1057 TLVs guest voice guest voice signaling location identification power via mdi softphone voice streaming video video conferencing 240 PMUX Mode of the IO Aggregator video signaling voice voice signaling In the following example LLDP is enabled globally R1 and R2 are transmitting periodic LLDPDUs that contain management 802 1 and 802 5 TLVs Figure 29 Configuring LLDP Viewing the LLDP Configuration To view the LLDP configuration use the following command e Display the LLDP configuration CONFIGURATION or INTERFACE mode show config Example of Viewing LLDP Global Configurations R1 conf protocol lldp R1 conf lldp show config protocol lldp advertise dot3 tlv max frame size advertise management tlv system capabilities system description hello 10 no disable R1 conf 11dp Example of Viewing LLDP Interfa
359. rformance computing clusters to share information Server traffic is extremely sensitive to latency requirements To ensure lossless delivery and latency sensitive scheduling of storage and service traffic and I O convergence of LAN storage and server traffic over a unified fabric IEEE data center bridging adds the following extensions to a classical Ethernet network e 802 10bb Priority based Flow Control PFC 802 1Qaz Enhanced Transmission Selection ETS e 802 1Qau Congestion Notification Data Center Bridging DCB 27 e Data Center Bridging Exchange DCBx protocol NOTE In Dell Networking OS version 9 4 0 x only the PFC ETS and DCBx features are supported in data center bridging Priority Based Flow Control In a data center network priority based flow control PFC manages large bursts of one traffic type in multiprotocol links so that it does not affect other traffic types and no frames are lost due to congestion When PFC detects congestion on a queue for a specified priority it sends a pause frame for the 802 1p priority traffic to the transmitting device In this way PFC ensures that large amounts of queued LAN traffic do not cause storage traffic to be dropped and that storage traffic does not result in high latency for high performance computing HPC traffic between servers PFC enhances the existing 802 3x pause and 802 1p priority capabilities to enable flow control based on 802 1p priorities classes of ser
360. rom the configuration source ignore their current settings and use the configuration source information Propagation of DCB Information When an auto upstream or auto downstream port receives a DCB configuration from a peer the port acts as a DCBx client and checks if a DCBx configuration source exists on the switch e If a configuration source is found the received configuration is checked against the currently configured values that are internally propagated by the configuration source If the local configuration is compatible with the received configuration the port is enabled for DCBx operation and synchronization e If the configuration received from the peer is not compatible with the internally propagated configuration used by the configuration source the port is disabled as a client for DCBx operation and synchronization and a syslog error message is generated The port Keeps the peer link up and continues to exchange DCBx packets If a compatible configuration is later received from the peer the port is enabled for DCBx operation K NOTE When a configuration source is elected all auto upstream ports other than the configuration source are marked as willing disabled The internally propagated DCB configuration is refreshed on all auto configuration ports and each port may begin configuration negotiation with a DCBx peer again Auto Detection of the DCBx Version The Aggregator operates in auto detection mode so that a DCBX port
361. rrently active sessions Display information on the FCoE VLANs on which FIP snooping is enabled show fip snooping sessions Command Example Dell show fip snooping sessions Enode MAC Enode Intf aa bb cc 00 00 00 Te 0 42 FIP Snooping FCF MAC FCF Intf VLAN aa bb cd 00 00 00 Te 0 43 100 77 aa bb cc 00 00 00 Te 0 42 aa bb cd 00 00 00 Te 0 43 100 aa bb cc 00 00 00 Te 0 42 aa bb cd 00 00 00 Te 0 43 100 aa bb cc 00 00 00 Te 0 42 aa bb cd 00 00 00 Te 0 43 100 aa bb cc 00 00 00 Te 0 42 aa bb cd 00 00 00 Te 0 43 100 FCoE MAC FC ID Port WWPN Port WWNN 0e c 00 01 00 01 01 00 01 31 00 0e fc 00 00 00 00 21 00 0e c 00 00 00 00 0e fc 00 01 00 02 01 00 02 41 00 0e fc 00 00 00 00 21 00 0e fc 00 00 00 00 0e fc 00 01 00 03 01 00 03 41 00 0e fc 00 00 00 01 21 00 0e fc 00 00 00 00 0e fc 00 01 00 04 01 00 04 41 00 0e fc 00 00 00 02 21 00 0e fc 00 00 00 00 0e fc 00 01 00 05 01 00 05 41 00 0e fc 00 00 00 03 21 00 0e fc 00 00 00 00 show fip snooping sessions Command Description Field Description ENode MAC MAC address of the ENode ENode Interface Slot port number of the interface connected to the ENode FCF MAC MAC address of the FCF FCF Interface Slot port number of the interface to which the FCF is connected VLAN VLAN ID number used by the session FCoE MAC MAC address of the FCoE session assigned by the FCF FC ID Fibre Channel ID assigned by the FCF Port WWPN Worldwide port name of the CNA port Por
362. rs Example of Viewing Details Advertised by Neighbors R1 conf if te 1 31 lldp end R1 conf if te 1 31 do show lldp neighbors Loc PortID Rem Host Name Rem Port Id Rem Chassis Id 144 Link Layer Discovery Protocol LLDP Te 0 2 Te 0 3 00 00 c9 b1 3b 82 00 00 c9 ad 6 12 00 00 c9 b1 3b 82 00 00 c9 ad 6 12 Dell show lldp neighbors detail Local Interface Te Total Frames Out Total Frames In a 0 2 has 1 neighbor 16843 17464 Total Neighbor information Age outs 0 Total Multiple Neighbors Detected 0 Total Frames Discarded 0 Total In Error Frames 0 Total Unrecognized TLVs 0 Total TLVs Discarded 0 Next packet will E Remote Chassis Remote Chassis Remote Remote Port ID Local Port ID Remote TTL 120 Information val Time Remot Port Subtype since last information change of this neighbor be sent after 16 seconds The neighbors are given below ID Subtype Mac address 4 TD 00 00 2c9 b1 3b 782 ac address 3 00 00 c9 b1 3b 82 TenGigabitEthernet 0 2 Locally assigned remote Neighbor Index 7 id for next 105 seconds 1d21h56m Enabled System System Desc Existing System Capabilities Emulex OneConnect 10Gb Multi function Adapter Station only Capabilities Station only Local Interface Te Total Frames Out Total Frames In otal Neighbor in Total Multiple Ne Total Frames Disc Total In
363. rt channel status System MAC mismatch Unit ID mismatch Version ID mismatch VLT LAG ID is not configured on one VLT peer VLT LAG ID mismatch 258 Behavior at Peer Up N A A syslog error message and an SNMP trap are generated The VLT peer does not boot up The VLTi is forced to a down state A syslog error message is generated A syslog error message and an SNMP trap are generated A syslog error message is generated The peer with the VLT configured remains active The VLT port channel is brought down A syslog error message is generated Behavior During Run Time N A A syslog error message and an SNMP trap are generated The VLT peer does not boot up The VLTi is forced to a down state A syslog error message is generated A syslog error message and an SNMP trap are generated A syslog error message is generated The peer with the VLT configured remains active The VLT port channel is brought down A syslog error message is generated Action to Take Use the show vlt detail and show vlt brief commands to view the VLT port channel status information Verify that the unit ID of VLT peers is not the same on both units and that the MAC address is the same on both units Verify the unit ID is correct on both VLT peers Unit ID numbers must be sequential on peer units for example if Peer 1 is unit ID 0 Peer 2 unit ID must be T Verify the Dell N
364. rtual local area network VLAN configurations NOTE Diagnostic is not allowed in Stacking mode including member stacking Avoid stacking before executing the diagnostic tests in the chassis Important Points to Remember You can only perform offline diagnostics on an offline standalone unit You cannot perform diagnostics if the ports are configured in a stacking group Remove the port s from the stacking group before executing the diagnostic test Diagnostics only test connectivity not the entire data path Diagnostic results are stored on the flash of the unit on which you performed the diagnostics When offline diagnostics are complete the unit or stack member reboots automatically Running Offline Diagnostics To run offline diagnostics use the following commands For more information refer to the examples following the steps 1 Place the unit in the offline state EXEC Privilege mode offline stack unit You cannot enter this command on a MASTER or Standby stack unit K NOTE The system reboots when the offline diagnostics complete This is an automatic process The following warning message appears when you implement the offline stack unit command Warning offline of unit will bring down all the protocols and the unit will be operationally down except for running Diagnostics Please make sure that stacking fanout not configured for Diagnostics execution Also reboot online command is necessary for normal opera
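A minimal sketch of the offline diagnostics flow on a standalone unit is shown below; the keyword options of the diagnostics command itself vary by release, so treat the second line as an assumption to verify against your CLI reference:
Dell# offline stack-unit 0
Dell# diag stack-unit 0
When the diagnostics finish, the unit reboots automatically as noted above, and the results are stored on the unit's flash.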
365. run O discarded Statistics packets 0 bytes 0 underruns 64 byte pkts 0 over 64 byte pkts 0 over 127 byte pkts over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts Multicasts 0 Broadcasts 0 Unicasts throttles 0 discarded 0 collisions 0 wreddrops Rate info interval 299 seconds Input 00 00 Mbits sec 0 packets sec 0 00 of line rate Outpu oo0oooofoooooo 130 Link Aggregation Output 00 00 Mbits sec 0 packets sec 0 00 of line rate Time since last interface status change 05 22 28 Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode When you use the write memory command while an Aggregator operates in VLT mode the VLT LAG configurations are saved in nonvolatile storage NVS By restoring the settings saved in NVS the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced The delay in restoring the VLT LAG parameters is reduced 90 seconds by default on the secondary VLT peer node before it becomes operational This makes sure that the configuration settings of the primary VLT node are synchronized with the secondary VLT peer node before the secondary VLT mode is up The traffic outage is less than 200 millisconds during the restart or switchover of the VLT peer nodes from primary to secondary Preserving LAG and Port Channel Settings in Nonvolatile Storage Use the write memory command on an I O Aggregator which operates in ei
366. runk the VLT management system uses the backup link interface to determine whether the failure is a link level failure or whether the remote peer has failed entirely If the remote peer is still alive heartbeat messages are still being received the VLT secondary switch disables its VLT port channels If keepalive messages from the peer are not being received the peer continues to forward traffic assuming that it is the last device available in the network In either case after recovery of the peer link or reestablishment of message forwarding across the interconnect trunk the two VLT peers resynchronize any MAC addresses learned while communication was interrupted and the VLT system continues normal data forwarding Ifthe primary chassis fails the secondary chassis takes on the operational role of the primary e The SNMP MIB reports VLT statistics Primary and Secondary VLT Peers Primary and Secondary VLT Peers are supported on the Aggregator To prevent issues when connectivity between peers is lost you can designate Primary and Secondary roles for VLT peers You can elect or configure the Primary Peer By default the peer with the lowest MAC address is selected as the Primary Peer If the VLTi link fails the status of the remote VLT Primary Peer is checked using the backup link If the remote VLT Primary Peer is available the Secondary Peer disables all VLT ports to prevent loops If all ports in the VLTi link fail or if the comm
367. ry protocol LLDP advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN LLDP facilitates multi vendor interoperability by using standard management tools to discover and make available a physical topology for network management The Dell Networking operating software implementation of LLDP is based on IEEE standard 801 1ab The starting point for using LLDP is invoking LLDP with the protocol lldp command in either CONFIGURATION or INTERFACE mode The information LLDP distributes is stored by its recipients in a standard management information base MIB You can access the information by a network management system through a management protocol such as simple network management protocol SNMP An Aggregator auto configures to support the link layer discovery protocol LLDP for the auto discovery of network devices You can use CLI commands to display acquired LLDP information clear LLDP counters and debug LACP operation Configure LLDP Configuring LLDP is a two step process 1 Enable LLDP globally 2 Advertise TLVs out of an interface Related Configuration Tasks Viewing the LLDP Configuration Viewing Information Advertised by Adjacent LLDP Agents e Configuring LLDPDU Intervals e Configuring a Time to Live Debugging LLDP Important Points to Remember e LLDP is enabled by default e Dell Networking systems support up to eight neighbors per interface
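In their simplest form, those two steps look like the following sketch; the interface number is a placeholder and the prompts are abbreviated:
Dell(conf)# protocol lldp
Dell(conf-lldp)# no disable
Dell(conf-lldp)# exit
Dell(conf)# interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)# protocol lldp
Dell(conf-if-te-0/2-lldp)# advertise management-tlv system-description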
368. ry to obtain the correct power information The following example shows the configuration and status information for one interface Dell show interface tengig 1 16 TenGigabitEthernet 1 16 is up line protocol is up Hardware is DellForcel0Eth address is 00 01 e8 00 ab 01 Current address is 00 01 e8 00 ab 01 Server Port AdminState is Up Pluggable media not present Interface index is 71635713 Interfaces 93 Internet address is not set ode of IP Address Assignment NON DHCP Client ID tenG2730001e800ab01 MTU 12000 bytes IP MTU 11982 bytes LineSpeed 1000 Mbit Flowcontrol rx off tx off ARP type ARPA ARP Timeout 04 00 00 Last clearing of show interface counters 11 04 02 Queueing strategy fifo Input Statistics 0 packets 0 bytes 0 64 byte pkts 0 over 64 byte pkts 0 over 127 byte pkts 0 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 0 0 El ulticasts O Broadcasts runts 0 giants 0 throttles O CRC 0 overrun 0 discarded Output Statistics 14856 packets 2349010 bytes 0 underruns 0 64 byte pkts 4357 over 64 byte pkts 8323 over 127 byte pkts 2176 over 255 byte pkts 0 over 511 byte pkts 0 over 1023 byte pkts 12551 Multicasts 2305 Broadcasts 0 Unicasts 0 throttles 0 discarded 0 collisions 0 wreddrops Rate info interval 299 seconds Input 00 00 Mbits sec 0 packets sec 0 00 of line rate Output 00 00 Mbits sec 0 packets sec 0 00 of line rate Time since las
369. s ETS is enabled by default with the default ETS configuration applied all dot1p priorities in the same group with equal bandwidth allocation ETS Operation with DCBx In DCBx negotiation with peer ETS devices ETS configuration is handled as follows ETS TLVs are supported in DCBx versions CIN CEE and IEEE2 5 ETS operational parameters are determined by the DCBX port role configurations ETS configurations received from TLVs from a peer are validated In case of a hardware limitation or TLV error DCBx operation on an ETS port goes down New ETS configurations are ignored and existing ETS configurations are reset to the previously configured ETS output policy on the port or to the default ETS settings if no ETS output policy was previously applied ETS operates with legacy DCBx versions as follows Data Center Bridging DCB 43 Inthe CEE version the priority group traffic class group TCG ID 15 represents a non ETS priority group Any priority group configured with a scheduler type is treated as a strict priority group and is given the priority group TCG ID 15 The CIN version supports two types of strict priority scheduling Group Strict priority Allows a single priority flow in a priority group to increase its bandwidth usage to the bandwidth total of the priority group A single flow in a group can use all the bandwidth allocated to the group Link strict priority Allows a flow in any priorit
370. s Indicates which management TLVs are enabled for system ports The management addresses defined for the system and the ports through which they are enabled for transmission Total number of times that a neighbor s information is deleted on the local system due to an rxInfoTTL timer expiration Total number of LLDP frames received then discarded Total number of LLDP frames received on a port with errors 147 MIB Object Category LLDP Variable statsFramesInTotal statsFramesOutTotal statsTLVsDiscardedTotal statsTLVsUnrecognizedTota l Table 10 LLDP System MIB Objects LLDP MIB Object lldpStatsRxPortFramesTotal lldpStatsTxPortFramesTotal lldpStatsRxPortTLVsDiscard edTotal lldpStatsRxPortTLVsUnreco gnizedTotal TLV Type TLV Name TLV Variable System 1 Chassis ID chassis ID subtype Local Remote chassid ID Local Remote 2 Port ID port subtype Local Remote port ID Local Remote 4 Port Description port description Local Remote 5 System Name system name Local Remote 6 System Description system description Local Remote 7 System Capabilities system capabilities Local Remote 148 Description Total number of LLDP frames received through the port Total number of LLDP frames transmitted through the port Total number of TLVs received then discarded Total number of all TLVs the local agent does not recognize LLDP MIB Object lldoLocChassisldSub type ldpRemChassisldSu
371. s e To achieve complete lossless handling of traffic enable PFC operation on ingress port traffic and on all DCB egress port traffic e All 802 1p priorities are enabled for PFC Queues to which PFC priority traffic is mapped are lossless by default Traffic may be interrupted due to an interface flap going down and coming up For PFC to be applied on an Aggregator port the auto configured priority traffic must be supported by a PFC peer as detected by DCBx e A DCB input policy for PFC applied to an interface may become invalid if dotlp queue mapping is reconfigured refer to Create Input Policy Maps This situation occurs when the new dotlp queue assignment exceeds the maximum number 2 of lossless queues supported globally on the switch In this case all PFC configurations received from PFC enabled peers are removed and re synchronized with the peer devices Dell Networking OS does not support MACsec Bypass Capability MBC How Enhanced Transmission Selection is Implemented Enhanced transmission selection ETS provides a way to optimize bandwidth allocation to outbound 802 1p classes of converged Ethernet traffic Different traffic types have different service needs Using ETS groups within an 802 1p priority class are auto configured to provide different treatment for traffic with different bandwidth latency and best effort needs For example storage traffic is sensitive to frame loss interprocess communication IPC tr
372. s P Primary C Community I Isolated O Openflow U Untagged T Tagged Dotlx untagged X Dotlx tagged OpenFlow untagged O OpenFlow tagged GVRP tagged M Vlan stack H VSN tagged Internal untagged I Internal tagged v VLT untagged VLT tagged lt P00x0 l NUM Status Description Q Ports 1 Active U Po20 Fo 0 33 37 U Po21 Fo 0 49 1000 Active T Po20 Fo 0 33 37 1001 Active T Po21 Fo 0 49 Dell Uplink Failure Detection UFD UFD provides detection of the upstream connectivity loss and if used with network interface controller NIC teaming automatic recovery from a failed link 1 Create the UFD group and associate the downstream and upstream ports Dell configure Dell conf uplink state group 1 Dell conf uplink state group 1 Dell conf uplink state group 1 upstream port channel 128 Dell conf uplink state group 1 downstream tengigabitethernet 0 1 32 2 Show the running configurations in the UFD group 1 Dell conf uplink state group 1 show config uplink state group 1 downstream TenGigabitEthernet 0 1 32 upstream Port channel 128 Dell conf uplink state group 1 3 Show the UFD status Dell show uplink state group detail Up Interface up Dwn Interface down Dis Interface disabled Uplink State Group 1 Status Enabled Up Upstream Interfaces Po 128 Up Downstream Interfaces Te 0 1 Dwn Te 0 2 Dwn Te 0 3 Dwn Te 0 4 Up Te 0 5 Up
373. s use the following command e View TACACS transactions to troubleshoot problems EXEC Privilege mode debug tacacs TACACS Remote Authentication When configuring a TACACS server host you can set different communication parameters such as the key password Example of Specifying a TACACS Server Host Dell conf Dell conf aaa authentication login tacacsmethod tacacs Dell conf aaa authentication exec tacacsauthorization tacacs Dell conf tacacs server host 25 1 1 2 key Force Dell conf Dell conf line vty 0 9 Dell config line vty login authentication tacacsmethod Dell config line vty end Specifying a TACACS Server Host To specify a TACACS server host and configure its communication parameters use the following command Enter the host name or IP address of the TACACS server host CONFIGURATION mode tacacs server host hostname ip address port port number timeout seconds key key Configure the optional communication parameters for the specific host port port number the range is from O to 65335 Enter a TCP port number The default is 49 timeout seconds the range is from O to 1000 Default is 10 seconds key key enter a string for the key The key can be up to 42 characters long This key must match a key configured on the TACACS server host This parameter must be the last parameter you configure If you do not configure these optional parameters the defa
374. s local configuration with the new parameter values When an auto upstream port besides the configuration source receives and overwrites its configuration with internally propagated information one of the following actions is taken e fthe peer configuration received is compatible with the internally propagated port configuration the link with the DCBx peer is enabled e If the received peer configuration is not compatible with the currently configured port configuration the link with the DCBX peer port is disabled and a syslog message for an incompatible configuration is generated The network administrator must then reconfigure the peer device so that it advertises a compatible DCB configuration The configuration received from a DCBX peer or from an internally propagated configuration is not stored in the switch s running configuration On a DCBX port in an auto upstream role the PFC and application priority TLVs are enabled ETS recommend TLVs are disabled and ETS configuration TLVs are enabled The port advertises its own configuration to DCBx peers but is not willing to receive remote peer configuration The port always accepts internally propagated configurations from a configuration source An auto downstream port that receives an internally propagated configuration overwrites its local configuration with the new parameter values When an auto downstream port receives and overwrites its configuration with internally propagat
375. s to be applied to that range of interfaces The interface range prompt offers the interface with slot and port information for valid interfaces The maximum size of an interface range prompt is 32 If the prompt size exceeds this maximum it displays at the end of the output K NOTE Non existing interfaces are excluded from interface range prompt K NOTE When creating an interface range interfaces appear in the order they were entered and are not sorted To display all interfaces that have been validated under the interface range context use the show range in Interface Range mode To display the running configuration only for interfaces that are part of interface range use the show configuration command in Interface Range mode Bulk Configuration Examples The following are examples of using the interface range command for bulk configuration e Create a Single Range e Create a Multiple Range e Exclude a Smaller Port Range e Overlap Port Ranges Interfaces 105 e Commas Create a Single Range Creating a Single Range Bulk Configuration Dell conf interface range tengigabitethernet 0 1 23 Dell conf if range te 0 1 23 no shutdown Dell conf if range te 0 1 23 Create a Multiple Range Creating a Multiple Range Prompt Dell conf interface range tengigabitethernet 0 5 10 tengigabitethernet 0 1 vlan 1 Dell conf if range te 0 5 10 te 0 1 v1 1 Exclude a Smaller Por
376. sabling DCB To configure the Aggregator so that all interfaces are DCB disabled and flow control enabled use the no dcb enable command dcb enable auto detect on next reload Command Example Dell dcb enable auto detect on next reload QoS dotip Traffic Classification and Queue Assignment DCB supports PFC ETS and DCBx to handle converged Ethernet traffic that is assigned to an egress queue according to the following QoS methods Honor dotip dot1p priorities in ingress traffic are used at the port or global switch level Layer 2 class dot1p priorities are used to classify traffic in a class map and apply a service policy maps to an ingress port to map traffic to egress queues K NOTE Dell Networking does not recommend mapping all ingress traffic to a single queue when using PFC and ETS However Dell Networking does recommend using Ingress traffic classification using the service class dynamic dotip command honor dot1p on all DCB enabled interfaces If you use L2 class maps to map dotlp priority traffic to egress queues take into account the default dotlp queue assignments in the following table and the maximum number of two lossless queues supported on a port Although the system allows you to change the default dotip priority queue assignments DCB policies applied to an interface may become invalid if you reconfigure dot1p queue mapping If the configured DCB policy remains valid the change in the dotip queue assignment is allowed For
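For example, to disable DCB globally (re-enabling link-level flow control) and to classify ingress traffic by dot1p on an interface, the commands named above can be applied as follows (the interface number is a placeholder):
Dell(conf)# no dcb enable
Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# service-class dynamic dot1p
Dell(conf-if-te-0/4)# end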
377. se DCBx to exchange and negotiate parameters with peer devices DCBx capabilities include Discovery of DCB capabilities on peer device connections e Determination of possible mismatch in DCB configuration on a peer link e Configuration of a peer device over a DCB link DCBx requires the link layer discovery protocol LLDP to provide the path to exchange DCB parameters with peer devices Exchanged parameters are sent in organizationally specific TL Vs in LLDP data units For more information refer to Link Layer Discovery Protocol LLDP The following LLDP TLVs are supported for DCB parameter exchange PFC PFC Configuration TLV and Application Priority Configuration TLV parameters ETS parameters ETS Configuration TLV and ETS Recommendation TLV Data Center Bridging DCB 57 Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface Ingress Traffic e Egress Traffic e p Figure 3 DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization Data center bridging supports converged enhanced Ethernet CEE in a data center network DCB is disabled by default It must be enabled to support CEE e Priority based flow control e Enhanced transmission selection Data center bridging exchange protocol e FCoE initialization protocol FIP snooping DCB processes virtual
378. FIP Snooping on VLANs ... 75
FC-MAP Value ... 75
Bridge-to-FCF Links ... 76
Impact on Other Software Features ... 76
FIP Snooping Prerequisites
FIP Snooping Restrictions
Displaying FIP Snooping Information ... 77
FIP Snooping Example ... 83
Debugging FIP Snooping ... 84
7 Internet Group Management Protocol (IGMP) ... 85
IGMP Overview
IGMP Version 2
Joining a Multicast Group
Leaving a Multicast Group
IGMP Version 3
Joining and Filtering Groups and Sources
Leaving and Staying in Groups
IGMP Snooping
How IGMP Snooping is Implemented on an Aggregator ... 89
Disabling Multicast Flooding ... 90
Displaying IGMP Information ... 90
8 Interfaces
Basic Interface Configuration
Advanced Interface Configuration
Interface Auto-Configuration
Interface Types
Viewing Interface Information
Disabling and Re-enabling a Physical Interface
Layer 2 Mode
Management Interfaces
Accessing an Aggregator
Configuring a Management Interface
379. shed between the FC Flex I O module and the Cisco MDS switch e The FC Flex I O module sends a proxy FLOGI request to the upstream F Port of the FC switch or the MDS switch The F port accepts the proxy FLOGI request for the FC Flex IO virtual N Port The converged network adapters CNAs are brought online and the FIP application is run Discovery of the VLAN and FCF MAC addresses is completed e The CNA sends a FIP fabric login FLOGI request to the FC Flex IO module which converts FLOGI to FDISC messages or processes any internally generated FC frames and sends these messages to the SAN environment e When the FC fabric discovery FDISC accept message is received from the SAN side the FC Flex IO module converts the FDISC message again into an FLOGI accept message and transmits it to the CNA e Internal tables of the switch are then programmed to enable the gateway device to forward FCoE traffic directly back and forth between the devices e The FC Flex IO module sends an FC or FCoE registered state change notification RSCN message to the upstream or downstream devices whenever an error occurs in the appropriate direction e AnF Portis a port on an FC switch that connects to an N Port of an FC device and is called a fabric port By default the NPIV functionality is disabled on the Cisco MDS switch enable this capability before you connect the FC port of the MXL or I O Aggregator to these upstream switches Data Center Brid
380. ...physical interfaces or selected ones. Without an interface specified, the command clears all interface counters.
(OPTIONAL) Enter the following interface keywords and slot/port or number information:
• For a Loopback interface, enter the keyword loopback followed by a number from 0 to 16383.
• For a Port Channel interface, enter the keyword port-channel followed by a number from 1 to 128.
• For a 10-Gigabit Ethernet interface, enter the keyword TenGigabitEthernet followed by the slot/port numbers.
• For a VLAN, enter the keyword vlan followed by a number from 1 to 4094.
When you enter this command, you must confirm that you want Dell Networking OS to clear the interface counters for the interface; refer to the following example.
Clearing Counters on an Interface:
Dell# clear counters tengig 0/0
Clear counters on TenGigabitEthernet 0/0 [confirm]
Enabling the Management Address TLV on All Interfaces of an Aggregator
The management address TLV, which is an optional TLV of type 8, denotes the network address of the management interface and is supported by the Dell Networking OS. It is advertised on all the interfaces on an I/O Aggregator in the Link Layer Discovery Protocol (LLDP) data units. You can use the show running-configuration command to verify that this TLV is advertised on all the configured interfaces, and the show lldp neighbors detail command to view the value of this TLV.
Enhanced Va...
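For example, the two verification commands mentioned above might be run as follows; no arguments are required, although the output can be long on a fully cabled Aggregator.

Dell# show running-config
Dell# show lldp neighbors detail

The first command confirms that the management address TLV is advertised on the configured interfaces, and the second displays the value of the TLV learned from each LLDP neighbor.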
381. simplify and speed up this process This OID provides 4000 VLAN entries with port membership bit map for each VLAN and reduces the scan for 4000 X Number of ports to 4000 Enhancements 1 ThedotiqVlanCurrenti interfaces EgressPorts MIB attribute has been enhanced to support logical LAG Current status OID in standard VLAN MIB is accessible over SNMP The bitmap supports 42 bytes for physical ports and 16 bytes for the LAG interfaces up to a maximum of 128 LAG interfaces 4 A59 byte buffer bitmap is supported and in that bitmap e First 42 bytes represent the physical ports e Next 16 bytes represent logical ports 1 128 Simple Network Management Protocol SNMP 185 e An additional 1 byte is reserved for future Fetching the Switchport Configuration and the Logical Interface Configuration Important Points to Remember The SNMP should be configured in the chassis and the chassis management interface should be up with the IP address e If a port is configured in a VLAN the respective bit for that port will be set to 1 in the specific VLAN e Inthe aggregator all the server ports and uplink LAG 128 will be in switchport Hence the respective bits are set to 1 The following output is for the default VLAN Example of dot1qVlanCurrentUntaggedPorts output snmpwalk Os c public v 1 10 16 151 151 1 3 6 1 2 1 17 7 1 4 2 1 5 mib 2 17 7 1 4 2 1 5 0 1107525633 Hex STRING FFFFFFFF 00 00 00 00 00 00 00 00 0
382. ...sion: a DCBx TLV from a remote peer was expected, but a different, conflicting DCBx version was received.
DCBx_PFC_PARAMETERS_MATCH and DSM_DCBx_PFC_PARAMETERS_MISMATCH: A local DCBx port received a compatible (match) or incompatible (mismatch) PFC configuration from a peer.
DCBx_ETS_PARAMETERS_MATCH and DSM_DCBx_ETS_PARAMETERS_MISMATCH: A local DCBx port received a compatible (match) or incompatible (mismatch) ETS configuration from a peer.
DP_UNRECOGNISED_DCBx_TLV_RECEIVED: A local DCBx port received an unrecognized DCBx TLV from a peer.
Debugging DCBx on an Interface
To enable DCBx debug traces for all or specific control paths, use the following command in EXEC Privilege mode:
debug dcbx {all | auto-detect-timer | config-exchng | fail | mgmt | resource | sem | tlv}
• all: enables all DCBx debugging operations
• auto-detect-timer: enables traces for DCBx auto-detect timers
• config-exchng: enables traces for DCBx configuration exchanges
• fail: enables traces for DCBx failures
• mgmt: enables traces for DCBx management frames
• resource: enables traces for DCBx system resource frames
• sem: enables traces for the DCBx state machine
• tlv: enables traces for DCBx TLVs
Verifying the DCB Configuration
To display DCB configurations, use the following sho...
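Following the syntax above, DCBx debugging might be turned on for all operations or for a single control path, for example:

Dell# debug dcbx all
Dell# debug dcbx config-exchng

These are EXEC Privilege commands; disable debugging again (typically with the no form of the command) when you are finished, because debug traces add processing overhead.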
383. sion line has its own idle time If the idle time value is not changed the default value of 30 minutes is used RADIUS specifies idle time allow for a user during a session before timeout When a user logs in the lower of the two idle time values configured or default is used The idle time value is updated if both of the following happens The administrator changes the idle time of the line on which the user has logged in e The idle time is lower than the RADIUS returned idle time Configuration Task List for RADIUS To authenticate users using RADIUS you must specify at least one RADIUS server so that the system can communicate with and configure RADIUS as one of your authentication methods The following list includes the configuration tasks for RADIUS Defining a AAA Method List to be Used for RADIUS mandatory Applying the Method List to Terminal Lines mandatory except when using default lists e Specifying a RADIUS Server Host mandatory Setting Global Communication Parameters for all RADIUS Server Hosts optional e Monitoring RADIUS optional For a complete listing of all Dell Networking OS commands related to RADIUS refer to the Security chapter in the Dell Networking OS Command Reference Guide Es NOTE RADIUS authentication and authorization are done in a single step Hence authorization cannot be used independent of authentication However ifyou have configured RADIUS authorization
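As a hedged sketch of the first two mandatory tasks listed above, the following defines a RADIUS server host and a set of global communication parameters; the server address, shared key, and timer values are placeholders only.

Dell(conf)# radius-server host 10.10.10.5
Dell(conf)# radius-server key radius-secret
Dell(conf)# radius-server timeout 5
Dell(conf)# radius-server retransmit 3

Defining a method list and applying it to terminal lines is covered under AAA authentication; always verify the exact options against the Security chapter of the Dell Networking OS Command Reference Guide.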
384. ssor computing within server clusters only one DCB enabled network is required in a data center The Dell Networking switches that support a unified fabric and consolidate multiple network infrastructures use a single input output I O device called a converged network adapter CNA A CNA is a computer input output device that combines the functionality of a host bus adapter HBA with a network interface controller NIC Multiple adapters on different devices for several traffic types are no longer required Data center bridging satisfies the needs of the following types of data center traffic in a unified fabric e LAN traffic consists of a large number of flows that are generally insensitive to latency requirements while certain applications such as streaming video are more sensitive to latency Ethernet functions as a best effort network that may drop packets in case of network congestion IP networks rely on transport protocols for example TCP for reliable data transmission with the associated cost of greater processing overhead and performance impact e Storage traffic based on Fibre Channel media uses the SCSI protocol for data transfer This traffic typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss To successfully transport storage traffic data center Ethernet must provide no drop service with lossless links Servers use InterProcess Communication IPC traffic within high pe
385. switch goes off line the standby replaces it as the new master K NOTE For the Aggregator the entire stack has only one management IP address Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address The unit with the higher MAC value becomes master To view which switch is the stack master enter the show system command The following example shows sample output from an established stack A change in the stack master occurs when e You power down the stack master or bring the master switch offline e A failover of the master switch occurs e You disconnect the master switch from the stack K NOTE When a stack reloads and all the units come up at the same time for example when all units boot up from flash all units participate in the election and the master and standby are chosen based on the priority on the MAC address When the units do not boot up at the same time such as when some units are powered down just after reloading and powered up later to join the stack they do not participate in the election process even though the units that boot up late may have a higher priority configured This happens because the master and standby have already been elected hence the unit that boots up late joins only as a member Also when an up and running standalone unit or stack is merged with another stack based on election the losing stack reloads and the master unit of the winning stack
386. t CONFIGURATION mode ip ssh server port number On Chassis One enable SSH CONFIGURATION mode ip ssh server enable On Chassis Two invoke SCP CONFIGURATION mode copy scp flash On Chassis Two in response to prompts enter the path to the desired file and enter the port number specified in Step 1 EXEC Privilege mode Example of Using SCP to Copy from an SSH Server on Another Switch Other SSH related commands include crypto key generate generate keys for the SSH server debug ip ssh enables collecting SSH debug information ip scp topdir identify a location for files used in secure copy transfer ip ssh authentication retries configure the maximum number of attempts that should be used to authenticate a user ip ssh connection rate limit configure the maximum number of incoming SSH connections per minute ip ssh hostbased authentication enable enable host based authentication for the SSHv2 server ip ssh key size configure the size of the server generated RSA SSHv1 key ip ssh password authentication enable enable password authentication for the SSH server ip ssh pub key file specify the file the host based authentication uses ip ssh rhostsfile specify the rhost file the host based authorization uses ip ssh rsa authentication enable enable RSA authentication for the SSHv2 server ip ssh rsa authentication add keys for the RSA authentication Security for M I O Aggregator 169 show crypto dis
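Putting the steps above together, a hedged sketch of a copy from Chassis One (the SSH/SCP server) to Chassis Two; the non-default port number is an example only, and the interactive prompts ask for the remote address, user name, password, source file, and port.

! On Chassis One
Dell(conf)# ip ssh server port 2260
Dell(conf)# ip ssh server enable
! On Chassis Two
Dell# copy scp: flash: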
387. t Range If the interface range has multiple port ranges the smaller port range is excluded from the prompt Interface Range Prompt Excluding a Smaller Port Range Dell conf interface range tengigabitethernet 2 0 23 tengigab 2 1 10 Dell conf if range te 2 0 23 Overlap Port Ranges If overlapping port ranges are specified the port range is extended to the smallest start port number and largest end port number Interface Range Prompt Including Overlapping Port Ranges Dell conf tinte ra tengig 2 1 11 tengig 2 1 23 Dell conf if range te 2 1 23 Commas The example below shows how to use commas to add different interface types to the range enabling all Ten Gigabit Ethernet interfaces in the range 0 1 to 0 23 and both Ten Gigabit Ethernet interfaces 1 1 and 1 2 Multiple Range Bulk Configuration Gigabit Ethernet and Ten Gigabit Ethernet Dell conf if interface range tengigabitethernet 0 1 23 tengigabitethernet 1 1 2 Dell conf if range te 0 1 23 no shutdown Dell conf if range te 0 1 23 106 Interfaces Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command This command displays an ongoing list of the interface status up down number of packets traffic statistics etc Command Syntax Command Mode Purpose monitor interface interface EXEC Privilege View interface statistics Enter the type of interface and slot port information
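For example, to watch the statistics of a single server-facing port (placeholder number), you might enter the following; the display refreshes continuously until you quit.

Dell# monitor interface tengigabitethernet 0/1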
388. t WWNN Worldwide node name of the CNA port show fip snooping config Command Example Dell show fip snooping config FIP Snooping Feature enabled Status Enabled FIP Snooping Global enabled Status Enabled Global FC MAP Value OXOEFCOO FIP Snooping enabled VLANs VLAN Enabled FC MAP 100 TRUE OXOEFCOO show fip snooping enode Command Example Dell show fip snooping enode Enode MAC Enode Interface FCF MAC VLAN FC ID d4 ae 52 1b e3 cd Te 0 11 54 7f ee 37 34 40 100 62 00 11 show fip snooping enode Command Description 78 FIP Snooping Field ENode MAC ENode Interface Description MAC address of the ENode Slot port number of the interface connected to the ENode FCF MAC MAC address of the FCF VLAN VLAN ID number used by the session FC ID Fibre Channel session ID assigned by the FCF show fip snooping fcf Command Example Dell show fip snooping fcf FCF MAC FCF Interface VLAN FC MAP FKA ADV PERIOD No of Enodes 54 7 ee 37 34 40 Po 22 100 0e fc 00 4000 2 show fip snooping fcf Command Description Field FCF MAC FCF Interface VLAN FC MAP ENode Interface FKA ADV PERIOD No of ENodes FC ID Description MAC address of the FCF Slot port number of the interface to which the FCF is connected VLAN ID number used by the session FC Map value advertised by the FCF Slot number of the interface connected to the ENode Period of time in milliseconds during which FIP keep alive
389. ... 140
13 Link Layer Discovery Protocol (LLDP) ... 141
Overview ... 141
Protocol Data Units ... 141
Optional TLVs
Management TLVs ... 143
LLDP Operation ... 143
Viewing the LLDP Configuration ... 144
Viewing Information Advertised by Adjacent LLDP Agents ... 144
Clearing LLDP Counters ... 145
Debugging LLDP ... 146
Relevant Management Objects ... 147
14 Port Monitoring ... 153
Configuring Port Monitoring ... 153
Important Points to Remember ... 154
Port Monitoring ... 155
15 Security for M I/O Aggregator ... 156
Understanding Banner Settings ... 156
Accessing the I/O Aggregator Using the CMC Console Only ... 156
AAA Authentication ... 157
Configuration Task List for AAA Authentication ... 157
RADIUS ... 159
RADIUS Authentication ... 160
Configuration Task...
390. t interface status change 11 01 23 To view only configured interfaces use the show interfaces configured command in EXEC Privilege mode To determine which physical interfaces are available use the show running config command in EXEC mode This command displays all physical interfaces available on the switch which is as shown in the following example Dell show running config Current Configuration Version E8 3 17 38 Last configuration change at Tue Jul 24 20 48 55 2012 by default boot system stack unit 1 primary tftp 10 11 9 21 dv m1000e 2 b2 boot system stack unit 1 default system A boot system gateway 10 11 209 62 redundancy auto synchronize full Service timestamps log datetime l hostname FTOS username root password 7 d7acc8aldcd4f698 privilege 15 mac address table aging time 300 stack unit 1 provision I O Aggregator Stack unit 1 port 33 portmode quad Stack unit 1 port 37 portmode quad 35Mores 94 Interfaces Disabling and Re enabling a Physical Interface By default all port interfaces on an Aggregator are operationally enabled no shutdown to send and receive Layer 2 traffic You can reconfigure a physical interface to shut it down by entering the shutdown command To re enable the interface enter the no shutdown command Step Command Syntax Command Mode Purpose 1 interface interface CONFIGURATION Enter the keyword interface followed by the type of interface and
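A minimal illustration of shutting down and re-enabling one port, using a placeholder port number:

Dell(conf)# interface tengigabitethernet 0/5
Dell(conf-if-te-0/5)# shutdown
Dell(conf-if-te-0/5)# no shutdown
Dell(conf-if-te-0/5)# end
Dell# show interfaces tengigabitethernet 0/5

The final show command is optional; it simply confirms that the interface has returned to an up state.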
391. t port range 40 Gigabit Ethernet enter fortygigabitethernet slot port slot port range Port channel enter port channel 1 512 port channel range Where port range and port channel range specify a range of ports separated by a dash and or individual ports port channels in any order for example tengigabitethernet 1 1 2 5 9 11 12 port channel 1 3 5 A comma is required to separate each port and port range entry 218 Uplink Failure Detection UFD clear ufd disable interface interface uplink state group group id re enables all UFD disabled downstream interfaces in the group The range is from 1 to 16 Example of Syslog Messages Before and After Entering the clear ufd disable uplink state group Command S50 The following example message shows the Syslog messages that display when you clear the UFD Disabled state from all disabled downstream interfaces in an uplink state group by using the clear ufd disable uplink state group group id command All downstream interfaces return to an operationally up state 00 10 12 SSTKUNITO M CP SIFMGR 5 ASTATE DN Changed interface Admin state to down Te 0 1 00 10 12 SSTKUNITO M CP SIFMGR 5 ASTATE DN Changed interface Admin state to down Te 0 2 00 10 12 SSTKUNITO M CP SIFMGR 5 ASTATE DN Changed interface Admin state to down Te 0 3 00 10 12 SSTKUNITO M CP SIFMGR 5 OSTATE DN Changed interface state to down Te 0 1 00 10 12 SSTKUNITO M CP SIFMGR 5
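For example, using the syntax above, all UFD-disabled downstream interfaces in uplink-state group 1 (a placeholder group number) could be re-enabled at once with:

Dell# clear ufd-disable uplink-state-group 1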
392. t present 5 Member not present Stack Unit in Card Problem State Due to Configuration Mismatch Problem A stack unit enters a Card Problem state because there is a configuration mismatch between the logical provisioning stored for the stack unit number on the master switch and the newly added unit with the same number e Resolution From the master switch reload the stack by entering thereload command in EXEC Privilege mode When the stack comes up the card problem will be resolved Upgrading a Switch Stack To upgrade all switches in a stack with the same Dell Networking OS version follow these steps 1 Copy the new Dell Networking OS image to a network server 2 Download the Dell Networking OS image by accessing an interactive CLI that requests the server IP address and image filename and prompts you to upgrade all member stack units EXEC Privilege mode upgrade system flash ftp scp tftp usbflash partition Specify the system partition on the master switch into which you want to copy the Dell Networking OS image The system then prompts you to upgrade all member units with the new Dell Networking OS version The valid values are a and b 3 Reboot all stack units to load the Dell Networking OS image from the same partition on all switches in the stack CONFIGURATION mode boot system stack unit all primary system partition 4 Save the configuration EXEC Privilege write memory 5 Reload the stack u
393. t surface and cut all straps securing the container Open the container or remove the container top Carefully remove the switch from the container and place it on a secure and clean surface Remove all packing material Aa e U N P Inspect the product and accessories for damage After you insert a Flex IO module into an empty slot you must reload the I O Aggregator for the module If you remove an installed module and insert a different module type an error message displays to remind you that the slot is configured for a different type of Flex IO module You must reload the switch to make the Flex IO module operational 266 FC Flex IO Modules Interconnectivity of FC Flex IO Modules with Cisco MDS Switches In a network topology that contains Cisco MDS switches FC Flex IO modules that are plugged into the MXL and I O Aggregator switches enable interoperation for a robust effective deployment of the NPIV proxy gateway and FCoE FC bridging behavior In an environment that contains FC Flex IO modules and Cisco MDS switches perform the following steps e Insert the FC Flex IO module into any of the optional module slots of the MXL 10 40GBE Switch or the I O Aggregator Switch and reload the switch e When the device is reloaded NPIV mode is automatically enabled e Configure the NPIV related commands on MXL or I O Aggregator After you perform the preceding procedure the following operations take place e A physical link is establi
394. t the interface configuration level If you enter renew dhcp command on an interface already configured with a dynamic IP address the lease time of the dynamically acquired IP address is renewed uo Important To verify the currently configured dynamic IP address on an interface enter the show ip dhcp lease command The show running configuration command output only displays ip address dhcp the currently assigned dynamic IP address is not displayed DHCP Client on a Management Interface These conditions apply when you enable a management interface to operate as a DHCP client The management default route is added with the gateway as the router IP address received in the DHCP ACK packet It is required to send and receive traffic to and from other subnets on the external network The route is added irrespective when the DHCP client and server are in the same or different subnets The management default route is deleted if the management IP address is released like other DHCP client management routes e ip route for 0 0 0 0 takes precedence if it is present or added later e Management routes added by a DHCP client display with Route Source as DHCP in the show ip management route and show ip management route dynamic command output e Management routes added by DHCP are automatically reinstalled if you configure a static IP route with the ip route command that replaces a management route added by the DHCP client If you remove the statical
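A hedged sketch of enabling the DHCP client on the management interface and checking the resulting lease; the ManagementEthernet slot/port numbering is illustrative and may differ on your chassis.

Dell(conf)# interface managementethernet 0/0
Dell(conf-if-ma-0/0)# ip address dhcp
Dell(conf-if-ma-0/0)# end
Dell# show ip dhcp lease

If a dynamic address is already held, entering renew dhcp at the same interface configuration level renews its lease, and, as noted above, show ip dhcp lease (not show running-configuration) is what displays the currently assigned address.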
395. t this step to FIBRE_CHANNEL apply an FCoE map to more than one FC port for example Dell interface fi 0 0Dell config if fc 0 0 fabric SAN FABRIC A 3 Enable the port for FC transmission no shutdown INTERFACE FIBRE_CHANNEL Tip You can apply a DCB or FCoE map to a range of Ethernet or Fibre Channel interfaces by using theinterface range command for example 278 FC Flex IO Modules Dell config interface range tengigabitEthernet 1 12 23 tengigabitEthernet 2 24 35 Dell config interface range fibrechannel 0 0 3 fibrechannel 0 8 11 Enter the keywords interface range followed by an interface type and port range A port range must contain spaces before and after the dash Separate each interface type and port range with a space comma and space as shown in the preceding examples Sample Configuration 1 Configure a DCB map with PFC and ETS settings Dell config dcb map SAN DCB MAP Dell config dcbx name priority group 0 bandwidth 60 pfc off Dell config dcbx name priority group 1 bandwidth 20 pfc on Dell config dcbx name priority group 2 bandwidth 20 pfc on Dell config dcbx name f priority group 4 strict priority pfc off Dell conf dcbx name priority pgid 00012444 2 Apply the DCB map on a downstream server facing Ethernet port Dell config interface tengigabitethernet 1 0 Dell config if te 0 0 dcb map SAN DCB MAP 3 Create the dedicated VLAN to be used for FCo
396. t to mini jumbo 2500 bytes when a port is in Switchport mode the FIP snooping feature is enabled on the switch and the FIP snooping is enabled on all or individual VLANs e Link aggregation group LAG FIP snooping is supported on port channels on ports on which PFC mode is on PFC is operationally up FIP Snooping Prerequisites On an Aggregator FIP snooping requires the following conditions e A FIP snooping bridge requires DCBX and PFC to be enabled on the switch for lossless Ethernet connections refer to Data Center Bridging DCB Dell recommends that you also enable ETS ETS is recommended but not required DCBX and PFC mode are auto configured on Aggregator ports and FIP snooping is operational on the port If the PFC parameters in a DCBX exchange with a peer are not synchronized FIP and FCoE frames are dropped on the port VLAN membership The Aggregator auto configures the VLANs which handle FCoE traffic You can reconfigure VLAN membership on a port vlan tagged command Each FIP snooping port is auto configured to operate in Hybrid mode so that it accepts both tagged and untagged VLAN frames Tagged VLAN membership is auto configured on each FIP snooping port that sends and receives FCoE traffic and has links with an FCF ENode server or another FIP snooping bridge The default VLAN membership of the port should continue to operate with untagged frames FIP snooping is not supported on a port that is co
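For platforms and modes where FIP snooping is configured manually rather than auto-configured, a heavily hedged outline might look like the following; the VLAN ID and FC-MAP value are placeholders taken from the show fip-snooping config output shown elsewhere in this chapter, and the exact keywords should be checked against your release.

Dell(conf)# feature fip-snooping
Dell(conf)# interface vlan 100
Dell(conf-if-vl-100)# fip-snooping enable
Dell(conf-if-vl-100)# fip-snooping fc-map 0x0EFC00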
397. table use the following command Display the contents of the MAC address table EXEC Privilege mode K NOTE This command is available only in PMUX mode show mac address table address aging time vlan vlan id count dynamic interface static vlan address displays the specified entry aging time displays the configured aging time count displays the number of dynamic and static entries for all VLANs and the total number of entries dynamic displays only dynamic entries interface displays only entries for the specified interface static displays only static entries vlan displays only entries for the specified VLAN Network Interface Controller NIC Teaming NIC teaming is a feature that allows multiple network interface cards in a server to be represented by one MAC address and one IP address in order to provide transparent redundancy balancing and to fully utilize network adapter resources Support for NIC teaming is auto configured on the Aggregator including support for e MAC Address Station Move MAC Move Optimization The below fig shows a topology where two NICs have been teamed together In this case if the primary NIC fails traffic switches to the secondary NIC because they are represented by the same set of addresses 138 Layer 2 Redundant links create a switching loop Without STP broadcast storms occurs EB D m COR d Use backup interfa
398. tack After you remove the unit you can configure VLT on the unit VLT and IGMP Snooping When configuring IGMP Snooping with VLT ensure the configurations on both sides of the VLT trunk are identical to get the same behavior on both sides of the trunk When you configure IGMP snooping on a VLT node the dynamically learned groups and multicast router ports are automatically learned on the VLT peer node VLT Port Delayed Restoration When a VLT node boots up if the VLT ports have been previously saved in the start up configuration they are not immediately enabled To ensure MAC and ARP entries from the VLT per node are downloaded to the newly enabled VLT node the system allows time for the VLT ports on the new node to be enabled and begin receiving traffic The delay restore feature waits for all saved configurations to be applied then starts a configurable timer After the timer expires the VLT ports are enabled one by one in a controlled manner The delay between bringing up each VLT port channel is proportional to the number of physical members in the port channel The default is 90 seconds If you enable IGMP snooping IGMP queries are also sent out on the VLT ports at this time allowing any receivers to respond to the queries and update the multicast table on the new node This delay in bringing up the VLT ports also applies when the VLTi link recovers from a failure that caused the VLT ports on the secondary VLT peer node to b
399. te Group Status S50 Example of Viewing Interface Status with UFD Information S50 Examples of Viewing UFD Output zU GG Dell show uplink state group plink State Group 1 Status Enabled Up plink State Group 3 Status Enabled Up plink State Group 5 Status Enabled Down plink State Group 6 Status Enabled Up plink State Group 7 Status Enabled Up plink State Group 16 Status Disabled Up Dell show uplink state group 16 plink State Group 16 Status Disabled Up ell show uplink state group detail Up Interface up Dwn Interface down Dis Interface disabled plink State Group 1 Status Enabled Up pstream Interfaces 220 Uplink Failure Detection UFD Downstream Interfaces Te 13 13 Dis Te 13 14 Dis Te 13 15 Dis De11 show interfaces gigabitethernet 7 45 TenGigabitEthernet 7 45 is up Hardware is Forcel0Eth Current address is 00 01 e8 32 7a 47 Interface index is 280544512 Internet address is not set TU 1554 bytes IP MTU 1500 bytes LineSpeed 1000 Mbit Mode auto Flowcontrol rx off tx off runts 0 giants 0 throttles Tengig 0 3 Dwn Uplink State Group 3 Status Enabled Up Upstream Interfaces Tengig 0 46 Up Downstream Interfaces Te 13 0 Up Te 13 1 Up 13 6 Up Uplink State Group 5 Status Enabled Down Upstream Interfaces Tengig 0 0 Dwn Downstream Interfaces Te 13 2 Dis Te 13 4
400. tem ID Priority 32768 Address 0001 e88b 253d Actor Admin Key 128 Oper Key 128 Partner Oper Key 128 VLT Peer Oper Key 128 ACP LAG 128 is an aggregatable link ACP LAG 128 is a normal LAG Active LACP B Passive LACP C Short Timeout D Long Timeout Aggregatable Link F Individual Link G IN SYNC H OUT OF SYNC Collection enabled J Collection disabled K Distribution enabled Distribution disabled M Partner Defaulted N Partner Non defaulted He D E Link Aggregation 155 O Receiver is in expired state P Receiver is not in expired state Port Te 0 41 is enabled LACP is enabled and mode is lacp Port State Bundle Actor Admin State ADEHJLMP Key 128 Priority 32768 Oper State ADEGIKNP Key 128 Priority 32768 Partner Admin State BDFHJLMP Key 0 Priority 0 Oper State ACEGIKNP Key 128 Priority 32768 Port Te 0 42 is enabled LACP is enabled and mode is lacp Port State Bundle Actor Admin State ADEHJLMP Key 128 Priority 32768 Oper State ADEGIKNP Key 128 Priority 32768 Partner Admin State BDFHJLMP Key 0 Priority 0 Oper State ACEGIKNP Key 128 Priority 32768 Port Te 0 43 is enabled LACP is enabled and mode is lacp Port State Bundle Actor Admin State ADEHJLMP Key 128 Priority 32768 Oper State ADEGIKNP Key 128 Priority 32768 Partner Admin State BDFHJLMP Key 0 Priority 0 Oper State ACEGIKNP Key 128 Priority 32768 Port Te 0 44 is enabled LACP is enabl
401. ...terface port-channel 11
interface Port-channel 11
 no ip address
 switchport
 channel-member fortyGigE 1/18,22
 no shutdown
Troubleshooting VLT
To help troubleshoot different VLT issues that may occur, use the following information.
NOTE: For information on VLT Failure mode timing and its impact, contact your Dell Networking representative.
Table 16. Troubleshooting VLT
• Bandwidth monitoring. Behavior at Peer Up: a syslog error message and an SNMP trap are generated when the VLTi bandwidth usage goes above the 80% threshold and when it drops below 80%. Behavior During Run Time: a syslog error message and an SNMP trap are generated when the VLTi bandwidth usage goes above its threshold. Action to Take: depending on the traffic that is received, the traffic can be offloaded on the VLTi.
• Domain ID mismatch. Behavior at Peer Up: the VLT peer does not boot up; the VLTi is forced to a down state; a syslog error message and an SNMP trap are generated. Behavior During Run Time: the VLT peer does not boot up; the VLTi is forced to a down state; a syslog error message and an SNMP trap are generated. Action to Take: verify the domain ID matches on both VLT peers.
• Dell Networking OS version mismatch. Behavior at Peer Up: a syslog error message is generated. Behavior During Run Time: a syslog error message is generated. Action to Take: follow the correct upgrade procedure for the unit with the mismatched Dell Networking OS version.
402. ternet group management protocol IGMP snooping require state information coordinating between the two VLT chassis IGMP and VLT configurations must be identical on both sides of the trunk to ensure the same behavior on both sides VLT Terminology The following are key VLT terms Virtual link trunk VLT The combined port channel between an attached device and the VLT peer switches e VLT backup link The backup link monitors the vitality of VLT peer switches The backup link sends configurable periodic keep alive messages between the VLT peer switches e VLT interconnect VLTi The link used to synchronize states between the VLT peer switches Both ends must be on 10G or 40G interfaces e VLT domain This domain includes both the VLT peer devices VLT interconnect and all of the port channels in the VLT connected to the attached devices It is also associated to the configuration mode that you must use to assign VLT global parameters e VLT peer device One of a pair of devices that are connected with the special port channel known as the VLT interconnect VLTi VLT peer switches have independent management planes A VLT interconnect between the VLT chassis maintains synchronization of L2 L3 control planes across the two VLT peer switches The VLT interconnect uses either 10G or 40G user ports on the chassis A separate backup link maintains heartbeat messages across an out of band OOB management network The backu
403. ...ters from the cursor to the end of the command line.
CNTL L: Re-enters the previous command.
CNTL N: Returns to more recent commands in the history buffer after recalling commands with CTRL P or the UP arrow key.
CNTL P: Recalls commands, beginning with the last command.
CNTL R: Re-enters the previous command.
CNTL U: Deletes the line.
CNTL W: Deletes the previous word.
CNTL X: Deletes the line.
CNTL Z: Ends continuous scrolling of command outputs.
Esc B: Moves the cursor back one word.
Esc F: Moves the cursor forward one word.
Esc D: Deletes all characters from the cursor to the end of the word.
Command History
Dell Networking OS maintains a history of previously entered commands for each mode. For example:
• When you are in EXEC mode, the UP and DOWN arrow keys display the previously entered EXEC mode commands.
• When you are in CONFIGURATION mode, the UP or DOWN arrow keys recall the previously entered CONFIGURATION mode commands.
Filtering show Command Outputs
Filter the output of a show command to display specific information by adding | [except | find | grep | no-more | save] specified_text after the command. The variable specified_text is the text for which you are filtering, and it is case sensitive unless you use the ignore-case sub-option. Starting with Dell Networking OS version 7.8.1.0, the grep command accepts an ignore-case sub-option that forces the sear...
404. the chassis management controller CMC interface and prevent the usage of the CLI interface of the device to configure and monitor settings You can configure the restrict access session command to disable access of the I O Aggregator using a Telnet or SSH session the device is accessible only using the CMC GUI You can use the no version of this command to reactivate the Telnet or SSH session capability for the device Use the show restrict access command to view whether the access to a device using Telnet or SSH is disabled or not 156 Security for M I O Aggregator AAA Authentication Dell Networking OS supports a distributed client server system implemented through authentication authorization and accounting AAA to help secure networks against unauthorized access In the Dell Networking implementation the Dell Networking system acts as a RADIUS or TACACS client and sends authentication requests to a central remote authentication dial in service RADIUS or Terminal access controller access control system plus TACACS server that contains all user authentication and network service access information Dell Networking uses local usernames passwords stored on the Dell Networking system or AAA for login authentication With AAA you can specify the security protocol or mechanism for different login methods and different users In Dell Networking OS AAA uses a list of authentication methods called method lists to define the types of au
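For example, the command described above might be applied and then verified as follows; the no form restores Telnet/SSH access to the device.

Dell(conf)# restrict-access session
Dell(conf)# do show restrict-access
Dell(conf)# no restrict-access session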
405. the same DCB input policy with the same pause time and dot1p priorities on all PFC enabled peer interfaces To configure PFC and apply a PFC input policy to an interface follow these steps 1 Create a DCB input policy to apply pause or flow control for specified priorities using a configured delay time CONFIGURATION mode dcb input policy name The maximum is 32 alphanumeric characters 2 Configure the link delay used to pause specified priority traffic DCB INPUT POLICY mode pfc link delay value One quantum is equal to a 512 bit transmission Data Center Bridging DCB 29 9 The range in quanta is from 712 to 65535 The default is 45556 quantum in link delay Configure the CoS traffic to be stopped for the specified delay DCB INPUT POLICY mode p c priority priority range Enter the 802 1p values of the frames to be paused The range is from 0 to 7 The default is none Maximum number of loss less queues supported on the switch 2 Separate priority values with a comma Specify a priority range with a dash for example pfc priority 1 3 5 7 Enable the PFC configuration on the port so that the priorities are included in DCBx negotiation with peer PFC devices DCB INPUT POLICY mode pfc mode on The default is PFC mode is on Optional Enter a text description of the input policy DCB INPUT POLICY mode description text The maximum is 32 characters Exit DCB input policy configuration mode DCB
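Tying the steps above together, a minimal sketch of one PFC input policy; the policy name, priority values, and mode prompts are illustrative only, and the same policy with the same pause time and dot1p priorities should be applied on all PFC-enabled peer interfaces, as noted above.

Dell(conf)# dcb-input pfc-fcoe
Dell(conf-dcb-in)# pfc link-delay 45556
Dell(conf-dcb-in)# pfc priority 3,4
Dell(conf-dcb-in)# pfc mode on
Dell(conf-dcb-in)# description lossless classes for converged traffic
Dell(conf-dcb-in)# exit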
406. the same converged link by e Allocating a guaranteed share of bandwidth to each priority group e Allowing each group to exceed its minimum guaranteed bandwidth if another group is not fully using its allotted bandwidth To configure ETS and apply an ETS output policy to an interface you must 1 Create a Quality of Service QoS output policy with ETS scheduling and bandwidth allocation settings 2 Create a priority group of 802 1p traffic classes Configure a DCB output policy in which you associate a priority group with a QoS ETS output policy 4 Apply the DCB output policy to an interface Configuring DCB Maps and its Attributes This topic contains the following sections that describe how to configure a DCB map apply the configured DCB map to a port configure PFC without a DCB map and configure lossless queues DCB Map Configuration Procedure A DCB map consists of PFC and ETS parameters By default PFC is not enabled on any 802 1p priority and ETS allocates equal bandwidth to each priority To configure user defined PFC and ETS settings you must create a DCB map Data Center Bridging DCB 33 Step Task Enter global configuration mode to create a DCB map or edit PFC and ETS settings Configure the PFC setting on or off and the ETS bandwidth percentage allocated to traffic in each priority group or whether the priority group traffic should be handled with strict priority scheduling You can enable PFC on a maxi
407. thentication and the sequence in which they are applied You can define a method list or use the default method list User defined method lists take precedence over the default method list K NOTE If a console user logs in with RADIUS authentication the privilege level is applied from the RADIUS server if the privilege level is configured for that user in RADIUS whether you configure RADIUS authorization K NOTE In the release 9 4 0 0 RADIUS and TACACS servers support VRF awareness functionality You can create RADIUS and TACACS groups and then map multiple servers to a group The group to which you map multiple servers is bound to a single VRF Configuration Task List for AAA Authentication The following sections provide the configuration tasks e Configure Login Authentication for Terminal Lines e Configuring AAA Authentication Login Methods e Enabling AAA Authentication For a complete list of all commands related to login authentication refer to the Security chapter in the Dell Networking OS Command Reference Guide Configure Login Authentication for Terminal Lines You can assign up to five authentication methods to a method list Dell Networking OS evaluates the methods in the order in which you enter them in each list If the first method list does not respond or returns an error Dell Networking OS applies the next method list until the user either passes or fails the authentication If the user fails a method
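As an illustration of a user-defined method list and its application to terminal lines, a hedged sketch with a placeholder list name; RADIUS server hosts must already be reachable for the first method to succeed.

Dell(conf)# aaa authentication login REMOTE-AUTH radius local
Dell(conf)# line vty 0 9
Dell(config-line-vty)# login authentication REMOTE-AUTH
Dell(config-line-vty)# exit

Because local is listed after radius, users can still authenticate against the local user database if no RADIUS server responds.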
408. ther standalone or stacking modes to save the LAG port channel configuration parameters This behavior enables the port channels to be brought up because the configured interface attributes are available in the system database during the booting of the device With the reduction in time for the port channels to become active after the switch is booted the loss in the number of packets that are serviced by these interfaces is minimized Enabling the Verification of Member Links Utilization in a LAG Bundle To examine the working efficiency of the LAG bundle interfaces perform the following steps 1 The functionality to detect the working efficiency of the LAG bundle interfaces is automatically activated on all the port channels except the port channel that is configured as a VLT interconnect link during the booting of the switch 2 Usetheshow link bundle distribution port channel interface number command to display the traffic handling and utilization of the member interfaces of the port channel The following sample output is displayed when you enter this show command EXEC Dell show link bundle distribution port channel Dell show link bundle distribution port channel 1 Link bundle trigger threshold 60 LAG bundle 1 Utilization In Percent O Alarm State Inactive Interface Line Protocol Utilization In Percent Te 0 5 Up 0 Te 0 13 Up 0 Link Aggregation 131 Monitoring the Member Links of a LAG Bundle You can e
409. timer value or remove it by using the no defer timer command 1 View the Uplink status group EXEC Privilege mode show uplink state group Dell show uplink state group Uplink State Group 1 Status 2 Enable the uplink group tracking UPLINK STATE GROUP mode Enabled Down enable Dell conf uplink state group 1 Dell conf uplink state group 1 tenable To disable the uplink group tracking use the no enable command 3 Change the default timer UPLINK STATE GROUP mode defer timer seconds Dell conf uplink state group 1 Dell conf uplink state group 1 defer timer 20 Dell conf uplink state group 1 show config uplink state group 1 downstream TenGigabitEthernet 0 1 32 upstream Port channel 128 defer timer 20 Uplink Failure Detection UFD 223 21 PMUX Mode of the lO Aggregator This chapter describes the various configurations applicable in PMUX mode Introduction This document provides configuration instructions and examples for the Programmable MUX PMUX mode for the Dell Networking M I O Aggregator using Dell Networking OS version 9 3 0 0 This document includes the following Programmable MUX features e Programmable MUX PMUX l Multiple uplink link aggregation group LAGs 10G member ports 40G member ports Uplink failure detection UFD Virtual local area network VLAN configurations on a physical port port channel Virtual link trunking VLT Stacking
410. tion after the offline command is issued Proceed with Offline confirm yes no Dell offline stack unit 0 Warning offline of unit will bring down all the protocols and Debugging and Diagnostics 291 the unit will be operationally down except for running Diagnostics Please make sure that stacking fanout not configured for Diagnostics execution Also reboot online command is necessary for normal operation after the offline command is issued Proceed with Offline confirm yes no yes Dell 2 Confirm the offline status EXEC Privilege mode show system brief Dell show system brief Stack MAC 00 1e c9 de 03 7b Stack Info Unit UnitType Status ReqTyp CurTyp Version Ports 0 Member not present 1 Management online I O Aggregator I O Aggregator 8 3 17 38 56 2 Member not present 3 Member not present 4 Member not present 9 Member not present Dell Trace Logs In addition to the syslog buffer the Dell Networking OS buffers trace messages which are continuously written by various software tasks to report hardware and software events and status information Each trace message provides the date time and name of the Dell Networking OS process All messages are stored in a ring buffer You can save the messages to a file either manually or automatically after failover Auto Save on Crash or Rollover Exception information for MASTER or standby units is stored in the flash TRACE_LOG_DIR directory This dire
411. to upstream and auto downstream ports use the internally propagated PFC priorities to match against the received application priority Otherwise these ports use their locally configured PFC priorities in application priority TLVs If no configuration source is configured auto upstream and auto downstream ports check to see that the locally configured PFC priorities match the priorities in a received application priority TLV e On manual ports an application priority TLV is advertised only if the priorities in the TLV match the PFC priorities configured on the port DCB Configuration Exchange On an Aggregator the DCBX protocol supports the exchange and propagation of configuration information for the following DCB features e Enhanced transmission selection ETS e Priority based flow control PFC DCBx uses the following methods to exchange DCB configuration parameters Asymmetric DCB parameters are exchanged between a DCBx enabled port and a peer port without requiring that a peer port and the local port use the same configured values for the configurations to be compatible For example ETS uses an asymmetric exchange of parameters between DCBx peers Symmetric DCB parameters are exchanged between a DCBx enabled port and a peer port but requires that each configured parameter value be the same for the configurations in order to be compatible For example PFC uses an symmetric exchange of parameters between DCBx peers Config
412. tput Reco TLV pktsError Reco TLV pkts Description Operational status of ETS configuration on local port match or mismatch Type of state machine used for DCBx exchanges of ETS parameters e Feature for legacy DCBx versions e Asymmetric for an IEEE version Status of ETS Configuration TLV advertisements enabled or disabled Status of ETS Recommendation TLV advertisements enabled or disabled Number of ETS Configuration TLVs received and transmitted and number of ETS Error Configuration TLVs received Number of ETS Recommendation TLVs received and transmitted and number of ETS Error Recommendation TLVs received Example of the show stack unit all stack ports all pfc details Command Dell stack unit 0 stack port all Admin mode is On Admin is enabled Priority list is Local is enabled Priority list is Link Delay 45556 pause quantum O Pause Tx pkts 0 Pause Rx pkts stack unit 1 stack port all Admin mode is On Admin is enabled Priority list is Local is enabled Priority list is Link Delay 45556 pause quantum O Pause Tx pkts 0 Pause Rx pkts show stack unit all stack ports all pfc details Example of the show stack unit all stack ports all ets details Command Dell show stack unit all stack ports all ets details Stack unit 0 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters Admin is enabled TC grp Priority Bandwidth 0 0 1 2
413. tree towards the root until a value of zero indicating no further containment is found Example of Sample Entity MIBS outputs Dell show inventory optional module Simple Network Management Protocol SNMP 183 Unit Slot Expected Inserted Next Boot Status Power On Off 1 0 SFP SFP AUTO Good On 1 1 OSFP QSFP AUTO Good On x Mismatch Dell The status of the MIBS is as follows snmpwalk c public v 2c 10 16 130 148 1 3 6 1 2 1 47 1 1 1 1 2 SNMPv2 SMI mib 2 47 1 1 1 1 2 1 SNMPv2 SMI mib 2 47 1 1 2 2 STRING PowerConnect I O Aggregator SNMPv2 SMI mib 2 47 1 2 3 STRING Module 0 SNMPv2 SMI mib 2 47 1 2 4 STRING Unit 0 Port 1 10G Level SNMPv2 SMI mib 2 47 1 2 5 STRING Unit 0 Port 2 10G Level SNMPv2 SMI mib 2 47 1 2 6 STRING Unit 0 Port 3 10G Level SNMPv2 SMI mib 2 47 1 2 7 STRING Unit 0 Port 4 10G Level SNMPv2 SMI mib 2 47 1 2 8 STRING Unit 0 Port 5 10G Level SNMPv2 SMI mib 2 47 1 2 9 STRING Unit 0 Port 6 10G Level SNMPv2 SMI mib 2 47 L 2 10 STRING Unit 0 Port 7 10G Level SNMPv2 SMI mib 2 47 L 2 11 STRING Unit 0 Port 8 10G Level SNMPv2 SMI mib 2 47 L 2 12 STRING Unit 0 Port 9 10G Level SNMPv2 SMI mib 2 47 1 2 13 STRING Unit 0 Port 10 10G Level SNMPv2 SMI mib 2 47 L 2 14 STRING Unit 0 Port 11 10G Leve
414. ts to use the backup link to Router1 Figure 27 Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces The association of uplink and downlink interfaces is called an uplink state group An interface in an uplink state group can be a physical interface or a port channel LAG aggregation of physical interfaces An enabled uplink state group tracks the state of all assigned upstream interfaces Failure on an upstream interface results in the automatic disabling of downstream interfaces in the uplink state group As a result downstream devices can execute the protection or recovery procedures they have in place to establish alternate connectivity paths as shown in the following illustration 214 Uplink Failure Detection UFD Core Network Layer 3 Network When an upstream port channel link goes down UFD brings down a downstream link in the same uplink state group Server traffic is diverted over a backup link to upstream devices B Uplink State Group i i Backup Links Uplink State Group C Primary Links Figure 28 Uplink Failure Detection Example If only one of the upstream interfaces in an uplink state group goes down a specified number of downstream ports associated with the upstream interface are put into a Link Down state You can configure this number and is calculated by the ratio of the upstream port bandwidth to the
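A hedged sketch of how such an uplink-state group might be defined, using the same upstream LAG and downstream port style that appears in the uplink-state-group configuration output elsewhere in this guide; the group number, port ranges, and mode prompts are placeholders.

Dell(conf)# uplink-state-group 1
Dell(conf-uplink-state-group-1)# upstream port-channel 128
Dell(conf-uplink-state-group-1)# downstream tengigabitethernet 0/1-8
Dell(conf-uplink-state-group-1)# enable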
415. tware is supported To use the SSH client use the following command Open an SSH connection and specify the host name username port number and version of the SSH client EXEC Privilege mode ssh hostname l username p port number v 1 2 hostname is the IP address or host name of the remote device Enter an IPv4 or IPv6 address in dotted decimal format A B C D Configure the Dell Networking system as an SCP SSH server CONFIGURATION mode ip ssh server enable port port number Configure the Dell Networking system as an SSH server that uses only version 1 or 2 CONFIGURATION mode ip ssh server version 1 2 Display SSH connection information EXEC Privilege mode show ip ssh Specifying an SSH Version The following example uses the ip ssh server version 2 command to enable SSH version 2 and the show ip ssh command to confirm the setting 168 Security for M I O Aggregator Dell conf tip ssh server version 2 Dell conf tdo show ip ssh SSH server disabled SSH server version SZ Password Authentication enabled Hostbased Authentication disabled RSA Authentication disabled To disable SSH server functions use the no ip ssh server enable command Using SCP with SSH to Copy a Software Image To use secure copy SCP to copy a software image through an SSH connection from one switch to another use the following commands 1 On Chassis One set the SSH port number port 22 by defaul
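For example, following the syntax above, an SSHv2 session to a placeholder address and a check of the local SSH server settings might look like this:

Dell# ssh 10.11.1.10 -l admin -p 22 -v 2
Dell# show ip ssh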
416. tworking OS assigns an interface number to each configured or unconfigured physical and logical interface Display the interface index number using the show interfacecommand from EXEC Privilege mode as shown in the following example The interface index is a binary number with bits that indicate the slot number port number interface type and card type of the interface The Dell Networking OS converts this binary index number to decimal and displays it in the output of the show interface command Starting from the least significant bit LSB e the first 14 bits represent the card type e the next 4 bits represent the interface type the next 7 bits represent the port number the next 5 bits represent the slot number e the next 1 bit is O for a physical interface and 1 for a logical interface e the next 1 bit is unused For example the index 44634369 is 10101010010001000100000001 in binary The binary interface index for TenGigabitEthernet 0 41 of an Aggregator Notice that the physical logical bit and the final unused bit Simple Network Management Protocol SNMP 181 are not given The interface is physical so this must be represented by a O bit and the unused bit is always 0 These two bits are not given because they are the most significant bits and leading zeros are often omitted For interface indexing slot and port numbering begins with binary one If the Dell Networking system begins slot and port numbering from 0 bi
417. uffer Buffer Packets Kilobytes 0 3 00 256 1 3 00 256 2 3 00 256 3 3 00 256 4 3 00 256 5 3 00 256 6 3 00 256 7 3 00 256 Example of Viewing the Buffer Profile Linecard De114show buffer profile detail fp uplink stack unit O port set 0 Linecard 0 Port set 0 Buffer profile fsqueue hig Dynamic Buffer 1256 00 Kilobytes Queue Dedicated Buffer Buffer Packets Kilobytes 0 3 00 256 i 3 00 256 2 3 00 256 3 3 00 256 4 3 00 256 5 3 00 256 6 3300 256 7 3 00 256 Debugging and Diagnostics 301 Using a Pre Defined Buffer Profile The Dell Networking OS provides two pre defined buffer profiles one for single queue for example non quality of service QoS applications and one for four queue for example QoS applications You must reload the system for the global buffer profile to take effect a message similar to the following displays Info For the global pre defined buffer profile to tak ffect pleas save the config and reload the system Dell Networking OS Behavior After you configure buffer profile global 1Q the message displays during every bootup Only one reboot is required for the configuration to take effect afterward you may ignore this bootup message Dell Networking OS Behavior The buffer profile does not returned to the default 4Q If you configure 10 save the running config to the startup config and then delete the startup config and reload the chassis The only way to return to the def
418. …ult global values are applied.

Example of Connecting with a TACACS Server Host
To specify multiple TACACS server hosts, configure the tacacs-server host command multiple times. If you configure multiple TACACS server hosts, Dell Networking OS attempts to connect with them in the order in which they were configured. To view the TACACS configuration, use the show running-config tacacs command in EXEC Privilege mode. To delete a TACACS server host, use the no tacacs-server host {hostname | ip-address} command. (A configuration sketch follows this excerpt.)

freebsd2# telnet 2200:2200:2200:2200:2200::2202
Trying 2200:2200:2200:2200:2200::2202...
Connected to 2200:2200:2200:2200:2200::2202.
Escape character is '^]'.
Login: admin
Password:
Dell>
Dell>

Enabling SCP and SSH
Secure shell (SSH) is a protocol for secure remote login and other secure network services over an insecure network. Dell Networking OS is compatible with SSH versions 1.5 and 2 in both the client and server modes. SSH sessions are encrypted and use authentication. SSH is enabled by default. For details about the command syntax, refer to the Security chapter in the Dell Networking OS Command Line Interface Reference Guide. Dell Networking OS supports SCP, which is a remote file copy program that works with SSH.
NOTE: The Windows-based WinSCP client software is not supported for secure copying between a PC and a Dell Networking OS based system. Unix-based SCP client sof…
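A minimal configuration sketch of the TACACS host commands mentioned above; the server addresses are hypothetical, and any shared key or AAA method configuration your deployment needs is omitted:

Dell(conf)#tacacs-server host 10.10.10.10
Dell(conf)#tacacs-server host 10.10.10.11
Dell(conf)#do show running-config tacacs
Dell(conf)#no tacacs-server host 10.10.10.11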
419. …ult values on the interfaces on which the DCB map is applied. By default, PFC is not applied on specific 802.1p priorities, and ETS assigns equal bandwidth to each 802.1p priority.
• To change the ETS bandwidth allocation configured for a priority group in a DCB map, do not modify the existing DCB map configuration. Instead, first create a new DCB map with the desired PFC and ETS settings and apply the new map to the interfaces to override the previous DCB map settings. Then delete the original dot1p priority-to-priority-group mapping.

Applying a DCB Map on Server-Facing Ethernet Ports
You can apply a DCB map only on a physical Ethernet interface, and you can apply only one DCB map per interface. (A configuration sketch follows this excerpt.)

Task: Enter Interface Configuration mode on a server-facing port to apply a DCB map.
Command: interface {tengigabitEthernet slot/port | fortygigabitEthernet slot/port}
Command Mode: CONFIGURATION

Task: Apply the DCB map on an Ethernet port. Repeat this step to apply a DCB map to more than one port.
Command: dcb-map name
Command Mode: INTERFACE

Creating an FCoE VLAN
Create a dedicated VLAN to send and receive FC traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards de-capsulated FC frames over FC links to SAN switches in a specified fabric. When you apply an FCoE map to an Ethernet port, the port is automatically configured as a tagged member of the FCoE VLAN.

Task / Command / Command Mod…
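A minimal sketch of this workflow. The DCB map name (SAN_DCB_MAP), interface, VLAN ID, bandwidth percentages, and priority-to-group values are hypothetical; the dcb-map creation commands at the top are assumptions drawn from the PMUX DCB configuration described elsewhere in this guide, and only the interface-level dcb-map and interface vlan steps come directly from the tasks above. Prompts are illustrative.

Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-pgid 0 0 0 1 0 0 0 0
Dell(conf-dcbmap-SAN_DCB_MAP)#exit
Dell(conf)#interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)#dcb-map SAN_DCB_MAP
Dell(conf-if-te-0/4)#exit
Dell(conf)#interface vlan 1002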
420. …unication between VLTi links fails, VLT checks the backup link to determine the cause of the failure. If the failed peer can still transmit heartbeat messages, the Secondary Peer disables all VLT member ports and any Layer 3 interfaces attached to the VLAN associated with the VLT domain. If heartbeat messages are not received, the Secondary Peer forwards traffic and assumes the role of the Primary Peer. If the original Primary Peer is restored, the VLT peer reassigned as the Primary Peer retains this role, and the other peer must be reassigned as a Secondary Peer. Peer role changes are reported as SNMP traps.

VLT Bandwidth Monitoring
When bandwidth usage of the VLTi (ICL) exceeds 80%, a syslog error message (shown in the following message) and an SNMP trap are generated.
%STKUNIT0-M:CP %VLTMGR-6-VLT_LAG_ICL: Overall Bandwidth utilization of VLT ICL LAG (port-channel 25) crosses threshold. Bandwidth usage (80%)
When the bandwidth usage drops below the 80% threshold, the system generates another syslog message (shown in the following message) and an SNMP trap.
%STKUNIT0-M:CP %VLTMGR-6-VLT_LAG_ICL: Overall Bandwidth utilization of VLT ICL LAG (port-channel 25) reaches below threshold. Bandwidth usage (74%)
VLT show remote port-channel status

VLT and Stacking
You cannot enable stacking with VLT. If you enable stacking on a unit on which you want to enable VLT, you must first remove the unit from the existing s…
421. …uplink LAG (128) bundle that must be up for the LAG bundle to be brought up. The default minimum number of member links that must be active for the uplink LAG to be active is 1. Enter the minimum-links number command in the Port-Channel Interface 128 Configuration mode to specify this value.

Dell(conf)#interface port-channel 128
Dell(conf-if-po-128)#minimum-links 4

Use the show interfaces port-channel command to view information regarding the configured LAG or port-channel settings. The "Minimum number of links to bring Port-channel up is" field in the output of this command displays the configured number of active links for the LAG to be enabled.

Dell#show interfaces port-channel 128
Port-channel 128 is up, line protocol is down (minimum links not up)
Created by LACP protocol
Hardware address is 00:01:02:03:04:05, Current address is 00:01:02:03:04:05
Interface index is 1107492992
Minimum number of links to bring Port-channel up is 4
Internet address is not set
Mode of IPv4 Address Assignment : NONE
DHCP Client-ID :000102030405
MTU 12000 bytes, IP MTU 11982 bytes
LineSpeed auto
Members in this channel:
ARP type: ARPA, ARP Timeout 04:00:00
Last clearing of "show interface" counters 05:22:24
Queueing strategy: fifo
Input Statistics:
     0 packets, 0 bytes
     0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts
     0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts
     0 Multicasts, 0 Broadcasts
     0 runts, 0 giants, 0 throttles
     0 CRC, 0 over…
422. …uration Source Election
When an auto-upstream or auto-downstream port receives a DCB configuration from a peer, the port first checks to see if there is an active configuration source on the switch.
• If a configuration source already exists, the received peer configuration is checked against the local port configuration. If the received configuration is compatible, DCBx marks the port as DCBx-enabled. If the configuration received from the peer is not compatible, a warning message is logged and the DCBx frame error counter is incremented. Although DCBx is operationally disabled, the port keeps the peer link up and continues to exchange DCBx packets. If a compatible peer configuration is later received, DCBx is enabled on the port.
• If there is no configuration source, a port may elect itself as the configuration source. A port may become the configuration source if the following conditions exist:
  - No other port is the configuration source.
  - The port role is auto-upstream.
  - The port is enabled with link up and DCBx enabled.
  - The port has performed a DCBx exchange with a DCBx peer.
  - The switch is capable of supporting the received DCB configuration values through either a symmetric or asymmetric parameter exchange.
A newly elected configuration source propagates configuration changes received from a peer to the other auto-configuration ports. Ports receiving auto-configuration information f…
423. …ust be in Default mode, not Switchport mode, for VLTi to recognize it.
• The system automatically includes the required VLANs in VLTi. You do not need to manually select VLANs.
• VLT peer switches operate as separate chassis with independent control and data planes for devices attached to non-VLT ports.
• Port-channel link aggregation (LAG) across the ports in the VLT interconnect is required; individual ports are not supported. Dell Networking strongly recommends configuring a static LAG for VLTi.
• The VLT interconnect synchronizes L2 and L3 control-plane information across the two chassis.
• The VLT interconnect is used for data traffic only when there is a link failure that requires using VLTi in order for data packets to reach their final destination.
• Unknown, multicast, and broadcast traffic can be flooded across the VLT interconnect.
• MAC addresses for VLANs configured across VLT peer chassis are synchronized over the VLT interconnect on an egress port, such as a VLT LAG. MAC addresses are the same on both VLT peer nodes.
• ARP entries configured across the VLTi are the same on both VLT peer nodes.
• If you shut down the port channel used in the VLT interconnect on a peer switch in a VLT domain in which you did not configure a backup link, the switch's role displays in the show vlt brief command output as Primary instead of Standalone.
• When you change the default VLAN ID on a VLT peer switch, the VLT interconnect may flap.
• In a VLT domai…
424. …vice. Instead of stopping all traffic on a link, as performed by the traditional Ethernet pause mechanism, PFC pauses traffic on a link according to the 802.1p priority set on a traffic type. You can create lossless flows for storage and server traffic while allowing for loss in case of LAN traffic congestion on the same physical interface. The following illustration shows how PFC handles traffic congestion by pausing the transmission of incoming traffic with dot1p priority 3.
[Figure 1. Priority-Based Flow Control — receive buffers with dot1p priorities as virtual transmit queues]
In the system, PFC is implemented as follows:
• PFC is supported on specified 802.1p priority traffic (dot1p 0 to 7) and is configured per interface. However, only two lossless queues are supported on an interface: one for Fibre Channel over Ethernet (FCoE) converged traffic and one for Internet Small Computer System Interface (iSCSI) storage traffic. Configure the same lossless queues on all ports.
• A dynamic threshold handles intermittent traffic bursts and varies based on the number of PFC priorities contending for buffers, while a static threshold places an upper limit on the transmit time of a queue after receiving a message to pause a specified priority. PFC traffic is paused only after surpassing both static and dynamic thresholds for the priority specified for the port.
• By default, PFC is enabled when you enable DCB. When you enable DCB globall…
425. …w commands.

Table 3. Displaying DCB Configurations
Command: show dcb [stack-unit unit-number]
Output: Displays the data center bridging status, the number of PFC-enabled ports, and the number of PFC-enabled queues. On the master switch in a stack, you can specify a stack-unit number; the range is from 0 to 5.

Command: show interface port-type slot/port pfc statistics
Output: Displays counters for the PFC frames received and transmitted (by dot1p priority class) on an interface.

Command: show interface port-type slot/port pfc {summary | detail}
Output: Displays the PFC configuration applied to ingress traffic on an interface, including priorities and link delay. To clear PFC TLV counters, use the clear pfc counters stack-unit unit-number tengigabitethernet slot/port command.

Command: show interface port-type slot/port ets {summary | detail}
Output: Displays the ETS configuration applied to egress traffic on an interface, including priority groups with priorities and bandwidth allocation. To clear ETS TLV counters, enter the clear ets counters stack-unit unit-number command. (A short sketch of these clear commands follows this excerpt.)

Example of the show dcb Command
Dell#show dcb stack-unit 0 port-set 0
DCB Status                            : Enabled
PFC Queue Count                       : 2
Total Buffer (lossy + lossless) in KB : 3822
PFC Total Buffer in KB                : 1912
PFC Shared Buffer in KB               : 832
PFC Available Buffer in KB            : 1080

Example of the show interface pfc statistics Command
Dell#show interfaces tengigabitethernet 0/3 pfc statistics
Inte…
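A short sketch of the two clear commands referenced in the table above, using a hypothetical stack unit and port:

Dell#clear pfc counters stack-unit 0 tengigabitethernet 0/3
Dell#clear ets counters stack-unit 0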
426. …with Traffic Classes, Multicast Filtering and Virtual LAN Extensions; Definitions of Managed Objects for the Virtual Router Redundancy Protocol; Remote Network Monitoring Management Information Base (Ethernet Statistics Table, Ethernet History Control Table, Ethernet History Table, Alarm Table, Event Table, Log Table); The Interfaces Group MIB; Remote Authentication Dial-In User Service (RADIUS); Remote Network Monitoring Management Information Base for High Capacity Networks, 64 bits (Ethernet Statistics High-Capacity Table, Ethernet History High-Capacity Table); Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP).

Standards Compliance: RFC 3418; RFC 3434; ANSI TIA-1057; draft-grant-tacacs-02; IEEE 802.1AB; IEEE 802.1AB; IEEE 802.1AB; sFlow.org; sFlow.org; FORCE10-IF-EXTENSION-MIB; FORCE10-LINKAGG-MIB; FORCE10-COPY-CONFIG-MIB; FORCE10-MON-MIB; FORCE10-PRODUCTS-MIB; FORCE10-SS-CHASSIS-MIB; FORCE10-SMI; FORCE10-SYSTEM-COMPONENT-MIB; FORCE10-TC-MIB; FORCE10-TRAP-ALARM-MIB; FORCE10-FIPS-SNOOPING-MIB; FORCE10-DCB-MIB.

Standards Compliance — Full Name: Management Information Base (MIB) for the Simple Network Management Protocol (SNMP); Remote Monitoring MIB Extensions for High Capacity Alarms, High Capacity Alarm Table (64 bits); The LLDP Management Information Base extension module for TIA-TR41.4 Media Endpoint Discovery information; The TACACS Protocol Management Information Base…
427. …wnstream ports is enabled. To disable auto-recovery, use the no downstream auto-recover command.

Specify the time, in seconds, to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down. UPLINK-STATE-GROUP mode: defer-timer seconds.
NOTE: This command is available in Standalone and VLT modes only. The range is from 1 to 120.

(Optional) Enter a text description of the uplink-state group. UPLINK-STATE-GROUP mode: description text. The maximum length is 80 alphanumeric characters.

(Optional) Disable upstream-link tracking without deleting the uplink-state group. UPLINK-STATE-GROUP mode: no enable. The default is that upstream-link tracking is automatically enabled in an uplink-state group. To re-enable upstream-link tracking, use the enable command. (A configuration sketch follows this excerpt.)

Clearing a UFD-Disabled Interface in PMUX mode
You can manually bring up a downstream interface in an uplink-state group that UFD disabled and that is in a UFD-Disabled Error state. To re-enable one or more disabled downstream interfaces and clear the UFD-Disabled Error state, use the following command.

Re-enable a downstream interface on the switch/router that is in a UFD-Disabled Error state so that it can send and receive traffic. EXEC mode: clear ufd-disable {interface interface | uplink-state-group group-id}. For interface, enter one of the following interface types: 10 Gigabit Ethernet — enter tengigabitethernet slot/port; slo…
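A minimal sketch of the commands above, using the interface form of the clear command; the group ID, timer value, description, and interface are hypothetical, the upstream/downstream membership of the group is assumed to have been configured already, and the prompts are illustrative:

Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#defer-timer 20
Dell(conf-uplink-state-group-1)#description Uplink-tracking-for-server-ports
Dell(conf-uplink-state-group-1)#end
Dell#clear ufd-disable interface tengigabitethernet 0/5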
428. …x pkts 0, Pause Rx pkts 2, Input Appln Priority TLV pkts 0, Output Appln Priority TLV pkts 0, Error Appln Priority TLV pkts …

The following table describes the show interface pfc summary command fields.

Table 4. show interface pfc summary Command Description
Fields — Description
Interface — Interface type with stack unit and port number.
Admin mode is on; Admin is enabled — PFC Admin mode is on or off, with a list of the configured PFC priorities. When PFC admin mode …
(Fields, continued:) Remote is enabled, Priority list, Remote Willing Status is enabled; Local is enabled; Operational status (local port); PFC DCBx Oper status; State Machine Type; TLV Tx Status; PFC Link Delay; Application Priority TLV: FCOE TLV Tx Status; Application Priority TLV: ISCSI TLV Tx Status; Application Priority TLV: Local FCOE Priority Map; Application Priority TLV: Local ISCSI Priority Map…
429. …xamine and view the operating efficiency and the traffic-handling capacity of member interfaces of a LAG or port-channel bundle. This method of analyzing and tracking the number of packets processed by the member interfaces helps you manage and distribute the packets that are handled by the LAG bundle. The functionality to detect the working efficiency of the LAG bundle interfaces is automatically activated on all the port channels, except the port channel that is configured as a VLT interconnect link, during the booting of the switch. This functionality is supported on I/O Aggregators in stacking, standalone, and VLT modes; it is not supported in programmable MUX (PMUX) mode. By default, this capability is enabled on all of the port channels set up on the switch. You can use the show link-bundle-distribution port-channel interface-number command to display the traffic handling and utilization of the member interfaces of the port channel. The following table describes the output fields of this show command. (A usage sketch follows this excerpt.)

Table 8. Output Field Descriptions for show link-bundle-distribution port-channel Command
Field — Description
Link bundle trigger threshold — Threshold value that is the checkpoint exceeding which the link bundle is marked as being overutilized and an alarm is generated.
LAG bundle number — Number of the LAG bundle.
Utilization [In Percent] — Traffic usage, in percentage, of the packets processed by the port channel.
Alarm State — Indicates whether an a…
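As a quick usage sketch, with a hypothetical port-channel number; the exact output layout varies by release, but it reports the fields listed in the table above (trigger threshold, LAG bundle number, utilization, and alarm state):

Dell#show link-bundle-distribution port-channel 1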
430. …y. If it still receives no response, the querier removes the group from the list associated with the forwarding port and stops forwarding traffic for that group to the subnet.

IGMP Version 3
Conceptually, IGMP version 3 behaves the same as version 2. However, there are differences:
• Version 3 adds the ability to filter by multicast source, which helps the multicast routing protocols avoid forwarding traffic to subnets where there are no interested receivers.
• To enable filtering, routers must keep track of more state information, that is, the list of sources that must be filtered. An additional query type, the group-and-source-specific query, keeps track of state changes, while the group-specific and general queries still refresh existing state.
• Reporting is more efficient and robust. Hosts do not suppress query responses (non-suppression helps track state and enables the immediate-leave and IGMP snooping features), state-change reports are retransmitted to ensure delivery, and a single membership report bundles multiple statements from a single host, rather than sending an individual packet for each statement.
To accommodate these protocol enhancements, the IGMP version 3 packet structure is different from version 2. Queries (shown below in query packet format) are still sent to the all-systems address 224.0.0.1, but reports (shown below in report packet format) are sent to all the IGMP version 3…
431. …y, you cannot simultaneously enable TX and RX on the interface for flow control, and link-level flow control is disabled.
• Buffer space is allocated and de-allocated only when you configure a PFC priority on the port.
• PFC delay constraints place an upper limit on the transmit time of a queue after receiving a message to pause a specified priority.
• By default, PFC is enabled on an interface with no dot1p priorities configured. You can configure the PFC priorities if the switch negotiates with a remote peer using DCBX. During DCBX negotiation with a remote peer, DCBx communicates with the remote peer by link layer discovery protocol (LLDP) type-length-value (TLV) to determine current policies, such as PFC support and enhanced transmission selection (ETS) bandwidth allocation. If the negotiation succeeds and the port is in DCBX Willing mode to receive a peer configuration, PFC parameters from the peer are used to configure PFC priorities on the port. If you enable the link-level flow control mechanism on the interface, DCBX negotiation with a peer is not performed. If the negotiation fails and PFC is enabled on the port, any user-configured PFC input policies are applied. If no PFC dcb-map has been previously applied, the PFC default setting is used (no priorities configured). If you do not enable PFC on an interface, you can enable the 802.3x link-level pause function. By default, the link-level pause is…
432. …y assigned 48-bit multicast address 01:80:C2:00:00:01 is used to send and receive pause frames. To allow full-duplex flow control, stations implementing the pause operation instruct the MAC to enable reception of frames with a destination address equal to this multicast address. The pause frame is defined by IEEE 802.3x and uses MAC Control frames to carry the pause commands. Ethernet pause frames are supported on full duplex only. The only configuration applicable to half-duplex ports is rx off tx off. Note that if a port is over-subscribed, Ethernet Pause Frame flow control does not ensure no-loss behavior.
The following error message appears when trying to enable flow control when half duplex is already configured: Can't configure flowcontrol when half duplex is configured, config ignored.
The following error message appears when trying to enable half duplex and the flow control configuration is on: Can't configure half duplex when flowcontrol is on, config ignored.

MTU Size
The Aggregator auto-configures interfaces to use a maximum MTU size of 12,000 bytes. If a packet includes a Layer 2 header, the difference in bytes between the link MTU and IP MTU must be enough to include the Layer 2 header. For example, for VLAN packets, if the IP MTU is 1400, the link MTU must be no less than 1422 (1400-byte IP MTU + 22-byte VLAN tag = 1422-byte link MTU). The MTU range is 592–12000, with a default of 1554. (A configuration sketch follows this excerpt.) The table below lists out the various…
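A minimal sketch of setting the link MTU and link-level flow control on a port; the interface and values are hypothetical, and note that in some Aggregator modes these parameters are auto-configured and may not be user-settable:

Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#mtu 1554
Dell(conf-if-te-0/2)#flowcontrol rx on tx off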
433. …y group to increase to the maximum link bandwidth. CIN supports only the default dot1p priority-queue assignment in a priority group.

Bandwidth Allocation for DCBX CIN
After an ETS output policy is applied to an interface, if the DCBX version used in your data center network is CIN, a QoS output policy is automatically configured to overwrite the default CIN bandwidth allocation. This default setting divides the bandwidth allocated to each port queue equally between the dot1p priority traffic assigned to the queue.

DCBX Operation
The data center bridging exchange protocol (DCBX) is used by DCB devices to exchange configuration information with directly connected peers using the link layer discovery protocol (LLDP). DCBX can detect the misconfiguration of a peer DCB device and optionally configure peer DCB devices with DCB feature settings to ensure consistent operation in a data center network. DCBX is a prerequisite for using DCB features, such as priority-based flow control (PFC) and enhanced transmission selection (ETS), to exchange link-level configurations in a converged Ethernet environment. DCBX is also deployed in topologies that support lossless operation for FCoE or iSCSI traffic. In these scenarios, all network devices are DCBX-enabled (DCBX is enabled end-to-end). The following versions of DCBX are supported on an Aggregator: CIN, CEE, and IEEE 2.5. DCBX requires LLDP to be enabled on all DCB devices.

DCBx Operation…
434. …y the master switch while it operated as a stack unit.

Configuring the Uplink Speed of Interfaces as 40 Gigabit Ethernet
You can configure the I/O Aggregator switch in standalone, VLT, and stack modes to operate with an uplink speed of 40 gigabits per second. You can use the chassis management controller (CMC) interface to access the switch and specify the 40GbE QSFP module ports to function in 40GbE mode after the subsequent reload operation. By default, these QSFP modules function in 10GbE mode. When you configure the native mode to be 40GbE, the CMC sends a notification to the Aggregator to set the default internal working of all of the ports to be 40GbE after the switch reloads. After you set this configuration, you must enter the reboot command (not press the Reset button, which causes the factory default settings to be applied when the device comes back online) from the CMC for the uplink speed to be effective. This functionality to set the uplink speed is available from the CLI or the CMC interface when the Aggregator functions as a simple MUX or as a VLT node with all of the uplink interfaces configured to be member links in the same LAG bundle. You cannot configure the uplink speed to be set as 40GbE if the Aggregator functions in programmable MUX mode with multiple uplink LAG interfaces or in stacking mode. This is because the CMC is not involved with the configuration of parameters when the Aggregator operates in either of these mo…
435. …you do not configure server-facing ports for LACP-based NIC teaming, a port is treated as an individual port in the active negotiating state.

Auto-Configured LACP Timeout
LACP PDUs are exchanged between port-channel (LAG) interfaces to maintain LACP sessions. LACP PDUs are transmitted at a slow or fast transmission rate, depending on the LACP timeout value configured on the partner system. The timeout value is the amount of time that a LAG interface waits for a PDU from the partner system before bringing the LACP session down. The default timeout is long-timeout (30 seconds) and is not user-configurable on the Aggregator.

LACP Example
The illustration below shows how LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top-of-rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming.
[Figure 17. LACP Operation on an Aggregator — ToR switch connected over uplink LAG 128; server LAG 1 connecting to Server 1 among the installed server blades in the M1000e enclosure]

Link Aggregation Control Protocol (LACP)
This chapter contains commands for Dell Networking's implementation of the link aggregation control protocol (LACP) for creating dynamic link aggregation groups (LAGs), known as port channels, in the Dell Networking Operating System (OS).
NOTE: …
436. …ysical interface goes down in the port channel, another physical interface carries the traffic.

Port Channel Benefits
A port-channel interface provides many benefits, including easy management, link redundancy, and sharing. Port channels are transparent to network configurations and can be modified and managed as one interface. With this feature, you can create larger-capacity interfaces by utilizing a group of lower-speed links. For example, you can build a 40-Gigabit interface by aggregating four 10-Gigabit Ethernet interfaces together. If one of the four interfaces fails, traffic is redistributed across the three remaining interfaces.

Port Channel Implementation
An Aggregator supports only port channels that are dynamically configured using the link aggregation control protocol (LACP). For more information, refer to Link Aggregation. Statically configured port channels are not supported. The table below lists the number of port channels per platform. (A verification sketch follows this excerpt.)

Platform            Port channels    Members/Channel
M I/O Aggregator    128              16

As soon as a port channel is auto-configured, the Dell Networking OS treats it like a physical interface. For example, IEEE 802.1Q tagging is maintained while the physical interface is in the port channel. Member ports of a LAG are added and programmed into hardware in a predictable order based on the port ID, instead of in the order in which the ports come up. With this implementation, load balancing yi…
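Since the Aggregator's port channels are created dynamically, a quick way to verify them is the brief port-channel view; a sketch, assuming the brief form of the command is available in your release (the output lists each LAG with its mode, status, and member ports):

Dell#show interfaces port-channel brief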
