How to prepare for Cisco CCNA Data Center 640-916 DCICT

After passing the first exam required for the CCNA Data Center certification, DCICN (640-911), I moved on to studying for the second exam: DCICT (640-916). Although this exam held fewer surprises for me in terms of content, it was still a lot of information to process and study, especially since I gathered it from many different sources. To help with studying, I decided to do the same thing as with the first exam. You can find the information I gathered to pass the exam in this post. I hope it helps.

Exam preparation

To prepare, my first source of information was the Cisco Press book: the 640-916 Official Certification Guide. The book is very thorough in explaining details; sometimes I even found it a little overwhelming. Personally, I would have preferred it to first explain a technology in simple terms and only then dive into the details.

Besides the book, I did a lot of Googling, mainly to find explanations through the eyes of other people. I had also already learned that it's a good idea to look at the configuration guides on the Cisco website, and it's definitely worth doing some simulation in a lab environment. Even a very basic setup usually reveals things I didn't think about after just reading a guide.

Unlike with the 640-911 exam, I found that this exam cleanly covers the exam objectives that are available from Cisco. The topics were quite evenly distributed and clearly stated. Sometimes a small detail in a question really matters, so pay attention to that. Compared with 640-911, I found the exam itself easier but the content more difficult. That may sound confusing, but it was my experience.

As I also mentioned in my previous post, How to prepare for Cisco CCNA Data Center 640-911 DCICN: you need more than this post to pass the exam. It's important to understand the concepts and purpose of the technologies. Everything below is the information I used to help me study.

Network architecture

A classic data center network has a modular, multilayer network design. The most commonly used model has three layers. A modular design is scalable, and each layer has its own specific task:

Core layer:

  • Default summary route
  • High speed switching
  • Layer 3 routing

Distribution/aggregation layer:

  • Network services (firewall/IPS/ACE/WAAS) available to all servers
  • Policy enforcement
  • Boundary between L2 & L3
  • Aggregates the access switches

Access layer:

  • Host connectivity
  • QoS marking
  • VLAN marking

More information:

Variations on the traditional three layer model:

  • Collapsed core: Distribution and Core are one layer. Can be used in smaller environments
  • Spine and leaf
    • Leaf = Access layer. More leaf-switches = more access connectivity
    • Spine connects all leafs in a redundant way. More spine-switches = more switching capacity
    • FabricPath
    • ACI (Application Centric Infrastructure)

OTV (Overlay Transport Virtualization)

OTV is a technology to extend L2 over geographically separated data centers.

Other examples are: EoMPLS (Ethernet over Multiprotocol Label Switching), VPLS (Virtual Private LAN services), Dark fiber

Characteristics:

  • Creates a logical overlay network over the transport network
  • Designated forwarding device per VLAN: AED (Authoritative Edge Device)
  • OTV devices need to form an adjacency via multicast or unicast
  • Supported on Nexus 7000 and ASR 1000
  • Requires M or F3 line cards
  • MAC addresses are routed
  • BPDUs are not forwarded over the overlay interface
  • Uses IS-IS routing protocol for control plane
  • No fragmentation by the edge device: the DF bit is set, so the transport MTU needs to be large enough
  • ARP snooping (caches ARP requests and emulates them if needed)
  • Multihoming is supported
  • Requires the LAN transport license
  • OTV encapsulation adds 42 bytes of overhead (increase the transport MTU accordingly)

Terminology:

  • OTV edge: device that participates in OTV
  • OTV internal interface: interface on the L2 side of the network
  • OTV join interface: interface on the L3 side of the network (can be a physical/subinterface/logical (port-channel))
  • OTV overlay interface: logical interface that is responsible for the OTV traffic over the join interface

Configuration:

In the example, I'll configure OTV over a multicast transport for VLAN 5 and use VLAN 10 as the OTV site VLAN. This configuration needs to be done on both OTV edges (in the example: S1=192.168.100.10, S2=192.168.100.20):

Configure the OTV join interface (L3 side):
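Roughly something like this on S1 (the interface number is an assumption; IGMPv3 is needed on the join interface for the multicast transport):

    interface Ethernet2/1
      ip address 192.168.100.10/24
      ip igmp version 3
      no shutdown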

General OTV configuration:
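Along these lines (the site identifier value is an assumption; VLAN 10 is used as the site VLAN):

    feature otv
    otv site-vlan 10
    otv site-identifier 0x1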

Configure overlay interface to use over multicast:
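A sketch, with the multicast control and data groups as assumed example values:

    interface Overlay1
      otv join-interface Ethernet2/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 5
      no shutdown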

Debug/check:
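Useful show commands here:

    show otv
    show otv adjacency
    show otv vlan
    show otv route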

More information: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/OTV/quick_start_guide/b-Cisco-Nexus-7000-Series-OTV-QSG.html

vPC (virtual Port Channel)

vPC presents two upstream switches as one virtual switch to hosts that connect using a port channel.

Characteristics:

  • Loop-free
  • Uses all bandwidth (without vPC, one of the links would be blocked by STP)
  • Max. two switches per vPC domain
  • Max. one vPC domain per switch (or VDC)
  • Available on M or F line cards
  • Best practice is to use dedicated rate mode
  • The vPC keepalive is required to establish the vPC, but the vPC stays up if the keepalive is lost afterwards
  • Traffic normally doesn’t pass the vPC peer link (but it can)
  • There is no preemption after a failover

Terminology:

  • vPC peer: one of both switches on the upstream side
  • vPC peer link: link between peers (10GE port-channel)
  • vPC peer keepalive: separate link between peers (mgmt0 is allowed)
  • vPC member port: port used to connect hosts to the virtual port channel
  • orphan port: port that doesn’t use vPC but does provide connectivity to a vPC VLAN
  • CFS: Cisco Fabric Services Protocol: state sync/config over vPC peer link

Configuration:

In the example, I'll configure vPC on one of the two vPC peers. Ethernet 2/1 and 2/2 are used for the vPC peer link, mgmt0 is used for the keepalive and Ethernet 2/3 is used to connect a host (member port).

Basic configuration and enabling vPC:
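At a minimum, the features need to be enabled (LACP only if the port channels use it):

    feature lacp
    feature vpc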

vPC configuration and keepalive link:
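Something like this, where the keepalive IPs are the mgmt0 addresses of both peers (values assumed):

    vpc domain 1
      peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management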

vPC peer link:
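A sketch using Ethernet 2/1-2 bundled into port-channel 10 (the port-channel number is an assumption):

    interface Ethernet2/1-2
      channel-group 10 mode active
    interface port-channel 10
      switchport
      switchport mode trunk
      vpc peer-link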

vPC member port:
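For the host on Ethernet 2/3, roughly as follows (port-channel and vPC numbers are assumptions; the vPC number must match on both peers):

    interface Ethernet2/3
      channel-group 20 mode active
    interface port-channel 20
      switchport
      switchport mode trunk
      vpc 20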

Debug/check:
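Typical checks:

    show vpc
    show vpc peer-keepalive
    show vpc consistency-parameters global
    show port-channel summary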

More information:

FabricPath

FabricPath is used to build a spine/leaf network design in which a large layer 2 environment becomes scalable and gets the benefits of layer 3 routing, like multipathing, fast convergence and SPF, without creating multiple segments or taking on the disadvantages of STP.

Characteristics:

  • Cisco proprietary
  • Does address lookup for incoming traffic to find outgoing destination (SPF)
  • ECMP (Equal Cost Multipathing) is possible up to 16 links
  • Uses a tree topology to determine routes for ARP, broadcast or multicast traffic. Trees are per VLAN
  • Uses IS-IS as routing protocol
  • Uses conversational MAC learning: only learns relevant MACs on a port
  • Not available on M1 line cards, only on F1/F2/F3
  • Ethertype: 0x8903
  • STP BPDUs do not pass core ports
  • To classic switches, the FabricPath network appears as one STP bridge with a fixed bridge ID (C84C.75FA.6000). The FabricPath edge always needs to be the STP root for FabricPath VLANs
  • vPC+ enables FabricPath to work with a vPC domain (FabricPath emulates a switch)

Terminology:

  • Core port: ports that form the FabricPath network. Uses FabricPath header to encapsulate all traffic
  • Edge port: link to the classic network (CE): uses standard ethernet frames
  • DRAP (Dynamic Resource Allocation Protocol) assigns ID’s to each switch
  • FTAG (FabricPath forwarding Tag): 10 bit traffic id tag in the FabricPath header

Known unicast behavior: uses an existing mapping between unicast MAC and destination switch ID. Flow based load balancing over equal cost paths can be used.

  1. Receive normal ethernet frame on edge port
  2. Perform a lookup to identify the destination switch ID: success
  3. Once the destination switch ID is known, look for the next-hop to reach the destination
  4. Encapsulate the original ethernet frame with a FabricPath header
  5. The next-hop is a spine switch and it will forward the FabricPath-encapsulated frame to the destination leaf switch
  6. On the destination leaf switch: check if this is really the end-destination and determine the edge port.
  7. Remove the FabricPath header and forward the ethernet frame to the destination edge port

Multicast behavior: uses multidimensional trees calculated by IS-IS. Uses pruning to not forward to switches that do not have receivers

Broadcast behavior: edge switches do not learn new MACs from broadcasts, but updates of already known addresses are performed

Unknown unicast behavior: conversational learning

  1. Edge port owning switch learns the source MAC for the host connected
  2. Perform a lookup to identify destination switch ID: fails
  3. Encapsulate the original ethernet frame with a FabricPath header
  4. Unicast root (elected earlier) will forward the request to all leafs (they own the devices)
  5. Leaf owning the device will answer via the root to the incoming port
  6. Learn the layer 2 route
  7. Learn the destination MAC only on the switch owning the edge port connected to the destination

Configuration:

Install and enable fabricpath:
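Something like:

    install feature-set fabricpath
    feature-set fabricpath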

Optionally configure a fixed switch-id:
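For example (switch-id value assumed):

    fabricpath switch-id 10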

Configure FabricPath VLAN’s:
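For example, for VLANs 10-20 (range assumed):

    vlan 10-20
      mode fabricpath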

Configure interfaces that will be FabricPath core ports (these will form an IS-IS adjacency):
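For example (interface assumed):

    interface Ethernet2/1
      switchport mode fabricpath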

Debug/check:
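Typical checks:

    show fabricpath isis adjacency
    show fabricpath switch-id
    show fabricpath route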

More information:

FEX (Fabric EXtender)

FEX is a technology which allows you to extend the capacity of a parent switch with remote line cards (FEX) in a separate chassis (or card), connected over Ethernet fabric links.

Characteristics:

  • FEX-device itself has no local configuration
  • Nexus 2000 series
  • Uses VN-tags (802.1BR) to identify FEX ports over the link between parent and FEX
  • ACLs, VLANs, QoS,… are handled on the parent for any FEX type
  • Can use static pinning: dedicated link for a group of ports between parent and FEX
  • Can use dynamic pinning: port-channel for all links between parent and FEX
  • Ports on FEX are also called satellite ports

FEX-types:

  • (ToR/Rack) FEX: Nexus 2000 connected to a parent Nexus 5000/7000; fabric to top of rack
  • Adapter FEX: Use a NIC or CNA in one host as a FEX, creates vNICs for the OS: fabric to server
  • VM-FEX: extension on adapter FEX to the level of a hypervisor (KVM, Hyper-V or VMWare). Each guest on the hypervisor can have a vNIC: fabric to VM
  • Blade/chassis FEX: FEX-adapter in a Cisco UCS blade enclosure (for example 2104XP): fabric to UCS blade

Terminology:

  • NIF: Network interface: port on the FEX towards the parent switch. Uses VN-tags
  • HIF: Host interface: port on the FEX towards the host (server)
  • VIF: Virtual interface: HIF including VLAN and other parameters
  • LIF: Logical interface: representation of HIF on the parent

Configuration:

Configuration is only done on the parent switch; there is no console or similar on the FEX side.
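As a rough sketch of a static-pinning setup on the parent (the FEX ID and interface are assumptions; the FEX host ports then show up as Ethernet100/1/x):

    feature fex
    fex 100
      pinning max-links 1
    interface Ethernet1/1
      switchport mode fex-fabric
      fex associate 100

Check the result with show fex.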

Model overview:

More information:

Storage architecture

Traditional SAN infrastructure types:

  • Point to point (DAS):
    • Direct connection between two FC-devices
    • Uses 1-bit addressing (one port is 0x000000, the other is 0x000001)
  • Arbitrated Loop (FC-AL):
    • All devices are connected in a loop and all devices can access all storage
    • A hub is also considered as a loop-device
    • Uses 8 bit physical address (AL-PA)
    • 126 addresses available for nodes (one is reserved for the FL port)
  • Switched Fabric
    • Single tier (one layer of switches)
    • Multitier (usually in a core-edge design): cost-effective for larger designs
    • Uses 24 bit addressing: 8 bit for switch domain (unique for each switch), 8 bit for the area (group of ports in a domain) and 8 bit for the device
    • 239 domains available (01-EF)

For a switched fabric it's good practice to have at least two fabrics (A/B), which are separated either by physical switches or by VSANs.

Device connection process:

  1. Port Initialization
  2. Fabric login (FLOGI)
    1. Switch assigns an FCID, based on the WWN
    2. Switch reserves the necessary buffer-to-buffer credits (the larger the distance, the more B2B credits are needed)
  3. Port login (PLOGI)
    1. Between nodes that want to communicate

Terminology:

  • File based protocols: CIFS/SMB/NFS
  • Block based protocols: SCSI/iSCSI/FC/FCoE
  • NPV: allows multiple FLOGI on one physical link. It creates multiple virtual F-ports. Since the number of domain ID’s is limited to 239 NPV allows sharing of the same domain ID over multiple edge switches. You can see it as a kind of proxy between edge and core.
  • NPIV: N port identification virtualization: provides a means to assign multiple FC IDs to a single N port, which allows the server to assign unique FC IDs to different applications. Commonly used for virtualization.
  • WWN = hardcoded, comparable to a MAC address
    • A WWN that starts with 20: = Cisco
    • A WWN that has the second byte different from 0 is an extended WWN
  • WWNN = device ID
  • WWPN = device port (different, for example, for a dual-port HBA)
  • FWWN = fabric WWN
  • PWWN = port WWN
  • FSPF: Fabric Shortest Path First
    • Routing for FC
    • Uses domain ID (up to 75/fabric)
    • Loop free over E or TE ports
    • Uses a topology DB ~ comparable with OSPF
  • Directory/name server
    • Address: FFFFFC
    • N-port can do a query on the nameserver
    • Contains address, WWN, volume names
    • Principal/secondary (per VSAN) based on priority
  • FCIP: FC frames in IP packets over normal L3 link (uses acceleration and compression techniques)
  • VSAN: like VLAN for SAN: isolation between FC
    • max 4094 VSAN’s
    • FCID’s can be re-used in different VSANs

Port types:

  • N = Node (server/storage device/tape)
  • F = Fabric (switch)
  • E = Expansion (link between switches, 1 VSAN; ISL)
  • TE = Trunking Expansion (trunk between 2 switches carrying multiple VSANs: EISL)
  • TF= Trunking Fabric: connects to TN port: trunking to node, sends tagged frames
  • NP = Node Proxy: NPV switch (blade server, connected to F-port on switch)
  • VE = Virtual E (FCoE)
  • VF = Virtual F (FCoE)
  • VNP = Virtual NPV (FCoE)
  • FL = Fabric Loop: connects to NL-ports and FL-ports
  • NL = Node loop (connected to hub)

Fibre Channel protocol stack:

  • FC-0: Physical interface: fiber characteristics like singlemode/multimode
  • FC-1: Encode/decode: clock/data stream/parity/link control
  • FC-2: Framing: flow control/CRC/service class/buffer credits
  • FC-3: Common services: addressing/login/name server
  • FC-4: Application: ULP (upper layer protocols)

Cisco MDS

Characteristics:

  • MDS = Multilayer Director Switch
  • 9500/9700 = modular configuration
  • 9100/9200= fixed configuration
  • Initial setup asks for SAN-specific items:
    • Default zoneset distribution
    • Default switchport mode (by default: F port)
  • Nexus 7000 doesn’t do inter-VSAN routing and doesn’t have native FC
  • Can be managed/configured by DCNM (Data Center Network Manager)

NPV

In NPV mode, the edge switch relays all traffic from server-side ports to the core switch. The core switch provides F port functionality (such as login and port security) and all the Fibre Channel switching capabilities. This means that the edge looks like a host to the core, and commands such as show flogi database can no longer be executed on the edge.

When enabling NPV mode (feature npv), the switch configuration is erased and the switch reboots. The default switch mode is fabric mode.

When FCoE and NPV are enabled at the same time (feature fcoe-npv), the switch doesn't need to change between fabric mode and NPV mode, so there is no write erase and no reboot.

Disruptive load balancing can be configured only in NPV mode.

When disruptive load balancing is enabled, NPV redistributes the server interfaces across all available NP uplinks when a new NP uplink becomes operational. To move a server interface from one NP uplink to another NP uplink, NPV forces reinitialization of the server interface so that the server performs a new login to the core switch. This action causes traffic disruption on the attached end devices.

To avoid disruption of server traffic, enable this feature only after adding a new NP uplink, and then disable it again after the server interfaces have been redistributed.

More information:

Debug:

Port-type, VSAN, speed, port-channel:
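A command that shows all of these columns:

    show interface brief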

Check local fabric logins (includes VSAN):
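That would be:

    show flogi database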

Fabric wide:
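The fabric-wide name server view:

    show fcns database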

Check who is the principal for nameserver:
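The principal switch is marked in the domain list per VSAN:

    show fcdomain domain-list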

Unified ports

  • Module names usually have UP in their name
  • When changing a port from ethernet to FC: switch or module reboot
  • When changing a port from ethernet to FC: port is renamed from eth2/10 to fc2/10
  • Lowest numbers are reserved for ethernet, higher for FC

Configuration:

Switch between port types:
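On a Nexus 5500 this is done per slot and requires saving the configuration and reloading (the port range is an assumption; FC has to be on the highest-numbered ports):

    slot 1
      port 31-32 type fc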

Create a VSAN:
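For example (VSAN number and name assumed):

    vsan database
      vsan 10 name Fabric-A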

Configure ports:
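A sketch that puts an FC port in the VSAN and brings it up (interface assumed):

    vsan database
      vsan 10 interface fc1/32
    interface fc1/32
      switchport mode F
      no shutdown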

Zoning & masking:

  • Isolation between initiator and target for security reasons
  • Zoning: done on the switch: protects SAN-devices from communicating
  • LUN masking: done on a host: protects parts (LUN) of SAN-devices from communicating
  • A device can be a member of multiple zones
    • One initiator to one target = recommended
    • One initiator to multiple targets = accepted
    • Multiple initiators to multiple targets = not recommended
  • In the default zone devices cannot communicate
  • Maximum 16,000 zones
  • Zone modes:
    • basic
    • enhanced (nobody can change anything until active changes are committed)

Zone configuration:

  1. Add physical ports to VSAN
  2. Configure zones per VSAN
  3. Add zones to the zoneset (only one zoneset can be active per fabric)

List logged on devices:
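Locally and fabric-wide respectively:

    show flogi database
    show fcns database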

Create an alias for a device:
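For example with an fcalias (the name and pWWN are assumptions):

    fcalias name SERVER1 vsan 10
      member pwwn 21:00:00:24:ff:01:02:03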

Create a zone and add ports:
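For example (zone name and pWWN assumed):

    zone name Z_SERVER1_ARRAY1 vsan 10
      member fcalias SERVER1
      member pwwn 50:06:01:60:3b:e0:11:22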

Create a zoneset and add zone:
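For example (zoneset name assumed):

    zoneset name ZS_VSAN10 vsan 10
      member Z_SERVER1_ARRAY1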

Activate a zoneset:
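Which looks like:

    zoneset activate name ZS_VSAN10 vsan 10

Verify afterwards with show zoneset active vsan 10.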

More information:

FCoE (Fibre Channel over Ethernet)

FCoE is a technology to converge Ethernet and Fibre Channel traffic over the same link. Hosts that want to use FCoE need a special CNA-adapter which allows FCoE and Ethernet.

Characteristics

  • FCoE encapsulates an FC frame, which in turn encapsulates a SCSI frame
  • FCoE NPV: parent: NPV, child: NPIV -> FIP snooping
  • VFC binds to VSAN and physical Ethernet interface. The port must connect to a trunk
  • FCoE is only allowed on 10G/40G interfaces
  • FCoE VLAN that matches with the VFC VSAN must be allowed
  • FCoE VLAN can’t be the native VLAN
  • VLAN1 can’t be the FCoE VLAN
  • Multihop FCoE crosses the access layer up to the core (for Nexus 7000: requires F1/F2e or F2 modules)
  • Best practice: once FCoE passes the access layer, use dedicated links for FC traffic and Ethernet traffic
  • FCoE is only possible with a Sup2/2e supervisor
  • FCoE on Cisco requires every switch in the path to be FCoE aware = FCoE dense model.

Terminology:

  • DCB: Data Center Bridging: Standard to have a low latency lossless connection suited for FCoE traffic
  • DCB eXchange protocol (DCBX): advertises capabilities over the link and checks both sides prior to data transfer
  • CNA: Converged Network Adapter: Adapter for hosts that is a combination of NIC and FCoE interface (appears as two separate devices for the OS)
  • FCF: FCoE switch
  • FIP: FCoE init protocol
  • FCF: FCoE Fibre Channel forwarder: switches in the FCoE path that are FCoE aware to keep traffic lossless
  • VFC: Virtual FC interface
  • 802.1Qbb (lossless Ethernet): Priority Flow Control
  • 802.1Qaz (bandwidth management): Enhanced Transmission Selection
  • 802.1p: Implements QoS at MAC-level with a 3-bit field called the Priority Code Point (PCP) in the Ethernet frame header

FCoE connection process (FIP):

  1. VLAN discovery (multicast to FCF MAC over native VLAN)
  2. FCF discovery (multicast to the FCF MAC)
  3. FLOGI

Configuration:

Enable FCoE:
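On a Nexus 5000 this is a single feature (on a Nexus 7000 it's a feature-set):

    feature fcoe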

Create a VSAN for FC:
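For example VSAN 100 (number assumed):

    vsan database
      vsan 100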

Create a VLAN for Ethernet. It’s easy to keep the VLAN and VSAN number equal:
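Mapping the FCoE VLAN to the VSAN (same number used here):

    vlan 100
      fcoe vsan 100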

Create VFC interface and map it to a physical port. It’s easy to keep the same number for the VFC as the VSAN.
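For example (VFC and Ethernet interface numbers assumed):

    interface vfc100
      bind interface Ethernet1/10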

Bind the VFC to a VLAN and VSAN:
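The VSAN binding happens in the VSAN database (the VLAN side was already handled by the fcoe vsan mapping above):

    vsan database
      vsan 100 interface vfc100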

Configure the physical port. It must be a trunk and the allowed VLAN must match:
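Roughly:

    interface Ethernet1/10
      switchport mode trunk
      switchport trunk allowed vlan 1,100
      no shutdown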

Show translation between VLAN ID and VSAN ID and debug:
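Typical commands:

    show vlan fcoe
    show interface vfc100
    show flogi database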


Configure a trunk over FCoE. This configuration must match on both sides of the trunk:

Create VFC and bind it to a physical interface (number is free to choose):
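For a VE (multihop) link, something like this (numbers assumed):

    interface vfc20
      bind interface Ethernet1/20
      switchport mode E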

Map the VFC to the correct VSAN:
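Again via the VSAN database:

    vsan database
      vsan 100 interface vfc20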

Enable the trunk interface:
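A sketch of the underlying Ethernet trunk plus bringing up the VFC:

    interface Ethernet1/20
      switchport mode trunk
      switchport trunk allowed vlan 1,100
      no shutdown
    interface vfc20
      no shutdown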

Debug:
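For example:

    show interface vfc20
    show fcoe database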

Nexus general

Connection parameters: 9600/8/N/1 (baud/data bits/parity/stop bits)

Planes:

  • Planes are isolated (one faulty plane does not influence the other)
  • Management plane: config/policy/CLI/GUI/SNMP/API/CoPP…
    • EEM: Embedded Event Manager: Can take action based on an event (syslog/CLI/GOLD/environment/hardware)
  • Control plane: OSPF/EIGRP/STP/CDP/BFD/UDLD/LACP/ARP/FabricPath/VRRP/…
    • CoPP: Control Plane Policing: protects the control plane when problems occur in the data plane (for example: broadcast storm/DoS)
    • Possible configuration for CoPP
      • CIR (Committed information rate)
      • PIR (Peak information rate)
      • EB (Extended burst)
    • Predefined CoPP policies: Strict/Moderate/Lenient/Dense/Skip (default: skip)
    • Ethanalyzer: wireshark for the control plane
  • Data plane: packet forwarding

Ways of data forwarding in the data plane:

  •  Store and forward: get the complete packet/frame in memory and then forward
    • Advantages:
      • Error checking of complete frame (FCS)
      • Possibility to buffer frames (more robust)
      • ACL
  • Cut-through: forward a frame as soon as the destination is known
    • Advantages:
      • Lower latency
      • Flags a frame instead of dropping it in case FCS is not correct
Source  Destination  Mode
10G     10G          Cut-through
10G     1G           Cut-through
1G      1G           Store and forward
1G      10G          Store and forward

VRF (Virtual Routing and Forwarding)

A VRF allows one switch to have multiple IP routing protocol stacks in the same environment.

Characteristics:

  • Virtualization of the IP routing protocol
  • Separate routing and forwarding decisions
  • IPv4 and IPv6 have separate tables
  • Membership on an interface determines which VRF will be used
  • mgmt0 can only be in the management VRF

VRF-aware commands:
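A few examples using the management VRF:

    show ip route vrf management
    show ip interface brief vrf management
    ping 192.168.0.1 vrf management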

Another way is to set the routing-context to a certain VRF. All commands will then be executed in that VRF. Notice the changed prompt:
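For example:

    switch# routing-context vrf management
    switch%management# show ip route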

Create a VRF and add an interface to it:
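A sketch (the VRF name, interface and IP are assumptions; adding an interface to a VRF removes its existing L3 configuration, so set the IP afterwards):

    vrf context TEST
    interface Ethernet2/1
      vrf member TEST
      ip address 10.1.1.1/24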


VDC (Virtual Device Context)

A VDC allows one physical switch to be split up into multiple virtual switches. Each VDC runs its own processes and configuration.

Characteristics:

  • Nexus 7000 with supervisor Sup1 or Sup2: 4+1 (admin VDC)
  • Nexus 7000 with supervisor Sup2E (7000/7700): 8+1 VDC allowed
  • Extra VDCs require a license
  • The admin VDC can’t own interfaces but has special privileges to manage the other VDCs and RBAC
  • Communication between VDCs requires a physical connection between member ports of those VDCs

VDC types:

  • Default VDC: manage physical device, other VDC’s, upgrades, captures,…
  • Nondefault VDC: user-created from default or admin VDC
  • Admin VDC:
    • Can be enabled with system admin-vdc (migrate).
    • Replaces the default VDC
    • Features or feature-sets cannot be enabled from here
    • Can’t have interfaces assigned to it
    • Doesn’t require special license
  • Storage VDC:
    • nondefault
    • Requires the FCoE licence (but no VDC license)
    • Can have only FCoE or shared interfaces

Configuration:

Show current used VDCs and license:
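For example:

    show vdc
    show license usage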

Show members of VDC
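Which interfaces belong to which VDC:

    show vdc membership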

Create a new VDC and allocate ports to it:
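For example (VDC name and interfaces assumed; run from the default/admin VDC):

    vdc TEST
      allocate interface Ethernet2/9-12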

Switch between VDC:
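From the default/admin VDC, and back again:

    switchto vdc TEST
    switchback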


Nexus 1000v

The Nexus 1000v is a virtual switch (appliance) running on a hypervisor (VMware ESXi, Hyper-V,…). It extends the networking functionality of the hypervisor.

Characteristics:

  • Uses a control interface (in a separate VLAN) to exchange heartbeat and control messages
  • Uses a packet-interface for CDP and IGMP messages
  • Up to 250 ESX hosts per VSM
  • Port profiles (VLAN, ACL, SPAN, ERSPAN, QoS,…) are mobile and follow the guest when doing a vMotion
  • Every host (hypervisor) needs to have a VEM
  • Comes in Essential (free) and Advanced edition
  • Communication between VSM and VEM can be either L2 or L3 (recommended)
    • in L3 mode, every host needs a vmkernel interface with an IP
  • VSM connects to vCenter using SSL. Communication via the VMWare API

Terminology:

  • VSM: Virtual Supervisor Module
  • VEM: Virtual Ethernet Module
  • vEth: Virtual Ethernet Port
  • VSG: Virtual Security Gateway
    • deployed on top of 1000v, provides security between guests
    • vPath: first packet of a flow is sent to VSG for inspection

Installation procedure:

  1. Create VLAN’s on the vSwitch on every host that will run a VSM for VSM control and management traffic
  2. Deploy the OVA for the primary VSM (select manual or automatic setup)
  3. During deployment, map the control and management VLAN’s as created in step 1
  4. Connect to the VM’s console and do basic configuration
    1. Admin password
    2. Role: primary/secondary
    3. Domain ID
    4. Basic configuration dialog (as on a normal Nexus switch): SNMP/switch name/IP/ssh/http-server/SVS control mode: L2/L3)
  5. Deploy the OVA for the secondary VSM (also see 3) (select VSM secondary)
  6. Enter the domain ID and admin password of the primary
  7. The secondary automatically gets the configuration of the primary
  8. Create a connection to vCenter (svs connection <name>, protocol, remote ip,…)
    1. Distributed virtual switch gets automatically created on the vCenter
  9. Configure the rest of the network (VLAN’s, port profiles,…)
  10. Create a vmkernel interface for each host running a VSM for VEM connectivity

Configure port profile:
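A sketch of a vEthernet port profile (the name and VLAN are assumptions):

    port-profile type vethernet WEB
      switchport mode access
      switchport access vlan 10
      vmware port-group
      no shutdown
      state enabled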

Only after the last command (state enabled) does the port profile get pushed to VMware.

Debug:

vCenter connection and config:
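For example:

    show svs connections
    show svs domain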

Specific port profile:
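For the profile created above (name assumed):

    show port-profile name WEB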

Show connected VSM and VEM:
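The VSM and VEMs show up as modules:

    show module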

More information:

UCS (Unified Computing System)

UCS is Cisco's line of x86 servers. They come in two variations (B-series: blade and C-series: rack). UCS servers are managed by UCSM (UCS Manager), which runs on the Fabric Interconnects.

Fabric Interconnect:

  • UCS 6xxx-series Fabric Interconnect
    • Has 1-2 expansion modules (16 ports/module)
    • 6248: 32 fixed ports, max. 48 ports
    • 6296: 48 fixed ports, max. 96 ports
    • 6100-series has no unified ports. FC is only available via a module
  • Up to two nodes for high availability
  • Data connectivity is active-active
  • Management (UCSM) connectivity is active-passive
  • Cluster port on Fabric Interconnect for interconnection (L1/L2)
  • Connection to blades:
    • Best practice: port channel connection (just add a member if extra capacity is required)
    • Every blade has two paths (2 FEX I/O modules) to the Fabric Interconnects
    • SAN Fabric should not be cross-connected (A/B separate)

Fabric Interconnect port types:

  • Unconfigured
  • Server: to a rack server or blade chassis
  • Ethernet uplink
    • Ethernet uplink (to upstream network)
    • FCoE uplink
    • Appliance: to NFS-storage
  • Fibre Channel uplink
    • Fibre Channel uplink port to upstream SAN-fabric
    • Fibre Channel storage port for Direct Attached FC Storage

Installation procedure:

  1. Setup primary FI
  2. Setup secondary FI (requires the cluster IP and admin password chosen in step 1)
  3. Login to the cluster IP
  4. Configure connectivity
    1. Create VLAN’s
    2. Choose ports for LAN uplink
    3. Create VSAN’s (min 1 fabric A, 1 fabric B)
    4. Choose ports for SAN uplink
  5. Configure port channels
  6. Start discovery

UCSM (UCS Manager)

  • Running on the Fabric Interconnect
  • GUI/CLI/XML API
  • Has a floating (virtual) IP for management
  • Primary and secondary (DB and state is replicated)
  • Can manage up to 160 servers
  • Stateless computing: apply a profile to a physical server to set FW, boot device, BIOS, MAC, VLAN, QoS,…

Most hardware is automatically discovered by UCSM. Before enabling server ports, make sure that the discovery policy is correctly configured.

Chassis discovery process:

  1. Server port on FI becomes linked with an I/O module in a blade chassis
  2. Communicate with the management controller in the blade chassis
  3. Check compatibility/serial/existing or new/…
  4. Accept the chassis
  5. Discover the rest of the hardware: model, firmware, power supplies,…

Chassis/FEX discovery policy (Equipment tab) allows you to set the minimum conditions that should be matched before a chassis is detected. For example minimum number of links required to a chassis, minimum number of PSU or require manual acknowledgement before adding a new chassis.

Server discovery process

  1. Slot in a blade chassis detects new blade
  2. Communicate with the CIMC on the blade in the slot
  3. Discover basic information: mode, serial, BIOS, CIMC, CPU, memory, HBA, NIC,…
  4. Boot UCS Utility OS
  5. Discover further information: local disk info, specific HBA or NIC info

Blade chassis:

  • UCS 5xxx Blade server chassis
    • 5108: 8 blades, 4 PSU, 8 fan modules
    • 2 I/O modules (FEX) on the backside, connectivity to Fabric Interconnect
    • Can connect to Fabric Interconnect with 2/4/8/16 10G links depending on the required capacity

B-series Blade:

  • UCS B200M3 Blade
  • Has a CIMC (Cisco Integrated Management Controller):
    • allows discovery of blades
    • allows KVM access to the server
    • IPMI

C-series Rackserver:

  • UCS C240M3
  • Can also be managed by UCSM
  • also has CIMC

UCS VIC: Virtual Interface Card

  • Mezzanine adapter in server/blade
  • Can have up to 256 virtual network adapters presented to the OS or hypervisor
  • Can be configured as IP, FCoE, Adapter FEX or VM FEX
  • Allows fabric failover: failover in hardware, without the OS being aware of or needing NIC teaming

More information:

Network services

Cisco ACE (Application Control Engine): Load balancer

Features:

  • Can be a separate device or a module in an IOS router
  • Uses intelligent NAT
  • Makes decisions from Layer 3 to Layer 7
  • Decisions are made in hardware
  • Can do SSL encryption/decryption (in hardware by using a daughter card), certificate management and SSL offloading
  • Does HTTP optimization and compression
  • Can be split up in contexts (comparable to VDC on Nexus)
    • Max 250 contexts
    • Admin context to manage
    • Every context is a sandbox
    • Failover is context-aware
    • No Layer 2 communication between contexts
    • Contexts can have dedicated resources (bandwidth, #connections, compression,…)
  • Management:
    • TACACS+/RADIUS for RBAC (CLI & GUI)
    • Telnet/SSH per context
    • Webgui (HTTP) for documentation/MIBS/tools
    • XML API
  • ACE4710 has a built-in GUI
  • ACE30: hardware accelerated HTTP compression (GZIP/deflate) & HTTP optimization
  • ACE10/20/30 have no built-in GUI but can use ANM (Application Network Manager): A fat-client to manage multiple ACE devices
  • Predecessor was CSS/CSM

GSS (Global Site Selector): DNS load balancer

Features:

  • Rules determine which IP is returned for the DNS request
  • Uses metrics: proximity lookup to return location-based IP

GSLB (Global server Load Balancing): ACE + GSS

Features:

  • Uses KAL AP (Keepalive Appliance Protocol) for communication between ACE & GSS
  • Load balancing decisions are made by both devices
  • Checks the load of the VIPs
  • ACE: one datacenter
  • GSS: determines which datacenter

WAAS: Wide Area Application Services: WAN optimizer

Features:

  • Has approved/licensed support for ICA and Exchange
  • WAAS-device is between the switch and the WAN-connection
  • Does load sharing and failover
  • Transparent to the network
  • WAAS Central Manager to manage (or CLI)
  • Can be a separate device or a module in an IOS router
  • WCCP: Web Cache Communication Protocol

Hopefully the above information helps somebody to study for the exam or to find some information that is related to Nexus/Data Center.
