VCAP6-NV (3V0-643) Study Guide – Part 3

This is part 3 of 20 blogs I am writing covering the exam prep guide for the VMware Certified Advanced Professional – Network Virtualisation Deployment (3V0-643) VCAP6-NV certification.

At the time of writing there is no VCAP Design exam stream, so you are automatically granted the new VMware Certified Implementation Expert – Network Virtualisation (VCIX6-NV) certification by successfully passing the VCAP6-NV Deploy exam.

Read part 2 here.

This blog covers:
Section 1 – Prepare VMware NSX Infrastructure
Objective 1.2 – Prepare Host Clusters for Network Virtualisation

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add/Remove Hosts from Cluster
  • Configure the appropriate teaming policy for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

 

Prepare vSphere Distributed Switching for NSX

In the VCAP deploy exam they could throw pretty much anything at you for this objective. At a guess, they will ask you to create vSphere Distributed Switches (vDS) and/or join ESXi hosts to a specific vDS. I am not going to cover the migration process here, but I will give you an overview of my lab and an idea of what they might be looking for.

Firstly, make sure you understand that the minimum MTU for NSX is 1600 bytes. Double check your vDS to confirm it’s configured at the appropriate size for your environment.

The 1600-byte requirement exists because the original Ethernet frame is wrapped (encapsulated) with additional VXLAN, UDP and IP headers, which increases its size; the result is known as a VXLAN encapsulated frame.
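
For context, the overhead breaks down roughly as follows (assuming IPv4 and no outer 802.1Q tag):

  • Outer Ethernet header: 14 bytes
  • Outer IP header: 20 bytes
  • Outer UDP header: 8 bytes
  • VXLAN header: 8 bytes

That is 50 bytes of overhead (54 with an outer VLAN tag), so a standard 1500-byte guest frame becomes roughly 1550 bytes on the wire; the 1600-byte minimum simply leaves some headroom.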

[Screenshot: Check your MTU size on the vDS]
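
You can also check the MTU a host sees on its distributed switches from the command line; a quick sanity check, assuming SSH access to the host:

esxcli network vswitch dvs vmware list

The output lists each vDS the host is attached to, its uplinks and the configured MTU.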

 

In my lab I have 3 vSphere clusters:

  • Management Cluster w/ 2 ESXi 6.0 U2 hosts
  • Compute Cluster A w/ 2 ESXi 6.0 U2 hosts
  • Compute Cluster B w/ 2 ESXi 6.0 U2 hosts


I have 2 vSphere Distributed Switches:

  • Management vDS
  • Compute vDS


Hosts from all clusters are joined to the Management vDS, and this is where the VMkernel ports for host management and vMotion are configured.


Only hosts from the Compute Clusters are joined to the Compute vDS. These are the hosts I will prepare for NSX and where the Logical Switches, Distributed Logical Router and Edge Services Gateways will be deployed.


In my environment I have the Compute A & B clusters joined to both vDS (Management vDS and Compute vDS). For a lab environment this is fine, but in production, as a best practice, you probably would not want this. You would want the Compute A & B clusters joined to only one vDS (the Compute vDS), with the hosts' VMkernel ports residing there. Why?

Having hosts joined to multiple vDS works; however, keeping separate vDS for the management and compute clusters has benefits:

  • Separation and control of VLANs. With separate vDS, VLANs are not configured across both the Management and Compute Clusters: management VLANs stay contained to the Management vDS, and customer VLANs stretched into a Service Provider DC stay contained to the Compute Clusters.
  • Limit your vMotion domain. It stops VMs from the Compute Cluster being migrated to the Management Cluster and vice-versa.
  • There will be more…

In the VCAP exam, I can see them giving us a requirement for management and compute hosts to be separated, which would require multiple vDS, moving hosts about, etc.

Make sure you know how to migrate hosts and VMs between vDS.
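
If you have not done this recently, the workflow in the vSphere 6.0 Web Client is roughly as follows (the exact wizard wording may differ slightly between builds):

  • Go to the Networking view, right-click the target vDS and select Add and Manage Hosts.
  • Choose Add hosts and/or Manage host networking and select the hosts.
  • Assign the physical adapters to uplinks, migrate the VMkernel adapters, and optionally migrate virtual machine networking to the new port groups.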

 

Prepare a cluster for NSX

The hosts have to be prepared for NSX. This installs the NSX kernel modules (VIB files) and sets up the NSX control and management plane components on the hosts. Installation is performed on a cluster-wide basis.

Make sure you meet all the prerequisites:

  • The NSX Manager and Controllers are deployed
  • The cluster hosts are joined to the same vDS
  • Forward and reverse DNS records are in place (see the quick check below)
  • vCenter Server is functioning
  • vSphere Update Manager (VUM) is disabled before deploying the Controllers
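
A quick way to sanity-check the DNS records is a forward and a reverse lookup of each component; the hostname and IP below are placeholders from my lab, not values you need to use:

nslookup nsxmanager.corp.local

nslookup 10.0.0.15

Run this for the NSX Manager, the Controllers, vCenter and each ESXi host; both directions should resolve cleanly.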

If you have a dedicated Management Cluster, you probably would not install NSX on those hosts.

Log into the vSphere Web Client.

Note: Make sure that there are no issues with the cluster before you start. Select your cluster, then click the Actions icon. If Resolve appears, it means a host needs a reboot. You can also click the red ‘X’ to see what the issue is. If you cannot clear Resolve, you will need to move the hosts to a new cluster and delete the old one.

Click on the Networking and Security icon followed by the Installation tab.

Select and expand your cluster. Hover over your cluster and a blue cog will appear. Click Install. I have selected Compute Cluster A.


After the install, the cluster hosts show a green tick and the status ‘Enabled’.
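
You can also confirm from a host that the VIBs are actually installed; the grep pattern below is just my shorthand for the 6.2.x VIB names:

esxcli software vib list | grep esx-v

On NSX 6.2.x you should see esx-vxlan and esx-vsip in the output.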


 

Add Hosts to Cluster

OK this task is pretty simple.

Add the host to vCenter but outside of the cluster.

Join it to the same vDS as other hosts in the cluster.

Put the host into maintenance mode.

Move the host into the cluster and the NSX kernel modules (VIBs) will be installed automatically.

Once the install is complete, take the host out of maintenance mode.

 

Remove Hosts from Cluster

There are two ways to do this: via the CLI or from the GUI. Both require a reboot.

From the command line:

esxcli software vib remove --vibname=esx-vxlan

esxcli software vib remove --vibname=esx-vsip
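
After the reboot, the same VIB listing used earlier confirms the modules are gone:

esxcli software vib list | grep esx-v

No esx-vxlan or esx-vsip entries means the removal completed.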

From the GUI:

Put the host in maintenance mode.

Move the host out of the cluster; the NSX kernel modules (VIBs) will be removed.

Reboot the host.

Take the host out of maintenance mode.

 

Configure the appropriate teaming policy for a given implementation

This section is about how your physical switches have been deployed and configured, and how you design the appropriate vSphere Distributed Switch (vDS) teaming policy to match (well, that’s how I summed it up!).

In the exam they might test us by providing a physical connectivity diagram or an architecture design goal that we will need to take into account.

Teaming policies determine how network traffic is load balanced over the physical network cards (pNICs) on the hosts.

Looking at the documentation, they might focus on the VTEP (VXLAN Tunnel Endpoint) configuration and how many VTEPs to configure: one or two.

VTEPs are responsible for encapsulating and de-encapsulating VXLAN traffic. Each host has one or more VTEPs (typically one or two), and each VTEP is a VMkernel port connected to the specific VLAN that the overlay (VXLAN) transport traffic runs on.

Why have more than one VTEP on a host? One reason is to load balance your VXLAN traffic over multiple pNICs.

Your host should have a minimum of two uplinks. They might be connected to the same switch or split between switches, and LACP might or might not be in use; all of this determines how you configure your teaming policy.

The following teaming policy information has been pulled together from various VMware NSX blogs and white papers:

[Image: VXLAN teaming policy options, from the VMware NSX documentation]
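
As a summary (this is from memory of the NSX design guide, so double-check it against the official table), the vDS teaming modes map to VTEP configurations roughly as follows:

  • Route based on originating virtual port (SRCID) – multi-VTEP supported
  • Route based on source MAC hash (SRCMAC) – multi-VTEP supported
  • LACP – single VTEP
  • Static EtherChannel – single VTEP
  • Explicit failover order – single VTEP
  • Route based on physical NIC load (LBT) – not supported for VXLAN traffic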

 

Single VTEP Uplink Options

[Diagram: single VTEP uplink options]

 

Multi-VTEP Uplink Options

[Diagram: multi-VTEP uplink options]

The recommended teaming mode for VXLAN traffic for ESXi hosts in the Compute Clusters is LACP. It provides good utilisation of both links and reduced failover time, and it keeps VTEP configuration and troubleshooting simple.

For ESXi hosts in the Edge Clusters, it is recommended to avoid the LACP or static EtherChannel options. One of the main functions of the edge racks is providing connectivity to the physical network infrastructure, and this is typically done using a dedicated VLAN-backed port group where the NSX Edge (handling the north-south routed communication) establishes routing adjacencies with the next-hop L3 devices. Selecting LACP or static EtherChannel for this VLAN-backed port group when the ToR switches perform the role of L3 devices complicates the interaction between the NSX Edge and the ToR devices.

The main thing to take away: understand the teaming policies above. I can see them asking about this. Know how to configure the hosts and the vDS, and read the architecture papers and install guides.

 

Configure VXLAN Transport parameters according to a deployment plan

Make sure your MTU on the vDS is set at 1600 or higher.

Log into the vSphere Web Client.

Click Networking and Security, then Installation followed by the Host Preparation tab.

Select your cluster, hover over it and click the blue cog. Click Configure VXLAN.

Configure your options: vDS, VLAN, MTU, IP Pool, Teaming Policy.

I created a new IP Pool called VTEPpool on the 172.16.0.0/24 network. The hosts’ management network is 10.0.0.0/24.


Click OK to apply the changes.

The VXLAN status changes to Configured.


In vCenter, the hosts that have been configured for VXLAN now show a VMkernel port on the 172.16.0.0/24 network on the Compute vDS.
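
You can verify the VTEP interface and test the end-to-end MTU from a host; the destination IP below is a placeholder for another host’s VTEP address:

esxcli network ip interface list

vmkping ++netstack=vxlan -d -s 1572 172.16.0.52

The first command shows the new vmk interface on the vxlan network stack; the vmkping sends a 1572-byte don’t-fragment ping over that stack, which with the IP and ICMP headers exercises the full 1600-byte path.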


Segment IDs

They are probably also looking for the configuration of the VXLAN Segment IDs. The Segment ID pool specifies the range of VXLAN Network Identifiers (VNIs) that can be used, or, in simpler terms, the number of Logical Switches you can create. Each VNI keeps its VXLAN traffic isolated from the others.

The range starts at 5000 and ends at 16777216, providing some 16.7 million network segments. An example Segment ID range you might configure is 5000-7000 (2001 possible VNIs); we will walk through this shortly.

Considering VLANs are limited to 4094, that’s a pretty big number. For Service Providers, the ability to expand beyond the VLAN limit is huge.

With the current 6.2.x versions of NSX there is a limit of 10,000 VNIs, due to the vCenter vDS configuration maximum of 10,000 port groups.

Make sure that if you have more than one NSX deployment (e.g. cross-vCenter NSX) your Segment ID ranges do not overlap.

Note: In a single-vCenter NSX deployment, if you want to add multiple Segment ID ranges you cannot do it from the Web Client; it has to be done via the NSX API (a likely exam task!).

From the VMware Installation Guide, this is the process to add multiple Segment ID ranges:

[Screenshot: adding additional Segment ID ranges, from the VMware Installation Guide]
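
For reference, this is a POST against the NSX Manager segment ID pool endpoint. The example below is a sketch from memory, so verify it against the NSX API guide; the hostname, credentials and range values are placeholders:

curl -k -u admin:VMware1! -H "Content-Type: application/xml" -X POST \
  -d '<segmentRange><name>Segment-Pool-2</name><begin>7001</begin><end>9000</end></segmentRange>' \
  https://nsxmanager.corp.local/api/2.0/vdn/config/segments

A GET against the same URL returns the currently configured pools.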

To configure a Segment ID range from the vSphere Web Client, the process is shown below:

Log into the vSphere Web Client.

Click Networking and Security, then Installation followed by the Segment ID tab, then click Edit.

Enter your Segment ID range; I have chosen 5000-7000.


 

And that’s about it.

Read the relevant bits from the Install Guide and also the Administration Guide.

Coming up in part 4: Objective 1.3 – Configure and Manage Transport Zones.

Be Social, please share!

 

 
