VCAP6-NV (3V0-643) Study Guide – Part 8

This is part 8 of 20 blogs covering the exam prep guide for the VMware Certified Advanced Professional 6 – Network Virtualisation Deployment (3V0-643) VCAP6-NV certification.

At the time of writing there is no VCAP Design exam stream, thus you’re automatically granted the new VMware Certified Implementation Expert 6 – Network Virtualisation (VCIX6-NV) certification by successfully passing the VCAP6-NV Deploy exam.

Previous blogs in this series:

Part 1 – Intro
Part 2 – Objective 1.1
Part 3 – Objective 1.2
Part 4 – Objective 1.3
Part 5 – Objective 2.1
Part 6 – Objective 2.2
Part 7 – Objective 2.3

This blog covers:

Section 3 – Deploy and Manage VMware NSX Network Services
Objective 3.1 – Configure and Manage Logical Load Balancing

  • Configure the appropriate Load Balancer model for a given application topology
  • Configure SSL off-loading
  • Configure a service monitor to define health check parameters for a specific type of network traffic
  • Optimise a server pool to manage and share back-end servers
  • Configure an application profile and rules
  • Configure virtual servers

 

I did a project a few years ago with Riverbed Stingray Application Delivery Controllers load balancing two physical data centre VMware Horizon View environments. This was based on the VMware Always-On blueprint for View. Each data centre was designed to handle a maximum of 2000 VDIs, with 1000 in use during steady-state operations.

We had the smarts built in to send a user session to a specific primary data centre based on an Active Directory group and, in the event of an outage, redirect it to the surviving data centre. Fail-over, server patching and upgrades became a pleasure as we could redirect the traffic to the other site. That was pretty cool stuff, cutting-edge at the time and before VMware released the Cloud Pod Architecture.

I learnt huge amounts about load balancing on that project: virtual server IPs, pools, SSL certificates, session persistence and more scripting. So I am eager to learn NSX Load Balancing and what it’s capable of.

Configure the Appropriate Load Balancer Model for a given Application Topology

You must have a functioning NSX Edge Services Gateway (ESG) to configure load balancing; the Distributed Logical Router (DLR) does not support this feature.

The firewall on the ESG must also be enabled; neither load balancing nor NAT can be used while it is disabled.

NSX load balancing (from now on referred to as ‘LB’) works at Layer 4 dealing with packets or Layer 7 with sockets.

Packet-based LB at Layer 4 works with TCP or UDP: the Edge does a small amount of processing on each packet and then forwards it straight on to the destination.

Socket-based LB at Layer 7 needs to receive the entire request before sending it on. It works with the HTTP and HTTPS protocols.

It is important to know that the default mode for an NSX LB is socket-based for TCP, HTTP and HTTPS (UDP being the exception, which is packet-based).

Also note that using Layer 7 socket-based LB has an impact on the sizing of the Edge, which may need to be resized. VMware recommends Quad-Large or X-Large for LB, as shown in this VMware diagram.

lb2

Most LBs use the same concepts just called something different. This is what NSX calls its main LB services:

Virtual Server: This is a virtual IP and port combination that listens for requests. Your LB hosts this IP and port, and it front-ends your Server Pool, e.g. your virtual server might be 203.1.0.23:4881.

Server Pool: This is a grouping of servers either physical or virtual that can service the incoming request e.g. web servers. It normally contains more than one server for redundancy.

Server Pool Member: This is a single server within a Server Pool, e.g. one of the web servers.

Service Monitor: These probe the health status of pool members to determine the health of the pool.
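These concepts and their relationships can be sketched in a few lines of Python. The class and field names below are illustrative only, not NSX API objects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PoolMember:
    # A single back-end server instance within a pool.
    ip: str
    port: int
    healthy: bool = True     # set by the Service Monitor's probes

@dataclass
class ServerPool:
    # A grouping of servers that can all service the same requests.
    name: str
    members: List[PoolMember] = field(default_factory=list)

    def available(self):
        # Only healthy members are candidates for new connections.
        return [m for m in self.members if m.healthy]

@dataclass
class VirtualServer:
    # The VIP:port combination the LB listens on, front-ending a pool.
    ip: str
    port: int
    pool: ServerPool

vs = VirtualServer("203.1.0.23", 4881, ServerPool("WebPool", [
    PoolMember("172.16.10.11", 80),
    PoolMember("172.16.10.12", 80, healthy=False),
]))
print(len(vs.pool.available()))  # 1 – the failed member is excluded
```

The point is simply that the Virtual Server never talks to a member directly; it always selects from whatever the pool currently reports as healthy.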

Below is some more information on the LB features. This is from the VMware Validated Design documentation. The default Layer 7 socket-based LB has a larger feature set.

lb1

A design decision that needs to be made is whether to have one ESG for all LB services, or a dedicated ESG for each application that needs to be load balanced.

If you have simple LB requirements this might mean that you just modify your central Edge Services Gateway (ESG) and configure that for LB.

If your requirements are greater or you want to segregate LB services, you might prefer to deploy a new ESG for each application to be load balanced. This has a higher overhead but creates a smaller blast radius should someone misconfigure something; it also means no one from the app teams is messing with your central ESG.

Another thing to take into account is the deployment topology: Proxy Mode (One-Armed) or Transparent Mode (Inline). The VMware NSX Administration Guide doesn’t cover this, which I find strange; the best information I found about it is in the VMware Validated Design documentation.

The below information is from the VMware Validated Design documentation.

lb-proxy

This model is simpler to deploy and provides greater flexibility than traditional load balancers. It allows deployment of load balancer services directly on the logical segments without requiring any modification on the centralized NSX Edge providing routing communication to the physical network. On the downside, this option requires provisioning more NSX Edge instances and mandates the deployment of source NAT that does not allow the servers in the data centre to have visibility into the original client IP address. The load balancer can insert the original IP address of the client into the HTTP header before performing S-NAT – a function named “Insert X-Forwarded-For HTTP header”. This provides the servers visibility into the client IP address, and is limited to HTTP traffic.

 

lb-inline

This deployment model is also quite simple, and additionally provides the servers full visibility into the original client IP address. It is less flexible from a design perspective as it usually forces the load balancer to serve as default gateway for the logical segments where the server farms are deployed. This implies that only centralized, rather than distributed, routing must be adopted for those segments. Additionally, in this case load balancing is another logical service added to the NSX Edge which is already providing routing services between the logical and the physical networks. Thus it is recommended to increase the form factor of the NSX Edge to X-Large before enabling load-balancing services.

 

Summed up: if you have one central ESG with an interface on both the external and internal networks and you configure load balancing on it, the deployment is in Transparent/Inline mode. The VMs on the load-balanced network will have this ESG as their default gateway.

If an ESG has only one interface on the network containing the load balanced servers then the deployment is in Proxy/One-Arm mode. The VMs on the load balanced network cannot use this ESG as their default gateway and must use the default gateway address configured for the network (the gateway defined when the logical switch was connected to the ESG or DLR).

It’s possible that in the exam we are tasked with deploying an ESG from scratch, then enabling and configuring load balancing. The deployment goal VMware gives will determine how to deploy and configure the Edge; it’s the interface configuration that determines your load balancing mode (transparent/inline or proxy/one-armed).
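That rule of thumb can be captured in a tiny, purely hypothetical helper, just to make the decision logic explicit:

```python
def lb_mode(interface_types):
    """Infer the LB deployment mode from an ESG's interface configuration.

    interface_types: a list of 'uplink' / 'internal' strings describing
    the Edge's configured interfaces. Rule of thumb from the text above:
    an uplink plus an internal interface means Transparent/Inline (the
    ESG is the default gateway for the server segment); a single
    interface on the server segment means Proxy/One-Arm.
    """
    if "uplink" in interface_types and "internal" in interface_types:
        return "transparent-inline"
    return "proxy-one-arm"

print(lb_mode(["uplink", "internal"]))  # transparent-inline
print(lb_mode(["internal"]))            # proxy-one-arm
```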

Make sure you practice deploying ESGs and configuring the interfaces in different ways.

My ESG is currently configured in Transparent/Inline mode, thus it has an uplink and an internal interface. Below is a screenshot of my interfaces. As we walk through this blog we will configure load balancing on this ESG.

config

 

How to Enable NSX Load Balancing

This is a pretty simple process once you have deployed and configured your ESG.

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG to configure LB on.

Click Manage, then Load Balancer.

Click Global Configuration then click the Edit button.

Select the Enable Load Balancer option. You can additionally define NSX 3rd party services.

load-bal

Load balancing is now enabled, but still needs to be configured for your application to be load balanced.
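For what it’s worth, the same thing can be done outside the Web Client: NSX-v exposes the Edge load balancer configuration via its REST API at `PUT /api/4.0/edges/{edgeId}/loadbalancer/config`. The sketch below only builds a minimal XML body for that call; element names should be verified against the NSX API guide for your version before use:

```python
import xml.etree.ElementTree as ET

def enable_lb_payload(enabled=True, acceleration=False):
    # Minimal <loadBalancer> body for
    # PUT /api/4.0/edges/{edgeId}/loadbalancer/config
    # (element names per the NSX-v API guide; verify for your version).
    root = ET.Element("loadBalancer")
    ET.SubElement(root, "enabled").text = str(enabled).lower()
    ET.SubElement(root, "accelerationEnabled").text = str(acceleration).lower()
    return ET.tostring(root, encoding="unicode")

print(enable_lb_payload())
```

You would send this body, with an XML content type and your NSX Manager credentials, to the Edge’s load balancer config URI.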

Below is the basic configuration for load balancing two web servers.

I have created a Pool called WebPool which contains two web servers: Web 1 VM and Web 2 VM.

pool

I configured an HTTP Application Profile as below.

web

I configured a Virtual Server as below.

logging

I installed Microsoft IIS on both web VMs. Hitting the VIP 172.16.10.100 I get the default IIS page from one of the web servers.

iis

That’s a really basic overview. You need to configure firewall rules to allow external connections; the DNAT rules are automatically created. I have decided not to go into more detail here and to return focus to the lab objectives. Practice deploying Edges and configuring LB with active VMs (e.g. web servers) to test the configuration.

Important Note: On the Edit Pool screen, in the Load Balancing section, there is a tick box for Transparent. This makes the source client IP visible to the back-end servers; by default the back-end servers see the source as the internal IP of the LB. Don’t get this confused with Transparent Mode (Inline); it’s not the same thing!

edit-pool

Configure SSL Off-Loading

At this point I have the ESG Load Balancing configured with a virtual server listening on 172.16.10.100 HTTP port 80 with a pool connected which contains two web server VMs with IIS installed.

I could implement SSL certificates on my web servers and change the load balancing configuration from HTTP 80 to HTTPS 443 and just pass through the SSL traffic to the web servers, or I could leave it how it is and configure SSL off-loading.

SSL off-loading is where some ‘thing’ takes care of the SSL processing (encryption/decryption) of the traffic, taking that load off the servers. Here, this ‘thing’ is the NSX Edge Services Gateway (ESG). In simpler terms, the front-end from client to LB is configured with HTTPS, and the back-end from LB to web servers is configured with HTTP; the ESG does the SSL off-loading in between.
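Conceptually, the ESG’s job here can be sketched with Python’s standard `ssl` module: a server-side TLS context owns the certificate and terminates the client’s HTTPS session, and the decrypted request is relayed to a pool member over plain HTTP. The certificate paths and back-end address below are placeholders, not anything from my lab:

```python
import ssl

# Client-facing side: the off-loader owns the certificate and private key
# and does all the TLS handshake and crypto work.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# ctx.load_cert_chain("webfarm.crt", "webfarm.key")  # placeholder paths

# Server-facing side: after decryption the request is forwarded over
# plain HTTP, so the pool members never touch a certificate.
BACKEND = ("172.16.10.11", 80)   # hypothetical pool member

print(ctx.minimum_version is ssl.TLSVersion.TLSv1_2)
```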

I couldn’t find much information on SSL off-loading in the NSX documentation, but after some trial and error I got it working.

Create a certificate for your NSX Edge

Note: You can also use self-signed certificates. I have gone a step further and have configured CA signed certificates from my internal CA.

At a high-level the steps are:

  • Create the CSR
  • Copy the CSR content and upload to your CA
  • Retrieve both the certificate and Root CA certificate
  • Import the Root CA certificate
  • Import the certificate

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure a certificate for.

Click Manage, then Certificates.

Click the blue cog (actions) and select Generate CSR.

Populate the fields with relevant information.

ssl

Now copy the content under PEM Encoding, which is the CSR. Upload the CSR to your CA and retrieve the signed web services certificate, along with the Root CA certificate.

ssl2

At this point you have two certificates: (a) the Root CA certificate and (b) the web services certificate.

On your ESG, click Manage, then Certificates.

Click the green + sign to Add a CA Certificate.

Paste the contents of your CA Root Certificate.

root

You can now see the Root CA certificate in the console.

root2

Now copy the contents of your web services certificate. Click on your CSR in the console and then select the blue cog (actions). Click Import Certificate.

ca3

Paste the content of your web services certificate.

ca4

You can now see in the console the CA and web services certificates.

ca-chain

You now have the full certificate chain and can proceed to configure SSL Off-Loading by modifying the Application Profile from the Load Balancer tab.

Change the Type to HTTPS and tick the box to enable the web services Service Certificate.

ssl99

Followed by enabling the CA certificate.

ca100

Lastly, modify the Virtual Server. Change the protocol to HTTPS and the Port to 443.

vs

The virtual server IP (VIP) is 172.16.10.100 and I have configured DNS to point webfarm.lab.local at the VIP.

In a browser when I hit the URL of https://webfarm.lab.local I am presented with a secure SSL session (padlock closed in IE).

iis22

All good. SSL off-load complete.

Configure a Service Monitor to Define Health Check Parameters for a Specific Type of Network Traffic

A Service Monitor is just a health check. You select a protocol and some options: port, interval, timeout and retry values. The configured Service Monitor is then attached to the Pool of load-balanced servers and determines the health state of the servers in the pool. It stops client requests being sent to a server that has failed.

The options in NSX for Service Monitors are:

Interval: How often the monitor will poll the server in seconds
Timeout: The maximum time for the response to be received in seconds
Retries: The number of times to recheck before deeming the server offline

HTTP/HTTPS: Can configure a GET or POST and text to send and expected to receive
TCP/UDP: Can configure ports, text to send and expected to receive
ICMP: It is what it is: pings

If you define the Service Monitor as HTTP or HTTPS you will also need to populate the Expected, Method and URL fields.

  • Expected: The string the monitor expects to match in the response e.g. HTTP/1.1
  • Method: GET or POST etc
  • URL: The URL to send in the request

Send: The data to send to the server

Receive: The string to be matched to the response. If EXPECTED is not matched, the monitor does not try to match the Receive content.
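A minimal sketch of that probe logic in Python; the exact matching rules NSX uses internally may differ, this just illustrates the retries and Expected/Receive behaviour described above:

```python
def monitor(probe, retries=3):
    """Mark a pool member DOWN only after `retries` consecutive failed
    probes. (NSX's Interval and Timeout settings control how often and
    how long each probe runs; the waiting is omitted here.)"""
    for _ in range(retries):
        if probe():
            return "UP"
    return "DOWN"

def http_probe(status_line, expected="HTTP/1.1", receive=None):
    """Mimic the Expected/Receive behaviour described above: if the
    Expected string doesn't match, Receive is never even checked."""
    if expected not in status_line:
        return False
    return receive is None or receive in status_line

print(monitor(lambda: True))                           # UP
print(monitor(lambda: False))                          # DOWN
print(http_probe("HTTP/1.1 200 OK"))                   # True
print(http_probe("HTTP/1.0 500 ERROR", receive="OK"))  # False
```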

The VMware documentation on Service Monitors is really poor!

Once configured, Service Monitors are added to a Pool containing the servers to monitor.

For example, the below shows my WebPool with the default HTTP Service Monitor attached, which does an HTTP GET and that’s all. It doesn’t check the response content, only that there is a web service listening on port 80.

service-mon

I recommend that you configure some web servers like I have and play with Service Monitors to get a good understanding of the different ways they can be deployed.

Optimise a Server Pool to Manage and Share Back-End Servers

Well, a Pool is a pretty simple concept. It’s a grouping of similar systems that service the same requests. Think web servers: all of them will be configured the same way (that happens in production, doesn’t it? lol), all handle the same requests, and a client connection could be sent to and serviced by any system in the pool.

You have a Virtual Server front-ending the Pool with a virtual IP and port.

You use Service Monitors to check the health of the individual servers in the Pool.

You additionally use what I would call load balancing policies, or what NSX calls Algorithms. These determine how client requests are distributed over the servers. NSX supports six different algorithms.

Round-Robin: Pretty simple to understand; it just cycles equally through all the servers in the pool.

Least Conn: The server with the least connections will be sent the next client request.

IP-Hash: Selects a server based on a hash of an IP address and a weighting.

HTTP HEADER: Selects a server based on the value of a named HTTP header in the request.

URL: Selects a server by basically hashing the URL against a weighting.

URI: The left part of the URI is hashed and weighted.
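Rough sketches of three of these algorithms in Python. NSX’s real implementations factor in member weights and running state; this just shows the selection logic:

```python
import hashlib
from itertools import cycle

members = ["web1", "web2", "web3"]

# Round-Robin: cycle equally through the pool.
rr = cycle(members)
print([next(rr) for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']

# Least Conn: the member with the fewest active connections gets
# the next request.
conns = {"web1": 12, "web2": 3, "web3": 7}
print(min(conns, key=conns.get))     # web2

# IP-Hash: hash the client IP so the same client always lands on the
# same member (illustrative hash, not NSX's algorithm).
def ip_hash(client_ip):
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return members[h % len(members)]

print(ip_hash("10.0.0.5") == ip_hash("10.0.0.5"))  # True – deterministic
```

Note that IP-Hash gives a form of persistence for free, since a given source IP always maps to the same pool member while the pool is unchanged.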

There isn’t that much you can do with pools. You can create and delete a pool. You can assign a policy or algorithm to a pool. You can add members to a pool and select the port, the weighting for the server and you can remove members from a pool.

I am not going to screenshot all this so make sure you configure your pool in different ways and see the results.

My WebPool settings: (normally two servers, only one present when the screenshot was taken)

pool

And how a member of the pool is configured:

pool1111

pool

Configure an Application Profile and Rules

Application Profiles and Application Rules are different beasts. So I will explain them separately.

Application Profile

An Application Profile is basically a template and determines how traffic is handled or manipulated. You create a profile, determine protocols, session persistence, cookies, SSL and the like. The profile is then attached to a Virtual Server. The Virtual Server will process the traffic based on the configuration of the profile.

An Application Profile will be defined by the application requirements e.g. it requires the HTTPS protocol, no session persistence and uses SSL certificates for SSL Termination etc.

I have a basic Application Profile called Web Profile that is for my web farm. As shown below.

app-prof

To configure Application Profiles:

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure for Load Balancing.

Click Manage, then Load Balancer, then Application Profiles.

Click the green + sign to add a new Application Profile.

There are a few settings that can be configured: (see above screen shot)

Type: The incoming traffic protocol

HTTP Redirect: Gives the ability to redirect to another URL

Persistence: The glue that sticks a session to the same back-end server. NSX supports: cookies, source IP, MSRDP (*for RDSH farms)

Insert X-Forwarded For: This allows back-end servers to see the client source IP

Enable Pool Side SSL: Allows SSL communication between LB and back-end servers (*required for end to end SSL)

Service Certificates: SSL certificates required to terminate SSL on the LB
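The Insert X-Forwarded-For behaviour from the profile settings above is simple enough to sketch. The helper below is illustrative, not how the ESG implements it:

```python
def add_xff(headers, client_ip):
    """Before the LB source-NATs the connection, record the original
    client IP in the X-Forwarded-For HTTP header so the back-end servers
    can still see and log the real source. If the header already exists
    (an upstream proxy), append rather than overwrite."""
    headers = dict(headers)  # don't mutate the caller's dict
    existing = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = (
        f"{existing}, {client_ip}" if existing else client_ip
    )
    return headers

req = add_xff({"Host": "webfarm.lab.local"}, "203.0.113.9")
print(req["X-Forwarded-For"])  # 203.0.113.9
```

This is also why the feature is limited to HTTP traffic: the LB has to understand and rewrite the request headers to do it.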

Application Rules

An Application Rule allows more granular control than an Application Profile. Application Rules are created and also attached to a Virtual Server to process the incoming traffic.

Think of an Application Rule as a trigger. If a condition is matched then do something.

To configure Application Rules:

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure for Load Balancing.

Click Manage, then Load Balancer, then Application Rules.

Click the green + sign to add a new Application Rule.

app-rule

Once you have created the Application Rule, you attach it to the Virtual Server via the Advanced tab.

vsa

From the VMware NSX Administration Guide, below are some Application Rule examples:

1

12

block

1a

Configure Virtual Servers

The Virtual Server, to me, is really the glue that joins all the load balancing bits together. It takes the settings from the Application Profile and the Pool, plus a protocol and port, and lets you create a virtual IP (VIP) that load balances requests over a pool of servers.

To Add or Configure a Virtual Server:

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure for Load Balancing.

Click Manage, then Load Balancer, then Virtual Servers.

Click the green + sign to add a new Virtual Server, or select an existing Virtual Server and edit it. I have a Virtual Server called Webvs as shown below.

 

vs1

What Settings can be Defined on a Virtual Server (VS)?

Enable: Allows the VS to be enabled or disabled

Acceleration: If you are only using Layer 4 load balancing (i.e. TCP or UDP), you can enable acceleration for higher performance. This means you are load balancing packets instead of the HTTP/HTTPS protocols.

Application Profile: Select an Application Profile already defined

Name: The name for this Virtual Server

IP Address: The virtual IP (VIP) to listen for incoming requests

Protocol: The protocol the VIP will handle traffic for

Port: The port the VIP will listen on

Default Pool: The pool that the Virtual Server will load balance requests over

Connection Limit: The maximum number of concurrent connections

Connection Rate Limit: The maximum allowed new connection requests per second (CPS)
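Connection Rate Limit is essentially a per-second cap on new connections. A toy sketch of the idea, not the ESG’s actual implementation:

```python
class RateLimiter:
    """Refuse new connections once more than `cps` have been accepted
    within the current one-second window (a crude fixed-window limiter)."""
    def __init__(self, cps):
        self.cps = cps
        self.window = -1   # which one-second window we're counting in
        self.count = 0

    def allow(self, now):
        sec = int(now)
        if sec != self.window:          # new second: reset the counter
            self.window, self.count = sec, 0
        if self.count < self.cps:
            self.count += 1
            return True
        return False                    # over the CPS cap for this second

rl = RateLimiter(cps=2)
print([rl.allow(t) for t in (0.1, 0.2, 0.3, 1.1)])  # [True, True, False, True]
```

Connection Limit, by contrast, caps concurrent open connections rather than the rate of new ones.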

 

 

And that brings us to the end of part 8 in this series. I hope that you learnt something new or more in-depth about NSX Load Balancing, as I have.

I recommend reading the load balancing bits in the following VMware documentation:

VMware Validated Design documentation

VMware NSX Administration Guide

Part 9 will encompass the following:

Objective 3.2 – Configure and Manage Logical Virtual Private Networks (VPNs)

 

Be Social; Please share!


