So recently I’ve been busy with travel, specifically spending time in Singapore and Bangalore teaching colleagues (my Bangalore colleagues keep saying the food is “just a little spicy”… HA!) all the neat new features in vSphere 6.0, along with updates for Horizon 6 version 6.1 (View specifically). As part of this training we ran into some neat facts that I figured would be worth sharing: specifically, the new TCP/IP stacks that now exist as “defaults”.
So, every now and again little things get snuck into our products that aren’t necessarily talked about or have a limited use case. First, a bit of history. For the longest time, there was a single TCP/IP stack used by the kernel to handle all TCP/IP traffic. For many environments, that one stack shared between all VMkernel ports was sufficient. The challenge arises when you want to use different subnet ranges to separate traffic from the various VMkernel ports.
Starting in vSphere 5.5, we introduced a new concept: the custom TCP/IP stack. Created via the command line, it lets you separate VMkernel traffic by configuring a new TCP/IP stack, which in turn allows different subnets and default gateways to be used for different VMkernel adapters. However, this also means “mucking about” in the command line. Not bad, but the potential “oops” factor is increased. The command itself is:
esxcli network ip netstack add -N "NFS"
So now things would look like this:
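As a rough sketch of the follow-on steps (the stack name “NFS”, the vmk number, the portgroup name, and the addresses below are all hypothetical placeholders), a VMkernel adapter can then be created on the new stack and given an address on its own subnet:

```shell
# Create the custom TCP/IP stack (same command as above)
esxcli network ip netstack add -N "NFS"

# Attach a new VMkernel adapter to that stack
# (portgroup "NFS-PG" is a made-up example name)
esxcli network ip interface add --interface-name=vmk2 \
    --portgroup-name="NFS-PG" --netstack=NFS

# Give the adapter a static address on its own subnet
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.50.10 --netmask=255.255.255.0 --type=static
```

Once that’s done, vmk2 sends and receives traffic using the “NFS” stack rather than the default one.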
The flexibility of having these stacks also allows for the following benefits:
- Route the traffic for migration of powered-on virtual machines (specifically via the new vMotion TCP/IP stack in vSphere 6) or powered-off virtual machines (specifically via the new Provisioning TCP/IP stack in vSphere 6) by using a default gateway that is different from the gateway assigned to the default stack on the host. By using a separate default gateway, you can use DHCP for IP address assignment to the VMkernel adapters for migration in a flexible way.
- Assign a separate set of buffers and sockets.
- Avoid routing table conflicts that might otherwise appear when many features are using a common TCP/IP stack.
- Isolate traffic to improve security.
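That first benefit, the separate default gateway, can be sketched from the command line as well (the gateway address and the stack name “NFS” are hypothetical):

```shell
# Add a default route for the custom "NFS" stack only;
# the default stack's gateway is left untouched
esxcli network ip route ipv4 add --gateway 192.168.50.1 \
    --network default --netstack NFS

# Verify the per-stack routing table
esxcli network ip route ipv4 list --netstack NFS
```

Because each stack keeps its own routing table, this gateway applies only to VMkernel adapters bound to the “NFS” stack.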
In vSphere 6, things changed again with regard to the stacks. In addition to the default stack, we now have two other “defaults”. The first, and probably the better known, is the vMotion TCP/IP stack. This allows vMotion to occur at Layer 3, potentially across a continent, both within a single vCenter and from vCenter to vCenter. Additionally, there is a Provisioning TCP/IP stack that is used for things such as cloning, snapshots, and other similar activities. Again, this allows for separation of traffic within the VMkernel itself.
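You can see these built-in stacks from the host’s shell. As a sketch, assuming the internal stack key for the vMotion stack is “vmotion” (check the list output on your own host rather than trusting my memory):

```shell
# List all TCP/IP stacks on the host, built-in and custom
esxcli network ip netstack list

# Show details for one stack by its key
# (the key "vmotion" is an assumption; confirm it in the list output)
esxcli network ip netstack get -N vmotion
```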
The only way to select one of the two new TCP/IP stacks is via the vSphere Web Client. If you stick with the legacy vSphere Client, you’ll miss out on some of the new VMkernel options (e.g., Provisioning) and won’t see the stacks either. So just remember that you’ll have these two defaults, along with any stacks you may create at the command line, to help truly separate traffic in the kernel and outside the kernel.