All posts by donovandurand

PowerCLI script to run commands on ESXi hosts

Recently, while working with a customer on a large-scale NSX environment, we hit a product bug that required us to increase the memory allocated to the vsfwd (vShield Stateful Firewall) process on the ESXi hosts.

There were two main reasons we hit this issue:
1. Due to high churn in the distributed firewall, there was a significantly high number of updates for the vsfwd daemon to process.
2. A non-optimized way of allocating memory for these updates caused vsfwd to consume all of its allocated memory.

Anyway, the purpose of this article is not to talk about the issue itself but about a way to automate updating the vsfwd memory on the ESXi hosts. As this was a large-scale environment, it was not practical to manually edit the config file on every ESXi host to increase the memory. Automating the process eliminates errors arising from manual intervention and keeps the configuration consistent across all hosts.

I started writing the script in Python, planning to leverage the paramiko module to SSH to the hosts and run the commands. I quickly found it a bit cumbersome to manage the numerous host IP addresses in Python, so I switched to PowerCLI, where I could use the Get-VMHost cmdlet to get the list of hosts in a cluster.
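
As a rough illustration of the approach (not the actual script), here is a minimal PowerCLI sketch that loops through the hosts in a cluster and runs a command on each one over SSH. It assumes the Posh-SSH module is installed and SSH is enabled on the hosts; the vCenter name, cluster name and the $command placeholder are just examples (the real vsfwd memory commands are in the script linked below).

# Minimal sketch: run a command over SSH on every ESXi host in a cluster.
# Assumes the Posh-SSH module is installed and SSH is enabled on the hosts.
Import-Module VMware.PowerCLI
Import-Module Posh-SSH

Connect-VIServer -Server vcenter.lab.local          # placeholder vCenter

$command = "echo 'replace with the vsfwd memory update commands'"   # placeholder command
$cred    = Get-Credential -Message "ESXi root credentials"

foreach ($esx in (Get-Cluster -Name "Compute-Cluster" | Get-VMHost)) {
    $session = New-SSHSession -ComputerName $esx.Name -Credential $cred -AcceptKey
    $result  = Invoke-SSHCommand -SSHSession $session -Command $command
    Write-Host "$($esx.Name): $($result.Output)"
    Remove-SSHSession -SSHSession $session | Out-Null
}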

I’ve put the script up on GitHub for anyone interested – script

Until next time and here’s to learning something new each day!!!

Capturing and decoding VXLAN encapsulated packets

In this short post we will look at capturing packets that are encapsulated with the VXLAN protocol and how to decode them with Wireshark for troubleshooting and debugging purposes. This procedure is handy when you want to analyze network traffic on a logical switch or between logical switches.

In order to capture packets on ESXi we will use the pktcap-uw utility. Pktcap-uw is quite a versatile tool, allowing you to capture at multiple points in the network stack and even trace packets through the stack. Further details on pktcap-uw can be found in the VMware product documentation here.

The limitation with the current version of pktcap-uw is that we need to run two separate commands to capture both egress and ingress traffic. With that said, let’s get to it. In this environment I will capture packets on uplink vmnic4 on both the source and destination ESXi hosts.

To capture VXLAN encapsulated packets egressing uplink vmnic4 on the source host

pktcap-uw --uplink vmnic4 --dir 1 --stage 1 -o /tmp/vmnic4-tx.pcap

To capture VXLAN encapsulated packets ingressing uplink vmnic4 on the destination host

pktcap-uw --uplink vmnic4 --dir 0 --stage 0 -o /tmp/vmnic4-rx.pcap

If you have access to the ESXi host and want to look at the packet capture, including the VXLAN headers, directly on the host, you can read the pcap file with the tcpdump-uw command, for example,

tcpdump-uw -enr /tmp/vmnic4-tx.pcap

The capture can then be imported into Wireshark and the frames decoded. When the capture is first opened, Wireshark displays only the outer source and destination IP addresses, which are the VXLAN tunnel endpoints (VTEPs). We need to map destination UDP port 8472 to the VXLAN protocol to see the inner frames.

To do so, open the capture in Wireshark and go to Analyze –> Decode As.

[Screenshot: vxlan_decode]

Once decoded, Wireshark will display the inner source and destination IP addresses and the inner protocol.

[Screenshot: vxlan_decap]

I hope you find this post helpful, until next time!!

Removing IP addresses from the NSX IP pool

I was recently involved in an NSX deployment where the ESXi hosts (VTEPs) were not able to communicate with each other. The NSX Manager UI showed that a few ESXi hosts in the cluster were not prepared even though the entire cluster had been prepared. We quickly took a look at the ESXi hosts and found that the VXLAN vmkernel interfaces were missing but the VIBs were still installed. Re-preparing these hosts failed because there were no IP addresses available in the VTEP IP pool.

To cut a long story short, we had to remove some IP addresses from the IP pool, and apparently there is no way to do this from the NSX UI without deleting and re-creating the pool. Even then, you can only provide a single contiguous range of IP addresses. Fortunately, there is a REST API method available to accomplish this.

So, to remove an IP address from the pool, we first need to find the pool ID. Using a REST client, run this GET request to get the pool ID:

https://<NSX-Manager-IP>/api/2.0/services/ipam/pools/scope/globalroot-0

The output lists all the configured IP pools. We need to look at the objectId tag to get the pool ID. Once we have the pool ID, we can query the pool to verify the start and end addresses of the IP pool:

https://<NSX-Manager-IP>/api/2.0/services/ipam/pools/ipaddresspool-1

To remove an IP address from this pool, use the DELETE method along with the IP address, like so:

https://<NSX-Manager-IP>/api/2.0/services/ipam/pools/ipaddresspool-1/ipaddresses/192.168.1.10

Note: With this method you can only remove IP addresses that have already been allocated, not free addresses in the pool.
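
For anyone who prefers to script these calls, here is a minimal PowerShell sketch of the same workflow using Invoke-RestMethod. The NSX Manager address, credentials, pool ID and IP address are placeholders, and it assumes the NSX Manager certificate is trusted by the machine running the script.

# Minimal sketch of the same API calls using Invoke-RestMethod.
$nsxManager = "nsx-manager.lab.local"                              # placeholder NSX Manager
$cred = Get-Credential -Message "NSX Manager admin credentials"

# List all IP pools and note the objectId of the VTEP pool
$pools = Invoke-RestMethod -Method Get -Credential $cred -Uri "https://$nsxManager/api/2.0/services/ipam/pools/scope/globalroot-0"

# Verify the start and end of the pool
$pool = Invoke-RestMethod -Method Get -Credential $cred -Uri "https://$nsxManager/api/2.0/services/ipam/pools/ipaddresspool-1"

# Release a specific allocated address from the pool
Invoke-RestMethod -Method Delete -Credential $cred -Uri "https://$nsxManager/api/2.0/services/ipam/pools/ipaddresspool-1/ipaddresses/192.168.1.10"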

VMware VDS link aggregation enhanced support

Recently, while working on an NSX design, we decided to use a LAG from the ESXi hosts to the ToR leaf switches. After configuring LACP in active mode on the physical switches, we moved on to configure LACP on the distributed virtual switch. All LACP configuration for the VDS has to be done from the vSphere Web Client, but looking at the Web Client we could not find the LACP option on the VDS. After some digging around, we figured out that the distributed virtual switch had been created using the VMware C# client. A distributed virtual switch created from the C# client has only basic support for LACP.

To use the advanced LACP options, click the “Enhance” option in the distributed virtual switch features box in the vSphere Web Client. This allows us to choose the load-balancing algorithm and LACP mode, and it also creates the link aggregation group with the ESXi uplinks.

[Screenshot: lag]

These are the enhanced LACP features that VDS supports:

  • Support for configuring multiple link aggregation groups (LAGs)
  • LAGs are represented as uplinks in the teaming and failover policy of distributed ports or port groups. You can create a distributed switch configuration that uses both LACP and existing teaming algorithms on different port groups.
  • Multiple load balancing options for LAGs.
  • Centralized switch-level configuration, available under Manage > Settings > LACP

IGMP version requirements for VXLAN logical networks

Recently we were working on an issue where VXLAN transport multicast traffic was not being passed by the upstream physical switches, causing an outage for the virtual machines connected to these virtual wires.

Some background on the environment:

This was a vCNS 5.5 environment with VXLAN deployed in multicast mode. It was quite a big environment, with multiple virtual wires deployed and many virtual machines connected to them. The virtual machines were not able to communicate because multicast traffic was not being passed by the physical switches.

Upon further investigation, it was revealed that IGMPv3 joins were being received by the physical switch, and since the switch was running IGMPv2 it pruned and ignored them. To resolve the issue, IGMPv3 was enabled on the upstream switches; the ESXi hosts were then able to join their multicast groups and the virtual machines were reachable on the network again.

Starting with ESXi 5.5, the default IGMP version on the host has been changed to v3. This is configurable and can be reverted to IGMPv2 using the ESXi advanced settings:

Configuration –> Advanced Settings –> Net –> Net.TcpipIGMPDefaultVersion
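
If you need to check or revert this setting on many hosts at once, a quick PowerCLI snippet along these lines should do it (the cluster name "Compute-Cluster" is a placeholder):

# Check the current IGMP version on every host in the cluster
Get-Cluster -Name "Compute-Cluster" | Get-VMHost | Get-AdvancedSetting -Name Net.TcpipIGMPDefaultVersion

# Revert the hosts to IGMPv2
Get-Cluster -Name "Compute-Cluster" | Get-VMHost | Get-AdvancedSetting -Name Net.TcpipIGMPDefaultVersion | Set-AdvancedSetting -Value 2 -Confirm:$false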

Hope this helps someone who runs into a similar issue.