Tag Archives: NSX

PowerCLI script to run commands on ESXi hosts

Recently, while working with a customer on a large-scale NSX environment, we hit a product bug that required us to increase the memory allocated to the vsfwd (vShield Stateful Firewall) process on the ESXi hosts.

There were two main reasons we hit this issue:
1. Due to high churn in the distributed firewall, there was a significantly high number of updates the vsfwd daemon had to process.
2. A non-optimized way of allocating memory for these updates caused vsfwd to consume all of its allocated memory.

Anyway, the purpose of this article is not to talk about the issue itself but about a way to automate updating the vsfwd memory on the ESXi hosts. As this was a large-scale environment, it was not practical to manually edit config files on every ESXi host to increase the memory. Automating the process eliminates errors arising from manual intervention and keeps the configuration consistent across all hosts.

I started writing the script in Python and was going to leverage the paramiko module to SSH in and run the commands. I quickly found it a bit cumbersome to manage the numerous host IP addresses with Python, so I switched to PowerCLI, where I could use the Get-VMHost cmdlet to get the list of hosts in a cluster.

I’ve put the script up on GitHub for anyone interested – script
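For anyone who just wants the shape of it, here is a trimmed-down sketch of the approach rather than the full script. It assumes the Posh-SSH module is installed and SSH is enabled on the hosts; the vCenter name, cluster name, and remote command are placeholders:

# Run a command over SSH on every ESXi host in a cluster
Import-Module Posh-SSH
Connect-VIServer -Server vcenter.lab.local

$cred  = Get-Credential                  # root credentials for the hosts
$hosts = Get-Cluster 'Compute-Cluster' | Get-VMHost

foreach ($vmhost in $hosts) {
    $session = New-SSHSession -ComputerName $vmhost.Name -Credential $cred -AcceptKey
    # Replace with the actual command(s) that update the vsfwd memory configuration
    $result = Invoke-SSHCommand -SSHSession $session -Command 'uname -a'
    Write-Host "$($vmhost.Name): $($result.Output)"
    Remove-SSHSession -SSHSession $session | Out-Null
}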

Until next time and here’s to learning something new each day!!!

Capturing and decoding VXLAN-encapsulated packets

In this short post we will look at capturing packets that are encapsulated with the VXLAN protocol and decoding them with Wireshark for troubleshooting and debugging purposes. This procedure is handy when you want to analyze network traffic on a logical switch or between logical switches.

In order to capture packets on ESXi we will use the pktcap-uw utility. Pktcap-uw is quite a versatile tool, allowing you to capture at multiple points in the network stack and even trace packets through the stack. Further details on pktcap-uw can be found in the VMware product documentation.
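As an aside, the trace capability can be useful on its own. Something along these lines (the source IP filter is just an illustrative placeholder) follows a flow through the stack:

pktcap-uw --trace --srcip 192.168.1.10 -o /tmp/trace.pcap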

The limitation with the current version of pktcap-uw is that we need to run two separate commands to capture egress and ingress traffic. With that said, let’s get to it. In this environment I will capture packets on vmnic4 on the source and destination ESXi hosts.

To capture VXLAN-encapsulated packets egressing uplink vmnic4 on the source host:

pktcap-uw --uplink vmnic4 --dir 1 --stage 1 -o /tmp/vmnic4-tx.pcap

To capture VXLAN-encapsulated packets ingressing uplink vmnic4 on the destination host:

pktcap-uw --uplink vmnic4 --dir 0 --stage 0 -o /tmp/vmnic4-rx.pcap

If you have access to the ESXi host and want to take a quick look at the capture with the VXLAN headers, you can read it with the tcpdump-uw utility like so,

tcpdump-uw -r /tmp/vmnic4-tx.pcap

This capture can then be imported into Wireshark and the frames decoded. When the capture is first opened, Wireshark displays only the outer source and destination IPs, which are the VXLAN endpoints. We need to map destination UDP port 8472 to the VXLAN protocol to see the inner frames.

To do so, open the capture with Wireshark –> Analyze –> Decode As

[Screenshot: Wireshark "Decode As" dialog mapping UDP port 8472 to VXLAN]

Once decoded, Wireshark will display the inner source and destination IP addresses and the inner protocol.

[Screenshot: decoded capture showing the inner source and destination addresses]
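If you prefer the command line, the same decode can be applied with tshark, which ships with Wireshark:

tshark -r /tmp/vmnic4-rx.pcap -d udp.port==8472,vxlan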

I hope you find this post helpful, until next time!!

Removing IP addresses from the NSX IP pool

I was recently involved in an NSX deployment where the ESXi hosts (VTEPs) were not able to communicate with each other. The NSX Manager UI showed that a few ESXi hosts in the cluster were not prepared even though the entire cluster had been prepared. We quickly took a look at the ESXi hosts and found that the VXLAN vmk interfaces were missing but the VIBs were still installed. Re-preparing these hosts failed because there were no IP addresses available in the VTEP IP pool.

To cut a long story short, we had to remove some IP addresses from the IP pool, and apparently there is no way to do this from the NSX UI without deleting and re-creating the IP pool. Even then, you can only provide a single set of contiguous IP addresses. Fortunately, there is a REST API method available to accomplish this.

So, to remove an IP address from the pool, we first need to find the pool ID. Using a REST client, run this GET request to list the pools:

https://<nsx-manager>/api/2.0/services/ipam/pools/scope/globalroot-0
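The response looks roughly like this (abridged and illustrative; the pool name is hypothetical, and only the objectId tag matters here):

<ipamAddressPool>
  <objectId>ipaddresspool-1</objectId>
  <name>VTEP-Pool</name>
  ...
</ipamAddressPool>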

The output lists all the configured IP pools; we need to look at the objectId tag to get the pool ID. Once we have the pool ID, we can query the pool to verify the start and end of the IP range:

https://<nsx-manager>/api/2.0/services/ipam/pools/ipaddresspool-1

To remove an IP address from this pool, use the DELETE method along with the IP address, like so:

https://<nsx-manager>/api/2.0/services/ipam/pools/ipaddresspool-1/ipaddresses/192.168.1.10

Note: With this method you can only remove IP addresses that have been allocated, not free addresses in the pool.
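If you would rather script these calls than use a REST client, here is a minimal PowerShell sketch of the same three requests. The NSX Manager address is a hypothetical placeholder, and the snippet assumes the Manager’s certificate is trusted by the client:

# NSX Manager admin credentials
$cred = Get-Credential
$base = 'https://nsx-mgr.lab.local/api/2.0/services/ipam'

# List all IP pools; note the objectId of the pool to modify
Invoke-RestMethod -Uri "$base/pools/scope/globalroot-0" -Method Get -Credential $cred

# Verify the pool's IP range
Invoke-RestMethod -Uri "$base/pools/ipaddresspool-1" -Method Get -Credential $cred

# Release a single allocated address from the pool
Invoke-RestMethod -Uri "$base/pools/ipaddresspool-1/ipaddresses/192.168.1.10" -Method Delete -Credential $cred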

IGMP version requirements for VXLAN logical networks

Recently we were working on an issue where VXLAN transport multicast traffic was not being forwarded by the upstream physical switches, causing an outage for the virtual machines hosted on these virtual wires.

Some Background on the environment:

This was a vCNS 5.5 environment with VXLAN deployed in Multicast mode. It was quite a big environment, with multiple virtual wires deployed and multiple virtual machines connected to those virtual wires.
The virtual machines were not able to communicate because multicast traffic was not being passed by the physical switches.

Upon further investigation it was revealed that IGMPv3 joins were being received by the physical switch, and since the physical switch had IGMPv2 enabled, it pruned and ignored the IGMPv3 joins. To resolve the issue, IGMPv3 was enabled on the upstream switches; the ESX hosts were then able to join their multicast groups, and the virtual machines were reachable on the network.

Starting with ESX 5.5, the default IGMP version on the ESX host has changed to v3. This option is configurable and can be reverted to IGMPv2 using the ESX advanced settings,

Configuration–>Advanced Settings–>Net–>Net.TcpipIGMPDefaultVersion
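If you have a lot of hosts to change, the same setting can be applied with PowerCLI rather than clicking through each host. A minimal sketch, assuming an existing vCenter connection; the cluster name is a hypothetical placeholder:

# Revert the default IGMP version to 2 on every host in a cluster
Get-Cluster 'Compute-Cluster' | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name 'Net.TcpipIGMPDefaultVersion' |
        Set-AdvancedSetting -Value 2 -Confirm:$false
}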

Hope this helps someone who runs into a similar issue.

Troubleshooting vCloud Director Internal Networks

These days I’ve been spending my time working with NSX and integrating it with vCloud Director. During some of these tests I ran into an issue with network connectivity on internal networks in vCloud Director.
To expound on the issue, I created an internal virtual datacenter network in vCloud Director and enabled DHCP services on the internal NSX Edge virtual machine that gets deployed for this internal network. I then deployed two Linux virtual machines connected to this internal network on two different ESXi hosts. These virtual machines should have received an IP address from the DHCP scope configured on the Edge, but for some reason they were not getting an IP address and were unable to ping the gateway (the interface on the Edge device).

To determine whether this issue was specific to the Linux guests, I moved all three virtual machines (the two Linux machines and the NSX Edge) to the same ESXi host and restarted the networking service. The machines were assigned IP addresses from the configured DHCP scope and were able to ping their gateway, so there was nothing wrong with the TCP/IP stack in the guests; network traffic between virtual machines on the same ESXi host never traverses the external network and is switched entirely in memory.

Digging a little deeper: The arcane world of log analysis

Bringing out the geek in me, I started trawling through the vmkernel logs on the ESXi host to see what happens when the virtual machine powers up, i.e. does it connect to the virtual port…

2014-05-19T09:34:02.904Z cpu15:29956759)World: vm 29956760: 1462: Starting world vmm0:org1-rhel2_(bc4599c3-ff8e-432b-863e-1cdcef544661) of type 8
2014-05-19T09:34:02.904Z cpu15:29956759)Sched: vm 29956760: 6410: Adding world 'vmm0:org1-rhel2_(bc4599c3-ff8e-432b-863e-1cdcef544661)', group 'host/user/pool3', cpu: shares=-3 min=200 minLimit=-1 max=1000, mem: shares=-3 min=3072 minLimit=-1 max=16384
2014-05-19T09:34:02.904Z cpu15:29956759)Sched: vm 29956760: 6425: renamed group 57859289 to vm.29956759
2014-05-19T09:34:02.904Z cpu15:29956759)Sched: vm 29956760: 6442: group 57859289 is located under group 54783989
2014-05-19T09:34:02.907Z cpu15:29956759)MemSched: vm 29956759: 8263: extended swap to 28290 pgs
2014-05-19T09:34:03.089Z cpu15:29956759)VSCSI: 3750: handle 8370(vscsi0:0):Using sync mode due to sparse disks
2014-05-19T09:34:03.089Z cpu15:29956759)VSCSI: 3792: handle 8370(vscsi0:0):Creating Virtual Device for world 29956760 (FSS handle 1150128849) numBlocks=4194304 (bs=512)
2014-05-19T09:34:03.244Z cpu4:29956760)Net: 2292: connected org1-rhel2 (bc4599c3-ff8e-432b-863e-1cdcef544661).eth0 eth0 to vDS, portID 0x30001e7
2014-05-19T09:34:03.244Z cpu4:29956760)Net: 3055: associated dvPort 1683 with portID 0x30001e7
2014-05-19T09:34:03.247Z cpu4:29956760)NetPort: 2862: resuming traffic on DV port 1683
2014-05-19T09:34:03.247Z cpu4:29956760)vxlan: VDL2_CPSetCPEnabled:2840: Control plane enabled on VXLAN network[5001]
.
.
2014-05-19T09:39:24.824Z cpu11:27610460)WARNING: vxlan: VDL2CPCheckConnUpCB:311: Control plane connection of VXLAN network[5001] is down

The above log snippet tracks the power-on task for the virtual machine (org1-rhel2), and it’s quite evident from the last line that the control plane connection is down. These internal networks use VXLAN as their underlying transport, and since VXLAN uses a controller in Unicast mode, the next thing to check is whether the ESXi host can communicate with the NSX controller.
On the ESXi host, using the esxcli command, we can query the VDS for the VXLAN configuration.

~ # esxcli network vswitch dvs vmware vxlan list --vds-name Nebula-Networks

VDS ID                                           VDS Name         MTU   Segment ID   Gateway IP     Gateway MAC        Network Count  Vmknic Count
-----------------------------------------------  ---------------  ----  -----------  -------------  -----------------  -------------  ------------
d7 e6 3d 50 19 d7 02 36-f4 23 96 fe 64 46 1c 33  Nebula-Networks  1600  192.168.1.0  192.168.1.254  00:21:55:08:ec:40  2              1

I immediately noticed that the connection to the controller was down:

~ # esxcli network vswitch dvs vmware vxlan network list --vds-name Nebula-Networks

VXLAN ID  Multicast IP               Control Plane  Controller Connection  Port Count  MAC Entry Count  ARP Entry Count
--------  -------------------------  -------------  ---------------------  ----------  ---------------  ---------------
5000      N/A (headend replication)  Enabled ()     192.168.1.50 (down)    2           0                0
5001      N/A (headend replication)  Enabled ()     192.168.1.50 (down)    1           0                0

The ESXi host establishes a connection to the NSX controller using a user world daemon, netcpa. The netcpa.log shows the communication with the controller as well as the updates that are pushed from the controller down to the ESXi host. Looking at these logs, it’s clear that the connection is down.

~ # tail -f /var/log/netcpa.log
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] Core: Sharding connection 192.168.1.50:0 is timeout
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] App CORE : 0 unregister connection to 192.168.1.50:0
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] User of connection 192.168.1.50:0
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] App CORE : 0 register connection to existing controller to 192.168.1.50 port 1234
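Besides the logs, the control-plane TCP session itself can be checked from the ESXi shell; netcpa connects to the controller on port 1234 (as seen in the log line above), so a quick check along these lines shows the connection state:

~ # esxcli network ip connection list | grep 1234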

To isolate further, we compared the MAC addresses seen for the controller IP and found that the controller’s IP had been assigned to another machine on the network. After shutting down that machine and restarting the netcpa agent, the ESXi host was able to re-establish a connection with the controller.
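For reference, the host’s neighbor (ARP) cache is a quick way to see which MAC address is currently answering for the controller IP, and the netcpa agent can be restarted from the ESXi shell. A sketch, using the controller IP from the logs above:

~ # esxcli network ip neighbor list | grep 192.168.1.50
~ # /etc/init.d/netcpad restart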

~ # tail -f /var/log/netcpa.log
2014-05-19T10:37:10.471Z [5DC5DB70 info 'Default'] Core: ShardingSlice length of peer 192.168.1.50: 4194304
2014-05-19T10:37:10.471Z [5DC5DB70 info 'Default'] Vxlan: core app ready on 192.168.1.50:0
2014-05-19T10:37:10.472Z [5DC5DB70 info 'Default'] Vxlan: send VNI Membership Update(Join) to the controller: VNI 5000 controller 192.168.1.50
2014-05-19T10:37:10.472Z [5DC5DB70 info 'Default'] Vxlan: send VNI Membership Update(Join) to the controller: VNI 5001 controller 192.168.1.50
2014-05-19T10:37:10.472Z [5DC5DB70 info 'Default'] Core: Controller is ready: 192.168.1.50:0
2014-05-19T10:37:10.472Z [FFE59100 info 'Default'] Core: Sharding Segment Update message: server 192.168.1.50 startSliceId 0 numSlices 1024
2014-05-19T10:37:10.473Z [FFE59100 info 'Default'] Vxlan: receive VNI Membership Update(Join) from the controller: VNI 5000 controller 192.168.1.50 len 23
2014-05-19T10:37:10.473Z [FFE59100 info 'Default'] Vxlan: set VNI 5000 (mcast proxy: Enabled, arp proxy: Enabled)
2014-05-19T10:37:10.474Z [FFE59100 info 'Default'] Vxlan: receive VNI Membership Update(Join) from the controller: VNI 5001 controller 192.168.1.50 len 23
2014-05-19T10:37:10.474Z [FFE59100 info 'Default'] Vxlan: set VNI 5001 (mcast proxy: Enabled, arp proxy: Enabled)

If the controller’s IP address has changed and cannot be reverted to the original IP, the /etc/vmware/netcpa/config-by-vsm.xml file on the ESXi host can be edited to add the new controller IP address.

While this issue may be quite simple and the kind of thing that happens all the time in a large network, I hope you found the approach to the problem useful. Feedback welcome!!