
Promiscuous mode with NSX Distributed firewall

TL;DR – When running appliances that require promiscuous mode, it is advised to place the appliance in the DFW exclusion list, especially if it is a security-hardened virtual appliance.

Last year I was working with a customer that had rolled out NSX to implement micro-segmentation. This is a mid-sized customer integrating NSX into their existing production vSphere environment. The customer had decided to implement security policy based on higher-level abstractions, i.e. security groups, but had also decided that policy needed to be enforced everywhere, leaving the Applied To column set to Distributed Firewall. Not an ideal implementation, but perhaps that discussion is for another blog post.

The environment being brownfield, the customer wanted to apply security policy to the existing virtual appliances. They were using F5 virtual appliances for load balancing and had a specific requirement to configure promiscuous mode on the F5 port group to support VLAN groups (https://support.f5.com/kb/en-us/products/big-ip_ltm/releasenotes/product/relnotes_ltm_ve_10_2_1.html).

Long story short, after configuring the F5 port groups in promiscuous mode they began seeing intermittent packet drops across the environment. We were quickly able to identify that the impacted workloads all resided on the host running the F5 virtual appliances. As we could not power off the appliance, we placed the F5 appliance in the DFW exclusion list and, like magic, everything returned to normal: no packet drops.

Now we understood what was causing the issue, but as with all things we needed to understand why this behaviour was seen. We started by picking a virtual machine on the F5 host and turned up logging on the rule that allowed traffic for a particular service, in this case SSH. We also added a tag to that rule so we could filter the logs.
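
With the tag in place, filtering the packet log on the host comes down to a simple grep; the tag 12345_ADMIN is the one used in this example:

~ # tail -f /var/log/dfwpktlog.log | grep 12345_ADMIN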

We started to follow an SSH session that we initiated from 10.68.0.250 to 10.68.145.170. We saw the SYN from 10.68.0.250 reach 10.68.145.170, which the firewall allows as rule 1795 permits it, but interestingly we also saw the same traffic flow on a different filter, in this case 53210:


/var/log/dfwpktlog.log
T01:29:48.363Z 28341 INET match PASS domain-c7/1795 IN 52 TCP 10.68.0.250/65448->10.68.145.170/22 S 12345_ADMIN
T01:29:48.363Z 53210 INET match PASS domain-c7/1795 IN 52 TCP 10.68.0.250/65448->10.68.145.170/22 S 12345_ADMIN


Note: You can get the filter hash for a virtual machine by running vsipioctl getfilters. Each vNIC gets its own filter hash.


~ # vsipioctl getfilters

Filter Name : nic-11240904-eth0-vmware-sfw.2
VM UUID : 50 1e 02 7b b0 88 05 47-c2 ed dd 92 22 93 12 1d
VNIC Index : 0
Service Profile : --NOT SET--
Filter Hash : 28341

 

Looking at the output of vsipioctl getfilters, we identified that the duplicate filter (53210) belonged to the F5 virtual appliance.
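
As an aside, vsipioctl can also dump the rules programmed on a given filter, which is a quick way to see what policy that filter will enforce. A sketch, using a hypothetical filter name (take the real one from the getfilters output):

~ # vsipioctl getrules -f nic-11240905-eth0-vmware-sfw.2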

We also captured packets at the switchport of the virtual machine and saw 10.68.145.170 sending a SYN/ACK back to the client; this packet was also seen on both filters. The subsequent frame shows the client immediately sending a RST packet to close the connection. This was a bit strange, as the firewall rule allows this traffic and there was no other reason why the client would immediately send a RST packet.

[Screenshot: packet capture showing the SYN/ACK from 10.68.145.170, followed by an immediate RST from the client]
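
A capture like this can be taken on the host itself with pktcap-uw; a minimal sketch, assuming a placeholder switchport ID (the VM's real port ID can be found with net-stats -l):

~ # pktcap-uw --switchport 33554481 --dir 1 -o /tmp/vm-capture.pcap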


Interestingly, the dfwpktlogs had recorded packets on the duplicate filter hitting a different rule (rule 1026), and this rule had a REJECT action, so a RST packet would be sent out for TCP connections:


/var/log/dfwpktlog.log
T01:29:48.364Z 53210 INET match REJECT domain-c7/1026 IN 52 TCP 10.68.145.170/22->10.68.0.250/65448 SA default-deny
T01:29:48.364Z 53210 INET match REJECT domain-c7/1026 IN 40 TCP 10.68.145.170/22->10.68.0.250/65448 R default-deny


Why would traffic hit a different rule when the initial SYN was evaluated by rule 1795?

When a virtual machine is connected to a promiscuous port group, its filter is also placed in promiscuous mode and hence receives all traffic on that port group. In this particular case, the F5 appliance was placed in a security group that had its own security policy applied to it. The duplicated traffic that was sent to the F5's filter was being subjected to that security policy, which had a REJECT action configured, and hence a RST packet was being sent to the client.

Having identified this, we provided the customer with a couple of options to mitigate the issue; the simplest, placing the appliance in the DFW exclusion list, is sketched below.
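
For reference, a sketch of the exclusion list option, assuming the NSX-v REST API endpoint for the DFW exclusion list (the VM MoRef vm-42 is a placeholder; the same change can be made from the NSX Manager UI):

PUT https://<nsx-manager>/api/2.1/app/excludelist/vm-42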

I hope you found this article to be as interesting as this issue was to me. Until next time!

Troubleshooting vCloud Director Internal Networks

These days I’ve been spending my time working with NSX and integrating it with vCloud Director. During some of these tests I ran into an issue with network connectivity on internal networks in vCloud Director.
To expound on the issue: I created an internal virtual datacenter network in vCloud Director and enabled DHCP services on the internal NSX Edge virtual machine that gets deployed for this network. I then deployed two Linux virtual machines connected to this internal network on two different ESX hosts. These virtual machines should have received an IP address from the DHCP scope configured on the Edge, but for some reason they were not getting an IP address and were unable to ping the gateway (the interface on the Edge device).

To isolate whether this issue was specific to the Linux guests, I moved all three virtual machines (the two Linux machines and the NSX Edge) to the same ESX host and restarted the networking service. The machine was assigned an IP address from the configured DHCP scope and was able to ping its gateway. So there was nothing wrong with the TCP/IP stack in the guest, since network traffic between virtual machines on the same ESX host never traverses the external network and is done in memory.
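
Since traffic between hosts rides the VXLAN transport network, a quick sanity check at this point is to ping the remote VTEP from the host's VXLAN network stack at full frame size; a sketch, assuming placeholder vmknic and VTEP addresses (1572 bytes of payload plus headers exercises the 1600 MTU):

~ # ping ++netstack=vxlan -d -s 1572 -I vmk1 192.168.250.52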

Digging a little deeper: The arcane world of log analysis

Bringing out the geek in me, I started trawling through the vmkernel logs on the ESX host to see what happens when the virtual machine powers up, i.e. does it connect to the virtual port…

2014-05-19T09:34:02.904Z cpu15:29956759)World: vm 29956760: 1462: Starting world vmm0:org1-rhel2_(bc4599c3-ff8e-432b-863e-1cdcef544661) of type 8
2014-05-19T09:34:02.904Z cpu15:29956759)Sched: vm 29956760: 6410: Adding world 'vmm0:org1-rhel2_(bc4599c3-ff8e-432b-863e-1cdcef544661)', group 'host/user/pool3', cpu: shares=-3 min=200 minLimit=-1
max=1000, mem: shares=-3 min=3072 minLimit=-1 max=16384
2014-05-19T09:34:02.904Z cpu15:29956759)Sched: vm 29956760: 6425: renamed group 57859289 to vm.29956759
2014-05-19T09:34:02.904Z cpu15:29956759)Sched: vm 29956760: 6442: group 57859289 is located under group 54783989
2014-05-19T09:34:02.907Z cpu15:29956759)MemSched: vm 29956759: 8263: extended swap to 28290 pgs
2014-05-19T09:34:03.089Z cpu15:29956759)VSCSI: 3750: handle 8370(vscsi0:0):Using sync mode due to sparse disks
2014-05-19T09:34:03.089Z cpu15:29956759)VSCSI: 3792: handle 8370(vscsi0:0):Creating Virtual Device for world 29956760 (FSS handle 1150128849) numBlocks=4194304 (bs=512)
2014-05-19T09:34:03.244Z cpu4:29956760)Net: 2292: connected org1-rhel2 (bc4599c3-ff8e-432b-863e-1cdcef544661).eth0 eth0 to vDS, portID 0x30001e7
2014-05-19T09:34:03.244Z cpu4:29956760)Net: 3055: associated dvPort 1683 with portID 0x30001e7
2014-05-19T09:34:03.247Z cpu4:29956760)NetPort: 2862: resuming traffic on DV port 1683
2014-05-19T09:34:03.247Z cpu4:29956760)vxlan: VDL2_CPSetCPEnabled:2840: Control plane enabled on VXLAN network[5001]
.
.
2014-05-19T09:39:24.824Z cpu11:27610460)WARNING: vxlan: VDL2CPCheckConnUpCB:311: Control plane connection of VXLAN network[5001] is down

The above log snippet tracks the power-on task for the virtual machine (org1-rhel2), and it's quite evident from the last line that the control plane connection is down. These internal networks use VXLAN as their underlying transport, and since VXLAN uses a controller in unicast mode, the next thing to check is whether the ESX hosts can communicate with the NSX controller.
On the ESX host, we can query the VDS for its VXLAN configuration using esxcli.

~ # esxcli network vswitch dvs vmware vxlan list --vds-name Nebula-Networks

VDS ID                                           VDS Name         MTU   Segment ID   Gateway IP     Gateway MAC        Network Count  Vmknic Count
-----------------------------------------------  ---------------  ----  -----------  -------------  -----------------  -------------  ------------
d7 e6 3d 50 19 d7 02 36-f4 23 96 fe 64 46 1c 33  Nebula-Networks  1600  192.168.1.0  192.168.1.254  00:21:55:08:ec:40  2              1

We immediately noticed that the connection to the controller was down:

~ # esxcli network vswitch dvs vmware vxlan network list --vds-name Nebula-Networks

VXLAN ID  Multicast IP               Control Plane  Controller Connection  Port Count  MAC Entry Count  ARP Entry Count
--------  -------------------------  -------------  ---------------------  ----------  ---------------  ---------------
5000      N/A (headend replication)  Enabled ()     192.168.1.50 (down)    2           0                0
5001      N/A (headend replication)  Enabled ()     192.168.1.50 (down)    1           0                0

The ESX host establishes a connection to the NSX controller using a user world daemon, netcpa. The netcpa.log shows communication with the controller as well as the updates pushed from the controller down to the ESX host. Looking at these logs, it's clear that the connection is down.

~ # tail -f /var/log/netcpa.log
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] Core: Sharding connection 192.168.1.50:0 is timeout
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] App CORE : 0 unregister connection to 192.168.1.50:0
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] User of connection 192.168.1.50:0
2014-05-19T09:34:09.615Z [37281B70 info 'Default'] App CORE : 0 register connection to existing controller to 192.168.1.50 port 1234
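
The control plane session itself is just a TCP connection from the host to the controller on port 1234, so its state can also be checked directly on the host; a sketch, grepping the connection table for the controller port:

~ # esxcli network ip connection list | grep 1234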

To isolate further, on comparing the MAC addresses seen for the controller's IP, it was found that the controller's IP had been assigned to another machine on the network. After shutting down that machine and restarting the netcpa agent, the ESX host was able to re-establish a connection with the controller, as the netcpa.log snippet further below shows.
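
For reference, a sketch of both steps; the Linux interface name eth0 is a placeholder:

# from a Linux box on the same segment: a duplicated IP answers from two different MACs
$ arping -I eth0 192.168.1.50
# on the ESX host: restart the netcpa agent
~ # /etc/init.d/netcpad restart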

~ # tail -f /var/log/netcpa.log
2014-05-19T10:37:10.471Z [5DC5DB70 info 'Default'] Core: ShardingSlice length of peer 192.168.1.50: 4194304
2014-05-19T10:37:10.471Z [5DC5DB70 info 'Default'] Vxlan: core app ready on 192.168.1.50:0
2014-05-19T10:37:10.472Z [5DC5DB70 info 'Default'] Vxlan: send VNI Membership Update(Join) to the controller: VNI 5000 controller 192.168.1.50
2014-05-19T10:37:10.472Z [5DC5DB70 info 'Default'] Vxlan: send VNI Membership Update(Join) to the controller: VNI 5001 controller 192.168.1.50
2014-05-19T10:37:10.472Z [5DC5DB70 info 'Default'] Core: Controller is ready: 192.168.1.50:0
2014-05-19T10:37:10.472Z [FFE59100 info 'Default'] Core: Sharding Segment Update message: server 192.168.1.50 startSliceId 0 numSlices 1024
2014-05-19T10:37:10.473Z [FFE59100 info 'Default'] Vxlan: receive VNI Membership Update(Join) from the controller: VNI 5000 controller 192.168.1.50 len 23
2014-05-19T10:37:10.473Z [FFE59100 info 'Default'] Vxlan: set VNI 5000 (mcast proxy: Enabled, arp proxy: Enabled)
2014-05-19T10:37:10.474Z [FFE59100 info 'Default'] Vxlan: receive VNI Membership Update(Join) from the controller: VNI 5001 controller 192.168.1.50 len 23
2014-05-19T10:37:10.474Z [FFE59100 info 'Default'] Vxlan: set VNI 5001 (mcast proxy: Enabled, arp proxy: Enabled)

If the controller IP address has changed and cannot be reverted to the original, the /etc/vmware/netcpa/config-by-vsm.xml file on the ESX host can be edited to point the host at the new controller IP address.
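
From memory, the relevant stanza looks roughly like the sketch below; treat the exact element names as an assumption and verify against your own file, and restart netcpad afterwards so netcpa picks up the change:

<connectionList>
  <connection id="0">
    <port>1234</port>
    <server>192.168.1.50</server>    <!-- update this to the new controller IP -->
    <sslEnabled>true</sslEnabled>
  </connection>
</connectionList>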

While this issue may be quite simple and something that can easily happen in a large network, I hope you found the approach to the problem useful. Feedback welcome!