Bug 848101 - 3.1 beta2 [vdsm] port-mirroring: vdsm doesn't remove port-mirroring after migration ends successfully on source (also for hot-plug)
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: x86_64 Linux
Priority: high   Severity: urgent
Target Milestone: beta
Target Release: ---
Assigned To: Dan Kenigsberg
QA Contact: GenadiC
Whiteboard: network
Keywords: ZStream
Depends On:
Blocks:
Reported: 2012-08-14 11:31 EDT by Haim
Modified: 2014-01-12 19:53 EST (History)
CC List: 11 users

See Also:
Fixed In Version: vdsm-4.9.6-32.0
Doc Type: Bug Fix
Doc Text:
Previously, VDSM did not remove port mirroring on a virtual machine's source network after the virtual machine had been migrated. This blocked all traffic to the bridge network, as the mirroring destination did not exist after the migration succeeded. VDSM now implements unsetPortMirroring, which removes port mirroring on the source network when hot unplugging the mirroring target, or after the virtual machine is successfully migrated.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-04 14:05:48 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Haim 2012-08-14 11:31:17 EDT
Description of problem:

vdsm does not remove the port mirroring on the source after migration succeeds. This blocks all traffic to the bridge, since the mirroring destination no longer exists.
The port mirroring should be removed before the tap fd is
closed. (For hot-unplug, vdsm needs to remove the port mirroring after
device_del but before netdev_del. For migration, vdsm needs to remove the
port mirroring before quitting the source guest.)

note for QE: 

- please test both hot-plug and migration when verifying this fix.

expected results:

- vdsm should remove the port mirroring before closing the fd

current results:
- vdsm tries to remove port mirroring after closing the fd, which prevents traffic from leaving the bridge since the mirroring destination no longer exists.
Comment 1 Itamar Heim 2012-08-14 11:36:49 EDT
what's the use case for live migrating a VM with port mirroring?
Comment 3 lpeer 2012-08-15 04:34:27 EDT
(In reply to comment #1)
> what's the use case for live migrating a VM with port mirroring?

1. Let's assume we have an appliance that monitors traffic from a specific set of VMs. Once we have VM affinity and we can define that multiple VMs must run on the same host (and migrate together, I assume), a port mirroring appliance can be one of these VMs and migrate with them.

2. If we want to sample the network, I don't see a reason to lock the VM to a specific host.

3. Going forward we would like to support/implement a solution where a single VM can monitor the traffic of the whole network and we won't need one appliance per host (we haven't found a way to do it yet). Then it would be really useful to support VM migration with port-mirroring.

Generally, when discussing this feature we agreed that if a user would like to pin the appliance VM to a host he has the tools to do so, but there is no reason to force him to.

Implementation-wise we figured that there is overhead for supporting migration with port-mirroring in vdsm, but there is also an overhead for blocking migration and preventing multiple appliances on the same host in the engine. When we looked at the complications of solving the issue we agreed the simple solution is to solve this at the vdsm level.
Comment 4 Yaniv Kaul 2012-08-15 05:15:31 EDT
(In reply to comment #3)
> (In reply to comment #1)
> > what's the use case for live migrating a VM with port mirroring?
> 
> 1. Let's assume we have an appliance that monitors traffic from a specific
> set of VMs. Once we have VM affinity and we can define that multiple VMs
> must run on the same host (and migrate together, I assume), a port mirroring
> appliance can be one of these VMs and migrate with them.

You will miss packets during migration.

> 
> 2. If we want to sample the network, I don't see a reason to lock the VM to a
> specific host.

Again, during migration, you will miss packets.

> 
> 3. Going forward we would like to support/implement a solution where a
> single VM can monitor the traffic of the whole network and we won't need one
> appliance per host (we haven't found a way to do it yet). Then it would be
> really useful to support VM migration with port-mirroring.

That really implies that the port-mirroring VM is not moving, and traffic is tunneled to it.

> 
> Generally, when discussing this feature we agreed that if a user would like
> to pin the appliance VM to a host he has the tools to do so, but there is no
> reason to force him to.
> 
> Implementation-wise we figured that there is overhead for supporting
> migration with port-mirroring in vdsm, but there is also an overhead for
> blocking migration and preventing multiple appliances on the same host in
> the engine. When we looked at the complications of solving the issue we
> agreed the simple solution is to solve this at the vdsm level.
Comment 5 Andrew Cathrow 2012-08-15 09:30:06 EDT
I'm not sure I understand the use case for migrating the VM.
Making it non-migratable makes more sense to me.

When we have OVS support and a controller we can do more sophisticated handling of monitor/span ports but for now forcing this VM to be non-migratable seems to be the right thing.

I don't see a reason to group a monitoring VM with application VMs as described in point 1 in comment #3. 

In these environments the monitoring VMs are typically linked at a higher level - eg. a central system collates data from multiple appliances for analysis. Moving a monitoring VM doesn't seem like an important use case.

Adding Simon to make the final call.
Comment 6 Dan Kenigsberg 2012-08-30 15:23:32 EDT
unset port mirroring when hot unplugging the mirroring target
http://gerrit.ovirt.org/7425
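Per the Doc Text, the fix adds an unsetPortMirroring counterpart to the existing mirroring setup. Since vdsm builds port mirroring on `tc` ingress/mirred rules, the pair looks roughly like the sketch below; the exact command layout here is illustrative and not a copy of the gerrit change.

```python
import subprocess

def set_port_mirroring(network, target):
    """Mirror all traffic on bridge `network` into tap device `target`
    (illustrative sketch of a tc mirred based setup)."""
    # ingress qdisc catches packets arriving on the bridge
    subprocess.check_call(["tc", "qdisc", "add", "dev", network, "ingress"])
    # one filter per direction: ingress ("ffff:") and egress ("root")
    for parent in ("ffff:", "root"):
        subprocess.check_call(
            ["tc", "filter", "add", "dev", network, "parent", parent,
             "protocol", "all", "u32", "match", "u8", "0", "0",
             "action", "mirred", "egress", "mirror", "dev", target])

def unset_port_mirroring(network, target):
    """The counterpart this bug called for: tear the rules down while
    `target` still exists (before netdev_del / before quitting the
    source guest)."""
    subprocess.call(["tc", "qdisc", "del", "dev", network, "ingress"])
    subprocess.call(["tc", "filter", "del", "dev", network, "parent", "root"])
```

The teardown must run before the mirroring target's tap device disappears; otherwise the dangling mirred action is what blocked bridge traffic in this bug.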
Comment 8 GenadiC 2012-09-10 08:58:46 EDT
Verified in SI17
Comment 12 errata-xmlrpc 2012-12-04 14:05:48 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html
