+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1414970 +++
======================================================================

Description of problem:

When testing tape passthrough to a VM via the SCSI capability, one can still try to live migrate the VM from one host to another in the same cluster. The operation will eventually fail at the vdsm level and the VM will remain running on the source host. The idea of this BZ is to block/limit the migration feature when a VM has a host device passed through / attached to it, instead of allowing the migration and failing later.

Version-Release number of selected component (if applicable):
rhevm-4.0.6.3-0.1.el7ev.noarch
qemu-kvm-rhev-2.6.0-27.el7.x86_64
vdsm-4.18.21-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Assign an FC tape device (or an emulated one) from the host to the VM in Virtual Machines -> Host Devices -> Add device
2. Start the VM.
3. Migrate it to another host in the cluster.

Actual results:

Migration will eventually fail in vdsm with:

~~~
Thread-54::ERROR::2017-01-19 17:28:18,500::migration::383::virt.vm::(run) vmId=`5f532b0a-0702-456a-a83f-b1b682bf2fea`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 365, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 438, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 498, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 478, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: Requested operation is not valid: domain has assigned non-USB host devices
~~~

Expected results:

Block/limit/warn about the migration in the UI before attempting it.

Additional info:

(Originally by Javier Coscia)
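The requested pre-migration check could, conceptually, be done the same way libvirt does it: by scanning the domain XML for passed-through non-USB `<hostdev>` elements before the migration is even attempted. A minimal sketch (the function name and sample XML are illustrative, not actual vdsm/engine code):

```python
# Hypothetical sketch: detect non-USB host devices in a libvirt domain XML
# so migration can be blocked up front, instead of failing later with
# "domain has assigned non-USB host devices".
import xml.etree.ElementTree as ET

def has_non_usb_hostdev(domain_xml):
    """Return True if the domain XML contains a passed-through
    host device of any type other than 'usb'."""
    root = ET.fromstring(domain_xml)
    for hostdev in root.iter('hostdev'):
        if hostdev.get('type') != 'usb':
            return True
    return False

# Sample domain XML with a SCSI host device attached (e.g. a tape drive).
DOMAIN_XML = """
<domain type='kvm'>
  <devices>
    <hostdev mode='subsystem' type='scsi'>
      <source>
        <adapter name='scsi_host3'/>
        <address bus='0' target='0' unit='0'/>
      </source>
    </hostdev>
  </devices>
</domain>
"""

if has_non_usb_hostdev(DOMAIN_XML):
    print("migration blocked: VM has non-USB host devices attached")
```

A real implementation would fetch the XML via `virDomainGetXMLDesc` and surface the result as a UI warning before calling `migrateToURI3`.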
(In reply to Javier Coscia from comment #0)
> Description of problem:
>
> Testing Tape passthrough in a VM through scsi capability, one can still try
> to live migrate a VM from one host to another in the same cluster.
> The operation will eventually fail at vdsm level and the VM will remain
> running in the source host.

How could it have migration enabled? The same rule as for PCI passthrough should be applied - the guest needs to be pinned to the host.

(Originally by michal.skrivanek)
Yes, normally the scheduling filter policy unit should filter out the other hosts and thus prevent the migration, but from the logs it seems the operation was allowed. This may indicate a bug in the procedure that checks the availability of free host devices on the host - I will need to investigate the logs further.

(Originally by Martin Betak)
Well, the HostDevice filter scheduling policy unit should - in conjunction with the PinToHost filter scheduling policy unit - prevent such behavior. One thing that could cause this is that the PinToHost policy unit was disabled. @Javier, can you please confirm that the "PinToHost" policy unit was active at that time?

(Originally by Martin Betak)
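For readers unfamiliar with the scheduler, a filter policy unit simply removes unsuitable hosts from the candidate list. A conceptual sketch of what PinToHost does (illustrative only, not the actual oVirt engine code, which is Java):

```python
# Conceptual sketch of the PinToHost filter policy unit: when a VM is
# pinned, only the pinned host(s) survive filtering, so the scheduler
# can never place or migrate the VM anywhere else.
def pin_to_host_filter(candidate_hosts, vm):
    """Filter candidate hosts down to the VM's pinned hosts, if any."""
    pinned = vm.get('pinned_hosts')
    if not pinned:
        return list(candidate_hosts)   # VM not pinned: no restriction
    return [h for h in candidate_hosts if h in pinned]

# A VM with a host device attached should be pinned to its host:
vm = {'name': 'tape-vm', 'pinned_hosts': {'host1'}}
print(pin_to_host_filter(['host1', 'host2'], vm))
```

If this unit is disabled (or the VM is not actually pinned), all hosts pass through and the bad migration attempt reaches vdsm, which is consistent with the behavior reported above.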
OK, it turns out that this was a bug in the HostDevice filter policy unit that, under certain conditions, allowed VMs to be run on or migrated to hosts completely different from the one they are "pinned to". Fix posted upstream.

(Originally by Martin Betak)
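To illustrate the intended behavior of the unit that was fixed, here is a hedged sketch of the HostDevice filter's job: a host is a valid target only if every host device the VM requires is free on that host. Names and data shapes are hypothetical, not the engine's actual implementation:

```python
# Conceptual sketch of the HostDevice filter policy unit: keep only
# hosts on which all of the VM's required host devices are available.
def host_device_filter(candidate_hosts, required_devices, free_devices_by_host):
    """required_devices: set of device names the VM needs.
    free_devices_by_host: map host -> set of free device names."""
    return [
        host for host in candidate_hosts
        if required_devices <= free_devices_by_host.get(host, set())
    ]

# Example: a tape device exists (and is free) only on host1, so only
# host1 should survive filtering for this VM.
required = {'scsi_host3/0:0:0:0'}
free = {'host1': {'scsi_host3/0:0:0:0'}, 'host2': set()}
print(host_device_filter(['host1', 'host2'], required, free))
```

The reported bug corresponds to this filter erroneously letting hosts like `host2` through, so the migration was attempted and then failed later in vdsm/libvirt.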
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [FOUND NON-ACKED FLAGS: {'rhevm-4.1.z': '?'}] For more info please contact: rhv-devops
Verified with:
Red Hat Virtualization Manager Version: 4.1.2.1-0.1.el7

Steps:
1. Create a VM with a host device attached
2. Run the VM on a host
3. Migrate the VM

Results:
Migration failed. I filed a BZ asking why the migration failed:
https://bugzilla.redhat.com/show_bug.cgi?id=1448689
What do you mean by "migrate failed"? Also, did you notice https://bugzilla.redhat.com/show_bug.cgi?id=1414970#c14 ?
(In reply to Michal Skrivanek from comment #20)
> What do you mean by "migrate failed"?

The migration failed because a host device is attached to the VM.

> Also, did you notice https://bugzilla.redhat.com/show_bug.cgi?id=1414970#c14
> ?

@Michael: Can you verify it with SR-IOV? See https://bugzilla.redhat.com/show_bug.cgi?id=1414970#c14
Hi Israel,

We need to test whether this fix broke our SR-IOV migration on 4.1.2. We will run automation to verify there is no regression in our feature caused by this fix, and once it passes, I will ack.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1280