Description of problem:
The cluster policy behaves as if the network filter is in use, even when the network filter is not enabled.

Version-Release number of selected component (if applicable):
is17

How reproducible:
always

Steps to Reproduce:
1. Create a cluster with 2 hosts.
2. Create a cluster policy with only the memory filter.
3. Assign this policy to the cluster with the hosts.
4. Create a non-required network.
5. Assign this network to host1.
6. Create a VM with a vNIC on the non-required network.
7. Run this VM.
8. Migrate the VM.

Actual results:
It is not possible to migrate the VM to the host without the non-required network.

Expected results:
It should be possible to migrate the VM, because the network filter is not in use.
I cannot reproduce this on master, so it seems we have already fixed this. We need to find the patch that did that, though.
Created attachment 820285 [details]
engine.log

Update: I noticed that there is a lingering running task: "Migrating VM vm to Host <UNKNOWN>" (please also fix that "<UNKNOWN>"). Attaching logs from the engine, host1, and host2.
Created attachment 820286 [details]
vdsm log from host1
Created attachment 820287 [details]
vdsm log from host2
If the device is really not required, this should probably not happen.

Thread-1063::ERROR::2013-11-06 10:45:47,313::vm::321::vm.Vm::(run) vmId=`c96c7bf5-5f2f-4f44-baa9-7432c7f59c19`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 308, in run
  File "/usr/share/vdsm/vm.py", line 385, in _startUnderlyingMigration
  File "/usr/share/vdsm/vm.py", line 836, in f
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
libvirtError: Cannot get interface MTU on 'nonrq': No such device

After seeing the logs, this is definitely not a scheduling issue, as the migration was started properly.

Michal: Can you ask somebody from the virt team to take a look at this, please?
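For what it's worth, the libvirt error above ("Cannot get interface MTU on 'nonrq': No such device") just means the destination host has no kernel network device for that network's bridge. A minimal sketch (not part of the original report; the device name `nonrq` is taken from the log) for confirming this on the destination host before retrying the migration:

```python
import os

def interface_exists(name):
    """Return True if the kernel exposes a network device with this name.

    On Linux every visible network device (including bridges) appears
    under /sys/class/net; libvirt's MTU lookup fails when it is absent.
    """
    return os.path.exists(os.path.join("/sys/class/net", name))

# On the failing destination host, the bridge for the non-required
# network is missing, so interface_exists("nonrq") returns False there,
# while e.g. interface_exists("lo") is True on any Linux host.
```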
A non-required network does not mean an optional one. If you start or migrate a VM with that network on a host that has no physical NIC assigned to that network, it will fail. The vNIC would have to be "unplugged" first and the VM then started/migrated, but there is no such infrastructure in the engine at the moment. And anyway, that may not be the desired behavior.
Closing based on comment 6, as this is not a scheduling issue and it works as currently designed. If you wish to change the design, please open an RFE for it.