Bug 1019812 - Cluster policy behaves as if it uses the network filter, even when the network filter is not used.
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Martin Sivák
QA Contact: Lukas Svaty
Whiteboard: sla
Keywords: Triaged
Depends On:
Blocks:
Reported: 2013-10-16 08:54 EDT by Ondra Machacek
Modified: 2016-02-10 15:13 EST (History)
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-06 13:17:51 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
engine.log (31.47 KB, text/plain)
2013-11-06 05:51 EST, Ondra Machacek
vdsm log from host1 (394.38 KB, text/plain)
2013-11-06 05:51 EST, Ondra Machacek
vdsm log from host2 (188.57 KB, text/plain)
2013-11-06 05:52 EST, Ondra Machacek

Description Ondra Machacek 2013-10-16 08:54:03 EDT
Description of problem:
The cluster policy behaves as if it uses the network filter, even when the network
filter is not used.

Version-Release number of selected component (if applicable):
is17

How reproducible:
always

Steps to Reproduce:
1. Create a cluster with 2 hosts.
2. Create a cluster policy with only the memory filter.
3. Assign this policy to the cluster.
4. Create a non-required network.
5. Assign this network to host1.
6. Create a VM with a vNIC on the non-required network.
7. Run this VM.
8. Migrate the VM (a scripted sketch of this step follows the expected results below).

Actual results:
It is not possible to migrate the VM to the host without the non-required network.

Expected results:
It should be possible to migrate the VM, because the network filter is not used.
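
For reference, a minimal sketch of step 8 driven through the oVirt Python SDK instead of the UI. This is illustrative only: it assumes the v4 SDK (which postdates this 3.3-era report), and the engine URL, credentials, and the VM name 'vm' are placeholders.

# Hedged sketch: trigger the migration without naming a destination host, so
# the cluster policy (memory filter only) picks the target. Placeholders:
# engine URL, credentials, VM name.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=vm')[0]

# No host is specified, so scheduling alone decides where the VM lands.
# With only the memory filter enabled, host2 should be a valid candidate
# even though the non-required network is attached only to host1.
vms_service.vm_service(vm.id).migrate()

connection.close()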
Comment 1 Martin Sivák 2013-10-18 05:40:05 EDT
I cannot reproduce this on master, so it seems we have already fixed this. We need to find the patch that did that, though.
Comment 2 Ondra Machacek 2013-11-06 05:51:26 EST
Created attachment 820285 [details]
engine.log

Update:
I noticed that a running task persists: Migrating VM vm to Host <UNKNOWN>
(please also fix that "<UNKNOWN>")

Attaching logs from the engine, host1, and host2.
Comment 3 Ondra Machacek 2013-11-06 05:51:55 EST
Created attachment 820286 [details]
vdsm log from host1
Comment 4 Ondra Machacek 2013-11-06 05:52:15 EST
Created attachment 820287 [details]
vdsm log from host2
Comment 5 Martin Sivák 2013-11-06 09:25:03 EST
If the device is really not required, this should probably not happen.

Thread-1063::ERROR::2013-11-06 10:45:47,313::vm::321::vm.Vm::(run) vmId=`c96c7bf5-5f2f-4f44-baa9-7432c7f59c19`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 308, in run
  File "/usr/share/vdsm/vm.py", line 385, in _startUnderlyingMigration
  File "/usr/share/vdsm/vm.py", line 836, in f
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
libvirtError: Cannot get interface MTU on 'nonrq': No such device

After seeing the logs, this is definitely not a scheduling issue, as the migration was started properly.

Michal: Can you ask somebody from the virt team to take a look at this please?
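
For context, a minimal check that could be run on the destination host to confirm what libvirt is reporting. It assumes the non-required logical network maps to a Linux bridge named 'nonrq' on the host, as the error message suggests.

# Hedged diagnostic sketch: verify whether the bridge libvirt complains about
# exists on this host. Assumes the logical network is backed by a Linux
# bridge named 'nonrq', as the "Cannot get interface MTU" error implies.
import os

bridge = 'nonrq'
mtu_path = '/sys/class/net/%s/mtu' % bridge

if os.path.exists(mtu_path):
    with open(mtu_path) as f:
        print('bridge %s exists, MTU=%s' % (bridge, f.read().strip()))
else:
    # This is the condition the migration hits on the destination: libvirt
    # cannot query the MTU of a device that does not exist there.
    print('bridge %s is missing on this host' % bridge)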
Comment 6 Michal Skrivanek 2013-11-06 10:08:07 EST
A non-required network doesn't mean an optional one. If you start or migrate a VM that uses such a network on a host without a physical NIC assigned to that network, it will fail.

The vNIC would have to be "unplugged" first and then the VM started/migrated, but there is no such infrastructure in the engine at the moment. In any case, that may not be the desired behavior.
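
For illustration, a hedged sketch of the manual workaround this comment alludes to, using the oVirt Python SDK v4 (which postdates this 3.3-era report): deactivate the vNIC so the destination host no longer needs the bridge, then migrate. The engine URL, credentials, and the 'vm' / 'nic1' names are placeholders, and re-activating the NIC afterwards would still require the network on the destination host.

# Hedged sketch of the manual "unplug, then migrate" workaround. Assumes the
# v4 Python SDK; URL, credentials, and the names 'vm' / 'nic1' are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=vm')[0]
vm_service = vms_service.vm_service(vm.id)

nics_service = vm_service.nics_service()
nic = next(n for n in nics_service.list() if n.name == 'nic1')
nic_service = nics_service.nic_service(nic.id)

# Hot-unplug the vNIC so the destination host does not need the bridge.
nic_service.deactivate()

# Migrate without naming a host; the cluster policy chooses the target.
vm_service.migrate()

# Note: re-plugging (nic_service.activate()) would only succeed once the
# non-required network is also attached to the destination host.

connection.close()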
Comment 7 Doron Fediuck 2013-11-06 13:17:51 EST
Closing based on comment 6, as this is not a scheduling issue and works as currently designed.
If you wish to change the design, please open an RFE for it.
