Bug 1019812

Summary: Cluster policy behaves as if it uses the network filter, even when the network filter is not used.
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.3.0
Target Release: 3.3.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED NOTABUG
Severity: medium
Priority: unspecified
Reporter: Ondra Machacek <omachace>
Assignee: Martin Sivák <msivak>
QA Contact: Lukas Svaty <lsvaty>
CC: acathrow, dfediuck, iheim, lpeer, mavital, michal.skrivanek, Rhev-m-bugs, yeylon
Keywords: Triaged
Whiteboard: sla
oVirt Team: SLA
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-11-06 18:17:51 UTC

Attachments:
  engine.log
  vdsm log from host1
  vdsm log from host2

Description Ondra Machacek 2013-10-16 12:54:03 UTC
Description of problem:
Cluster policy behaves as if it uses the network filter, even when the network
filter is not used.

Version-Release number of selected component (if applicable):
is17

How reproducible:
always

Steps to Reproduce:
1. Create a cluster with 2 hosts.
2. Create a cluster policy with only the memory filter.
3. Assign this policy to the cluster.
4. Create a non-required network.
5. Assign this network to host1.
6. Create a VM with a vNIC on the non-required network.
7. Run the VM.
8. Migrate the VM.

Actual results:
It is not possible to migrate the VM to the host without the non-required network.

Expected results:
It should be possible to migrate the VM, because the network filter is not used.

Comment 1 Martin Sivák 2013-10-18 09:40:05 UTC
I cannot reproduce this on master, so it seems we have already fixed it. We need to find the patch that did that, though.

Comment 2 Ondra Machacek 2013-11-06 10:51:26 UTC
Created attachment 820285 [details]
engine.log

Update:
I noticed that a task remains running: Migrating VM vm to Host <UNKNOWN>
(please also fix that "<UNKNOWN>").

Attaching logs from the engine, host1 and host2.

Comment 3 Ondra Machacek 2013-11-06 10:51:55 UTC
Created attachment 820286 [details]
vdsm log from host1

Comment 4 Ondra Machacek 2013-11-06 10:52:15 UTC
Created attachment 820287 [details]
vdsm log from host2

Comment 5 Martin Sivák 2013-11-06 14:25:03 UTC
If the device is really not required, this should probably not happen.

Thread-1063::ERROR::2013-11-06 10:45:47,313::vm::321::vm.Vm::(run) vmId=`c96c7bf5-5f2f-4f44-baa9-7432c7f59c19`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 308, in run
  File "/usr/share/vdsm/vm.py", line 385, in _startUnderlyingMigration
  File "/usr/share/vdsm/vm.py", line 836, in f
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
libvirtError: Cannot get interface MTU on 'nonrq': No such device
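The libvirtError above means libvirt could not resolve the device 'nonrq' on the destination host. As a hedged illustration (not engine or vdsm code): on Linux, network devices, including the bridges libvirt attaches vNICs to, are visible under /sys/class/net, so the failing lookup can be sketched as:

```python
import os

def iface_exists(name):
    # Linux exposes every network device (including bridges such as
    # 'nonrq') under /sys/class/net; libvirt's "Cannot get interface
    # MTU" error indicates this lookup failed on the destination host.
    return os.path.exists(os.path.join("/sys/class/net", name))

# On a destination host that was never assigned the non-required
# network, the bridge simply does not exist:
print(iface_exists("nonrq"))
```

A pre-flight check of this kind on the destination host would fail fast instead of aborting the migration mid-flight.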

After seeing the logs, this is definitely not a scheduling issue, as the migration was started properly.

Michal: Can you ask somebody from the virt team to take a look at this please?

Comment 6 Michal Skrivanek 2013-11-06 15:08:07 UTC
A non-required network doesn't mean optional. If you start/migrate a VM with that network on a host without a physical NIC assigned to that network, it will fail.

The vNIC would have to be "unplugged" first and the VM then started/migrated, but there's no such infrastructure in the engine at the moment. And in any case it may not be the desired behavior.
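In libvirt terms, the unplug step described above corresponds to detaching the vNIC's device XML from the running domain before migrating. A minimal sketch, assuming a virtio vNIC bridged to the 'nonrq' network from this bug (the XML and helper are illustrative, not the engine's actual device definition):

```python
import xml.etree.ElementTree as ET

# Hypothetical vNIC definition matching the bug's setup: a virtio
# interface bridged to the non-required network 'nonrq'.
VNIC_XML = """
<interface type='bridge'>
  <source bridge='nonrq'/>
  <model type='virtio'/>
</interface>
"""

def bridge_name(device_xml):
    # Extract the bridge the vNIC is attached to; this is the device
    # libvirt fails to find ("Cannot get interface MTU on 'nonrq'")
    # when the destination host lacks the network.
    root = ET.fromstring(device_xml)
    return root.find("source").get("bridge")

print(bridge_name(VNIC_XML))  # nonrq

# With a live libvirt connection, the hot-unplug before migration
# would be roughly:
#   dom.detachDeviceFlags(VNIC_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
```

As comment 6 notes, the engine has no such unplug-then-migrate flow today, and adding one would be a design change, not a bug fix.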

Comment 7 Doron Fediuck 2013-11-06 18:17:51 UTC
Closing based on comment 6, as this is not a scheduling issue and it works as
currently designed.
If you wish to change the design please open an RFE for it.