Bug 1390575 - Import VM from data domain fails when importing a VM without re-assigning MACs and no MACs are left in the destination pool
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Network
Version: 4.1.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.1.1
Target Release: 4.1.1
Assignee: Yevgeny Zaspitsky
QA Contact: Michael Burman
URL:
Whiteboard:
Depends On:
Blocks: 1226206
 
Reported: 2016-11-01 12:39 UTC by Michael Burman
Modified: 2017-04-21 09:37 UTC
3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-21 09:37:00 UTC
oVirt Team: Network
Embargoed:
rule-engine: ovirt-4.1+


Attachments (Terms of Use)
engine log (1.09 MB, application/x-gzip)
2016-11-01 12:39 UTC, Michael Burman


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 71195 0 master MERGED engine: avoid failing on "no free macs" when no re-assign 2017-01-29 10:48:49 UTC
oVirt gerrit 71196 0 ovirt-engine-4.1 MERGED engine: avoid failing on "no free macs" when no re-assign 2017-02-01 10:21:05 UTC
oVirt gerrit 71251 0 ovirt-engine-4.1 MERGED core: replace "" invalid mac address from input with null 2017-02-01 09:33:29 UTC

Description Michael Burman 2016-11-01 12:39:15 UTC
Description of problem:
Import VM from data domain fails when importing a VM without re-assigning MACs and no MACs are left in the destination pool.

When importing a VM with vNICs without re-assigning MACs, and no MACs are left in the destination pool, the import fails. It should not fail, because no MACs are being re-assigned from the destination pool.
The import should fail only when MACs are being re-assigned and none are left in the pool.
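The desired check can be summarized in a minimal sketch (Python here for illustration; the actual ovirt-engine code is Java, and the function and parameter names below are hypothetical, not the engine's API):

```python
def should_block_import(reassign_macs: bool, vnic_count: int,
                        free_macs_in_pool: int) -> bool:
    """Decide whether the MAC pool check must block a VM import.

    Blocking is warranted only when the caller asked to re-assign MACs
    from the destination pool and the pool cannot supply enough of them.
    Importing a VM with its original MACs consumes no free pool MACs,
    so pool exhaustion alone must not fail the import (at most a
    warning should be emitted).
    """
    return reassign_macs and free_macs_in_pool < vnic_count
```

In the reported scenario (3 vNICs, 1 free MAC, re-assignment not requested) this returns `False`, i.e. the import should proceed.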

2016-11-01 14:10:38,924 - Dummy-1 - storagedomains - DEBUG - Action request content is --  url:/ovirt-engine/api/storagedomains/06d1a9ea-d224-4b27-8038-b50ae747fa61/vms/64339f86-e003-48e8-b2
97-5ba83a8ea7bd/register body:<action>
   <async>false</async>
   <cluster>
       <name>golden_env_mixed_1</name>
   </cluster>
   <grace_period>
       <expiry>10</expiry>
   </grace_period>
   <reassign_bad_macs>false</reassign_bad_macs>
</action>

2016-11-01 14:10:38,925 - Dummy-1 - storagedomains - INFO - Using Correlation-Id: storagedomains_syncAction_a3cdba2e-c9bf-4f55
2016-11-01 14:10:39,140 - Thread-2 - stuck_handler - WARNING - sys._current_frames failed with exception: 139655286601840

2016-11-01 14:10:39,984 - Dummy-1 - core_api - DEBUG - Request POST response time: 0.220
2016-11-01 14:10:39,985 - Dummy-1 - storagedomains - DEBUG - Cleaning Correlation-Id: storagedomains_syncAction_a3cdba2e-c9bf-4f55
2016-11-01 14:10:39,986 - Dummy-1 - api_utils - ERROR - Failed to syncAction element NOT as expected:
       Status: 409
       Reason: Conflict
       Detail: [Cannot import VM. Not enough MAC addresses left in MAC Address Pool.]

Version-Release number of selected component (if applicable):
4.1.0-0.0.master.20161031231324.git5d8702e.el7.centos.noarch + Yevgeny's rpms

How reproducible:
100% on master + Yevgeny's rpms

Steps to Reproduce:
1. Try to import a VM with 3 vNICs from a data domain into a destination DC whose MAC pool has only 1 MAC left, without re-assigning MACs.

Actual results:
Failed. [Cannot import VM. Not enough MAC addresses left in MAC Address Pool.]

Expected results:
Import should succeed, because no MACs are re-assigned from the pool.
The behavior should match 4.0.5 today:
import succeeds, with a warning that no MACs are left in the pool and another warning about MACs that are outside the pool range.

Additional info:
We are testing Yevgeny's rpms.

The only scenario that should fail the VM import is re-assigning MACs when no MACs are left in the pool.

Comment 1 Michael Burman 2016-11-01 12:39:41 UTC
Created attachment 1216096 [details]
engine log

Comment 2 Michael Burman 2016-11-01 15:19:04 UTC
This is the code we are testing - https://gerrit.ovirt.org/#/c/65278/

Comment 3 Yevgeny Zaspitsky 2017-01-03 12:27:45 UTC
Could this be re-tested on the 4.1 build?

Comment 4 Michael Burman 2017-01-05 09:06:14 UTC
Yes, it is still relevant for 4.1.0-0.4.master.20170104122005.git51b1bcf.el7.centos
The VM import is blocked when no MACs are left in the pool, even though we did not ask to re-assign any MACs from the destination pool.
We want to import the VM with its current MACs.
We should not be blocked; the import should succeed.

The only scenario that should fail the VM import is re-assigning MACs when no MACs are left in the pool.

Comment 5 Yevgeny Zaspitsky 2017-01-05 09:47:09 UTC
So the case is a VM with 3 vNICs, whereas only 2 MACs are available in the MAC pool. The vNICs bear 3 MACs; what are those MACs? Some of them (or all) must be either duplicates of existing ones or outside the pool's ranges.
The only valid scenario here is that there are duplicates and duplicates are allowed in the pool definition.
In the other cases:
* the MAC is outside the pool ranges - the VM is not allowed to be imported
* the MAC is a duplicate and duplicates are not allowed - the VM is not allowed to be imported
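The per-MAC cases listed above can be sketched as a small classifier (illustrative only; the names below are hypothetical and not the ovirt-engine API; MACs are assumed to be consistently formatted lowercase hex strings so lexicographic range comparison works):

```python
def classify_mac(mac, pool_ranges, used_macs, duplicates_allowed):
    """Classify one imported vNIC MAC against the destination pool,
    following the cases in comment 5.

    pool_ranges: list of (low, high) MAC strings defining the pool.
    used_macs:   set of MACs already allocated in the pool.
    """
    in_range = any(lo <= mac <= hi for lo, hi in pool_ranges)
    if not in_range:
        return "out-of-range"           # comment 5: import not allowed
    if mac in used_macs:
        # duplicates are tolerated only if the pool definition allows them
        return "duplicate" if duplicates_allowed else "duplicate-forbidden"
    return "ok"
```

(Comment 6 below disputes the "out-of-range" case: current behavior, and the behavior ultimately agreed in comment 7, is to allow it with a warning.)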

Comment 6 Michael Burman 2017-01-05 10:00:23 UTC
Yevgeny, it is a very simple scenario; please don't complicate it.
Yes, the MAC is outside the pool ranges, and there is no reason to block importing such a VM. It is perfectly valid and must be allowed! It is allowed today, with a warning about the MACs being out of range. There is nothing wrong with this.
It's the same as adding a new vNIC to a VM with a manual MAC outside the range. You can't block this!

Comment 7 Dan Kenigsberg 2017-01-18 10:40:05 UTC
We have discussed the issue face-to-face and accepted the following:


When we import a "bad" MAC and do not request to reassign it, we take it as is, without any modification.

This should happen regardless of how many free macs are available in the destination macpool.
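The behavior agreed above can be sketched as follows (a minimal sketch, not the actual ovirt-engine implementation; `pool` is a hypothetical object with `free_count()` and `allocate()`, and all names are illustrative):

```python
def import_vm_macs(vnic_macs, reassign, pool):
    """Apply the rule from comment 7.

    Without re-assignment, the VM's MACs are taken as-is regardless of
    how many free MACs the destination pool has; pool exhaustion only
    produces a warning. With re-assignment, the pool must be able to
    supply one MAC per vNIC, otherwise the import fails.
    """
    warnings = []
    if not reassign:
        if pool.free_count() < len(vnic_macs):
            warnings.append("Not enough MAC addresses left in pool (warning only)")
        return list(vnic_macs), warnings   # take the MACs as-is
    if pool.free_count() < len(vnic_macs):
        raise RuntimeError(
            "Cannot import VM. Not enough MAC addresses left in MAC Address Pool.")
    return [pool.allocate() for _ in vnic_macs], warnings
```

With `reassign=False` this never raises, matching the expected result in the bug description; the 409 in the log corresponds to the pre-fix code raising on the pool check even in that branch.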

Comment 8 Michael Burman 2017-02-12 10:08:22 UTC
Verified on - 4.1.1-0.1.el7

