Description of problem:
When switching the MAC pool of a specific DC, a MAC already in use under one MAC pool can still be allocated from another MAC pool, because the second pool does not know that the MAC is in use. As a result, automatic MAC allocation always fails. Likewise, adding a vNIC as unplugged with the same MAC succeeds when it should be blocked.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Use a setup with the Default MAC pool and a VM with at least one NIC
2. Create another MAC pool with the same range as the default one
3. Change the MAC pool of the DC to the new MAC pool
4. Change the MAC pool back to the default one
5. Try to add an additional NIC to the VM as plugged - fails
6. Try to add a vNIC with a custom MAC equal to the MAC of an existing vNIC on the VM - succeeds

Actual results:
5) Adding the new vNIC fails: it tries to allocate a MAC that is already in use, but because the MAC came back from another MAC pool, the pool does not know that.
6) Adding the vNIC as unplugged with the same MAC succeeds, though it should fail, for the same reason: the MAC came back from another pool which does not know it is already used.

Expected results:
Should work as expected

Additional info:
Fix to steps to reproduce (former step 4 was not needed):
1. Use a setup with the Default MAC pool and a VM with at least one NIC
2. Create another MAC pool with the same range as the default one
3. Change the MAC pool of the DC to the new MAC pool
4. Try to add an additional NIC to the VM as plugged - fails
5. Try to add a vNIC with a custom MAC equal to the MAC of an existing vNIC on the VM - succeeds
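The steps above can be illustrated with a toy model (hypothetical classes, not the actual ovirt-engine API) of why switching a DC between two MAC pools with the same range breaks allocation: the new pool has no record of MACs the old pool already handed out, so it hands out a duplicate.

```python
class MacPool:
    """Toy MAC pool: allocates the lowest free address from its range."""
    def __init__(self, macs):
        self.free = list(macs)   # addresses still available in this pool
        self.used = []           # addresses this pool knows it handed out

    def allocate(self):
        mac = self.free.pop(0)
        self.used.append(mac)
        return mac

# Example range; the actual addresses are irrelevant to the bug.
RANGE = ["00:1a:4a:16:01:51", "00:1a:4a:16:01:52"]
default_pool = MacPool(RANGE)
new_pool = MacPool(RANGE)             # step 2: same range as the default pool

vnic_mac = default_pool.allocate()    # the VM's existing vNIC (step 1)
# Step 3: the DC is switched to new_pool; it knows nothing about vnic_mac.
duplicate = new_pool.allocate()       # step 4: hands out the very same MAC
print(duplicate == vnic_mac)          # True -> plugging the new vNIC fails
```

The same state also explains step 5: new_pool does not count vnic_mac as used, so a custom-MAC vNIC reusing it passes validation.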
Does this happen in a subset of the default mac pool or with a different range?
Does it happen with a superset of the mac pool?
What is the mode of failure? Can you share engine.log and screenshot?
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset. Please set the correct milestone or add the z-stream flag.
It happens with the same range. When creating a new vNIC, it will by default take the first MAC in the range, so if you create a superset that starts with the same MAC as the original MAC pool, you will still fail. We did the test together with Alona, so she knows exactly where the bug in the code is.
Created attachment 1085435 [details] screen shot of an error
In oVirt, testing is done on a single release by default. Therefore I'm removing the 4.0 flag. If you think this bug must be tested in 4.0 as well, please re-add the flag. Please note we might not have the testing resources to handle the 4.0 clone.
Development of 'mac pools' went through several waterfalls back to the specification phase, with requirements/behavior changing wildly long after the deadline. Reading the feature page and the code, I believe the line "When DataCenter definition changes so that after change different pool is used, all MACs belonging to that data center are removed from old pool and reinserted to new one." is violated; the code implementing it was removed during code review. I did not find any code related to migrating MACs from one pool to another, although such code existed once.

What I believe should be done, in respect to this bug and the feature page, is to bring back the code for migrating MACs between pools: when a call is made to UpdateStoragePoolCommand with a request that changes the pool to another one, we need to find all MAC usages, free them in the old pool, and push them into the new one. There are at least two problems with that:

a) Figuring out which MACs belong to the current data center, since MAC pools are potentially shared between multiple data centers. We can find all NICs of the given DC and collect their MACs, but this is not correct, since we have no guarantee that those MACs were actually obtained from the pool. Maybe in the current code it's OK (no idea if it is), but a MAC pool is a generic concept and can provide a MAC to anyone for any use other than assigning a MAC address to a VmNetworkInterface.

b) The target MAC pool can have a different duplicate-allowance setting and can have insufficient ranges; this would mean that MACs would have to be added forcibly (ignoring the duplicate setting), and if a moved MAC is out of range, it would be added as a custom MAC address.

I need advice/a decision on how to fix this: whether we need to migrate between pools or want to do it differently. And if the former is true (do migrate), then how to find the MACs which should be released from one pool and registered in another.
1. Currently the macPool assigns addresses only to vNICs, so you have to search for all the MACs used by vNICs in the DC. Remove the MACs from the old pool (if duplicates are allowed, remove a MAC only if no other DC using the same MAC pool is using that address) and add them to the new one.
2. If the MACs used by the DC already exist in the new pool (and duplicates are not allowed), the 'change pool' operation should be blocked.
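A minimal sketch of the migration proposed above (names and data structures are illustrative, not the actual ovirt-engine code): collect the MACs of the DC's vNICs, release them from the old pool unless another DC sharing that pool uses them, register them in the new pool, and block the switch if the target pool disallows duplicates it would gain.

```python
def migrate_macs(dc_macs, old_pool, new_pool, macs_used_by_other_dcs):
    """dc_macs: MACs used by the DC's vNICs; pools are dicts with a
    'used' set and an 'allow_duplicates' flag (toy model)."""
    # Point 2: block if the target pool already contains one of the
    # DC's MACs and does not allow duplicates.
    if not new_pool["allow_duplicates"]:
        clashes = [m for m in dc_macs if m in new_pool["used"]]
        if clashes:
            raise ValueError(f"cannot switch pool, duplicate MACs: {clashes}")
    for mac in dc_macs:
        # Point 1: release from the old pool only if no other DC
        # sharing that pool is using the same address.
        if mac not in macs_used_by_other_dcs:
            old_pool["used"].discard(mac)
        new_pool["used"].add(mac)

old_pool = {"used": {"00:1a:4a:16:01:51"}, "allow_duplicates": False}
new_pool = {"used": set(), "allow_duplicates": False}
migrate_macs({"00:1a:4a:16:01:51"}, old_pool, new_pool,
             macs_used_by_other_dcs=set())
print(sorted(new_pool["used"]))   # the new pool now tracks the migrated MAC
```

This does not address problem (a) from the previous comment: it assumes the vNIC scan is a complete inventory of the pool's allocations.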
What if the source pool is configured not to allow duplicates, but *contains* them (remember the method 'forceAddMac', which allows adding a MAC regardless of anything)? Block or don't block? The answer determines how we check for duplicates. If we block it, the user probably can't do anything about it, and has to hunt down the duplicate MAC by himself, without any help from our app, if he wants to switch to this pool. If we allow the pool change in this situation but disallow it when duplicates are turned on, it feels weird (copying MACs when duplicates are disallowed, but not copying them when they are allowed).
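The dilemma can be shown with a toy model (the 'forceAddMac' name comes from the comment above; everything else is illustrative): a pool configured to disallow duplicates can still contain them if entries were force-added, so a duplicate check cannot rely on the allow-duplicates flag alone.

```python
class MacPool:
    """Toy pool with a duplicate check that forceAddMac bypasses."""
    def __init__(self, allow_duplicates=False):
        self.allow_duplicates = allow_duplicates
        self.macs = []

    def add_mac(self, mac):
        if not self.allow_duplicates and mac in self.macs:
            raise ValueError("duplicate MAC")
        self.macs.append(mac)

    def force_add_mac(self, mac):
        self.macs.append(mac)   # bypasses the duplicate check entirely

pool = MacPool(allow_duplicates=False)
pool.add_mac("00:1a:4a:16:01:51")
pool.force_add_mac("00:1a:4a:16:01:51")      # duplicate despite the setting
print(pool.macs.count("00:1a:4a:16:01:51"))  # 2
```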
This bug is referenced in git log for ovirt-engine-3.6.1.1. Please set target release to 3.6.1.1 accordingly unless additional patches are needed.
Verified on - 3.6.1.1-0.1.el6
According to verification status and target milestone this issue should be fixed in oVirt 3.6.1. Closing current release.