Description of problem:

The engine fails to add a new NIC to a VM if the next MAC in sequence from the pool is held by a VM on a different cluster.

Version-Release number of selected component (if applicable):
rhvm-4.3.5.6-0.1.el7.noarch

How reproducible:
Always

Steps to Reproduce:

1. Create 2 MAC Pools with 2 MAC addresses each:

engine=# select id,name,from_mac,to_mac from mac_pool_ranges,mac_pools where mac_pool_id = id ;
                  id                  | name |     from_mac      |      to_mac
--------------------------------------+------+-------------------+-------------------
 16577283-7775-4a18-8865-fecf04bee63f | P1   | 56:ff:ff:ff:ff:00 | 56:ff:ff:ff:ff:01
 12ed354b-2b1b-4f11-91f1-db09ff8b39d4 | P2   | 56:ff:ff:ff:00:00 | 56:ff:ff:ff:00:01

2. Create 2 Clusters, each using one of the MAC Pools:

engine=# select name,mac_pool_id from cluster_view ;
 name |             mac_pool_id
------+--------------------------------------
 C2   | 12ed354b-2b1b-4f11-91f1-db09ff8b39d4
 C1   | 16577283-7775-4a18-8865-fecf04bee63f

3. Create VM1 (empty).

4. Add a NIC to VM1:

engine=# select vm_name,cluster_name,name,mac_addr from vm_interface,vms where vm_interface.vm_guid=vms.vm_guid;
 vm_name | cluster_name | name |     mac_addr
---------+--------------+------+-------------------
 VM1     | C1           | nic1 | 56:ff:ff:ff:ff:00

5. Add another NIC to VM1:

engine=# select vm_name,cluster_name,name,mac_addr from vm_interface,vms where vm_interface.vm_guid=vms.vm_guid;
 vm_name | cluster_name | name |     mac_addr
---------+--------------+------+-------------------
 VM1     | C1           | nic1 | 56:ff:ff:ff:ff:00
 VM1     | C1           | nic2 | 56:ff:ff:ff:ff:01

6. Delete nic2 from VM1:

engine=# select vm_name,cluster_name,name,mac_addr from vm_interface,vms where vm_interface.vm_guid=vms.vm_guid;
 vm_name | cluster_name | name |     mac_addr
---------+--------------+------+-------------------
 VM1     | C1           | nic1 | 56:ff:ff:ff:ff:00

~~~
NOTE: The point of steps 4, 5 and 6 is to make the 'next' MAC from pool P1 be "56:ff:ff:ff:ff:00".
This makes step 9 attempt to take that MAC from P1 while it is in use by VM1.
~~~

7. Move VM1 to C2.

8. Create VM2 on C1 (empty):

engine=# select vm_name,cluster_name from vms;
 vm_name | cluster_name
---------+--------------
 VM1     | C2
 VM2     | C1

9. Add a NIC to VM2:

2019-10-10 13:13:39,014+10 WARN  [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (default task-14) [93099e4] Validation of action 'ActivateDeactivateVmNic' failed for user admin@internal-authz. Reasons: VAR__ACTION__ACTIVATE,VAR__TYPE__INTERFACE,NETWORK_MAC_ADDRESS_IN_USE,$MacAddress 56:ff:ff:ff:ff:00,$VmName VM1
2019-10-10 13:13:39,047+10 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [93099e4] EVENT_ID: NETWORK_ADD_VM_INTERFACE_FAILED(933), Failed to add Interface nic1 (VirtIO) to VM VM2. (User: admin@internal-authz)

However, 56:ff:ff:ff:ff:01 is free in P1: the engine failed to get the first MAC and did not try the next one. This only seems to happen when the VM holding the MAC (VM1) is on a different cluster (C2); otherwise the engine takes the next MAC when the 'next in sequence' is not free, and adding the NIC succeeds.

Actual results:
The engine fails to find the next free MAC and the NIC add fails, even though there are free MACs in the pool.

Expected results:
If the VM that holds the next MAC is in a different cluster, try to get another MAC.
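To make the expected behavior concrete, here is a minimal sketch in Java of the allocation loop described under "Expected results". This is illustrative only, not the engine's actual code: every name in it (MacPool, containsFreeMacs, nextAvailableMac, markAsUsed, inUseEngineWide) is a hypothetical stand-in for the real oVirt internals.

~~~
import java.util.function.Predicate;

public class MacAllocationSketch {

    /** Hypothetical stand-in for a per-cluster MAC pool. */
    public interface MacPool {
        boolean containsFreeMacs();   // any MAC left in the configured range?
        String nextAvailableMac();    // next free MAC in sequence
        void markAsUsed(String mac);  // take the MAC out of this pool
    }

    /**
     * Expected behavior per this report: if the next MAC in sequence is
     * already held by a VM on another cluster (an engine-wide duplicate),
     * skip it and keep trying instead of failing the whole NIC add.
     */
    public static String allocate(MacPool pool, Predicate<String> inUseEngineWide) {
        while (pool.containsFreeMacs()) {
            String candidate = pool.nextAvailableMac();
            pool.markAsUsed(candidate);
            if (!inUseEngineWide.test(candidate)) {
                return candidate; // e.g. 56:ff:ff:ff:ff:01 in step 9
            }
            // candidate (e.g. 56:ff:ff:ff:ff:00, held by VM1 on C2) is
            // skipped; the loop moves on to the next MAC in the range.
        }
        throw new IllegalStateException("MAC pool exhausted");
    }
}
~~~

With a loop like this, step 9 would hand out 56:ff:ff:ff:ff:01 instead of failing with NETWORK_MAC_ADDRESS_IN_USE.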
Is there a reason to share the MAC Pool between two clusters? If not, I recommend that each cluster uses its own MAC Pool, as described in the admin guide, 1.5. MAC Address Pools: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#sect-MAC_Address_Pools There are very similar known problems in bug 1446913 and bug 1410440.
(In reply to Dominik Holler from comment #1) > Is there a reason to share the MAC Pool between two clusters? > If not, I recommend that each cluster uses its own MAC Pool, as described > in > the admin guide, 1.5. MAC Address Pools: > https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/ > html-single/administration_guide/index#sect-MAC_Address_Pools > > There are very similar known problems in bug 1446913 and bug 1410440. It's not shared; each cluster (C1 and C2) uses its own MAC Pool (P1 and P2). BZ1410440 looks quite different. BZ1446913 sounds a bit similar, but it's not clear enough to me, so I'm not sure if it's the same thing. This one has a customer ticket attached.
As discussed in yesterday's meeting: https://bugzilla.redhat.com/show_bug.cgi?id=1878930
Verified in build 4.4.3.5-0.5.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Low: Red Hat Virtualization security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5179