Bug 1760170 - If an in-use MAC is held by a VM on a different cluster, the engine does not attempt to get the next free MAC.
Summary: If an in-use MAC is held by a VM on a different cluster, the engine does not attempt to get the next free MAC.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.3
Assignee: eraviv
QA Contact: michal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-10 03:26 UTC by Germano Veit Michel
Modified: 2024-06-13 22:15 UTC
CC List: 12 users

Fixed In Version: ovirt-engine-4.4.3.5
Doc Type: Bug Fix
Doc Text:
Previously, the MAC pool search could fail to find an unused address even though free addresses remained in the pool. As a result, creating a vNIC failed. In this release, the search locates unused addresses in the pool and allocates them before reusing any address that is already in use.
Clone Of:
Environment:
Last Closed: 2020-11-24 13:09:18 UTC
oVirt Team: Network
Target Upstream Version:
Embargoed:




Links
System                            | ID             | Private | Priority | Status    | Summary                                                | Last Updated
----------------------------------+----------------+---------+----------+-----------+--------------------------------------------------------+-------------------------
Red Hat Knowledge Base (Solution) | 4490131        | 0       | None     | None      | None                                                   | 2019-10-10 03:42:55 UTC
Red Hat Product Errata            | RHSA-2020:5179 | 0       | None     | None      | None                                                   | 2020-11-24 13:10:34 UTC
oVirt gerrit                      | 110411         | 0       | master   | ABANDONED | [WIP]core: check mac address in use before allocating  | 2021-01-16 18:55:28 UTC
oVirt gerrit                      | 110612         | 0       | master   | MERGED    | core: mac storage test no macs left                    | 2021-01-16 18:55:28 UTC
oVirt gerrit                      | 111041         | 0       | master   | MERGED    | core: allocate unused macs before used ones            | 2021-01-16 18:55:28 UTC
oVirt gerrit                      | 111043         | 0       | master   | ABANDONED | core: add\update vnic - unplug vnic if mac used        | 2021-01-16 18:56:07 UTC
oVirt gerrit                      | 111044         | 0       | master   | ABANDONED | core: add vms - unplug vnic if mac used                | 2021-01-16 18:55:29 UTC

Description Germano Veit Michel 2019-10-10 03:26:24 UTC
Description of problem:

The engine fails to add a new NIC to a VM if the next MAC in the sequence from the Pool is held by a VM on a different cluster.

Version-Release number of selected component (if applicable):
rhvm-4.3.5.6-0.1.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create 2 MAC Pools with 2 MAC addresses each

engine=# select id,name,from_mac,to_mac from mac_pool_ranges,mac_pools where mac_pool_id = id ;
                  id                  |  name   |     from_mac      |      to_mac       
--------------------------------------+---------+-------------------+-------------------
 16577283-7775-4a18-8865-fecf04bee63f | P1      | 56:ff:ff:ff:ff:00 | 56:ff:ff:ff:ff:01
 12ed354b-2b1b-4f11-91f1-db09ff8b39d4 | P2      | 56:ff:ff:ff:00:00 | 56:ff:ff:ff:00:01


2. Create 2 Clusters, each using one of the MAC Pools

engine=# select name,mac_pool_id from cluster_view ;
  name   |             mac_pool_id              
---------+--------------------------------------
 C2      | 12ed354b-2b1b-4f11-91f1-db09ff8b39d4
 C1      | 16577283-7775-4a18-8865-fecf04bee63f

3. Create VM1 (empty)

4. Add NIC to VM1

engine=# select vm_name,cluster_name,name,mac_addr from vm_interface,vms where vm_interface.vm_guid=vms.vm_guid;
   vm_name    | cluster_name | name  |     mac_addr      
--------------+--------------+-------+-------------------
 VM1          | C1           | nic1  | 56:ff:ff:ff:ff:00

5. Add another NIC to VM1

engine=# select vm_name,cluster_name,name,mac_addr from vm_interface,vms where vm_interface.vm_guid=vms.vm_guid;
   vm_name    | cluster_name | name  |     mac_addr      
--------------+--------------+-------+-------------------
 VM1          | C1           | nic1  | 56:ff:ff:ff:ff:00
 VM1          | C1           | nic2  | 56:ff:ff:ff:ff:01

6. Delete nic2 from VM1.

engine=# select vm_name,cluster_name,name,mac_addr from vm_interface,vms where vm_interface.vm_guid=vms.vm_guid;
   vm_name    | cluster_name | name  |     mac_addr      
--------------+--------------+-------+-------------------
 VM1          | C1           | nic1  | 56:ff:ff:ff:ff:00

~~~
NOTE: the point of steps 4, 5 and 6 is to make the 'next' MAC from Pool P1 be "56:ff:ff:ff:ff:00".
This makes step 9 attempt to take this MAC from P1 while it is still in use by VM1 (see the query sketch right after this note).
~~~
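
As a sanity check, a query along these lines (only a sketch; it assumes mac_addr is stored as lowercase colon-separated text, so a lexical BETWEEN matches these fixed-format ranges) lists every vNIC in the engine, in any cluster, whose MAC falls inside P1's range:

-- vNICs holding a MAC inside P1's range, regardless of cluster
select vms.vm_name, vms.cluster_name, vi.name, vi.mac_addr
  from vm_interface vi
  join vms               on vms.vm_guid = vi.vm_guid
  join mac_pool_ranges r on vi.mac_addr between r.from_mac and r.to_mac
  join mac_pools p       on p.id = r.mac_pool_id
 where p.name = 'P1';

Right after step 6 this should return only nic1 of VM1 holding 56:ff:ff:ff:ff:00, i.e. 56:ff:ff:ff:ff:01 is unused.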

7. Move VM1 to C2

8. Create VM2 on C1 (empty)

engine=# select vm_name,cluster_name from vms;
   vm_name    | cluster_name 
--------------+--------------
 VM1          | C2
 VM2          | C1

9. Add NIC to VM2

2019-10-10 13:13:39,014+10 WARN  [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (default task-14) [93099e4] Validation of action 'ActivateDeactivateVmNic' failed for user admin@internal-authz. Reasons: VAR__ACTION__ACTIVATE,VAR__TYPE__INTERFACE,NETWORK_MAC_ADDRESS_IN_USE,$MacAddress 56:ff:ff:ff:ff:00,$VmName VM1
2019-10-10 13:13:39,047+10 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [93099e4] EVENT_ID: NETWORK_ADD_VM_INTERFACE_FAILED(933), Failed to add Interface nic1 (VirtIO) to VM VM2. (User: admin@internal-authz)

However, 56:ff:ff:ff:ff:01 is still free in P1: the engine failed to allocate the first MAC and did not try to get the next one. This only seems to happen when the VM holding the MAC (VM1) is on a different cluster (C2); otherwise, when the 'next in sequence' is not free, the engine moves on to the next MAC and adding the NIC succeeds. The query sketched below shows the resulting state.
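
To make the cross-cluster aspect visible, a query like this (again only a sketch against the same views, with P1's range hard-coded) shows which cluster, and therefore which pool, the conflicting NIC now belongs to:

-- NICs inside P1's range and the MAC pool of the cluster their VM is in
select vi.mac_addr, vms.vm_name, vms.cluster_name, p.name as cluster_pool
  from vm_interface vi
  join vms            on vms.vm_guid = vi.vm_guid
  join cluster_view c on c.name = vms.cluster_name
  join mac_pools p    on p.id = c.mac_pool_id
 where vi.mac_addr between '56:ff:ff:ff:ff:00' and '56:ff:ff:ff:ff:01';

The only hit is 56:ff:ff:ff:ff:00 on VM1, whose cluster (C2) uses P2, while 56:ff:ff:ff:ff:01 has no owner at all, so P1 still has a free address that step 9 should have received.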

Actual results:
The engine fails to find the next free MAC and the NIC add fails, even though free MACs remain in the pool.

Expected results:
If the next MAC is held by a VM in a different cluster, the engine should try to get another free MAC from the pool (a way to list the free addresses is sketched below).
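
For completeness, the free addresses of P1 can be enumerated directly in the engine database. This is only a sketch: it relies on PostgreSQL's text-to-bit cast trick for hex conversion and assumes MACs are stored as lowercase colon-separated text:

-- expand P1's range into individual MACs and keep those no vNIC uses
with pool as (
  select ('x' || replace(from_mac, ':', ''))::bit(48)::bigint as lo,
         ('x' || replace(to_mac,   ':', ''))::bit(48)::bigint as hi
    from mac_pool_ranges r
    join mac_pools p on p.id = r.mac_pool_id
   where p.name = 'P1'
), candidates as (
  select regexp_replace(lpad(to_hex(n), 12, '0'),
                        '(..)(..)(..)(..)(..)(..)',
                        '\1:\2:\3:\4:\5:\6') as mac
    from pool, generate_series(lo, hi) as n
)
select mac
  from candidates
 where mac not in (select mac_addr from vm_interface where mac_addr is not null);

In the state above this should return only 56:ff:ff:ff:ff:01, which is the address the engine was expected to fall back to.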

Comment 1 Dominik Holler 2019-10-10 07:27:34 UTC
Is there a reason to share the MAC Pool between two clusters?
If not, I recommend that each cluster use its own MAC Pool, as described in
the admin guide, 1.5. MAC Address Pools:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#sect-MAC_Address_Pools

Very similar problems are known in bug 1446913 and bug 1410440.

Comment 2 Germano Veit Michel 2019-10-10 07:38:13 UTC
(In reply to Dominik Holler from comment #1)
> Is there a reason to share the MAC Pool between two clusters?
> If not, I recommend that each cluster uses it's own MAC Pool, like described
> in
> the admin guide 1.5. MAC Address Pools
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/
> html-single/administration_guide/index#sect-MAC_Address_Pools
> 
> There very similar problems known in bug 1446913 and bug 1410440 .

It's not shared; each cluster (C1 and C2) uses its own MAC Pool (P1 and P2, respectively). A quick check is sketched below.
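
A quick check against cluster_view (sketch) confirms that no pool is referenced by more than one cluster:

-- MAC pools referenced by more than one cluster (none expected here)
select mac_pool_id, count(*) as clusters_using_pool
  from cluster_view
 group by mac_pool_id
having count(*) > 1;

This returns no rows in this environment.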

BZ1410440 looks quite different.
BZ1446913 sounds a bit similar, but it is not clear enough to me, so I'm not
sure if it's the same thing.

This one has a customer ticket attached.

Comment 16 Germano Veit Michel 2020-09-14 23:55:32 UTC
As discussed in yesterday's meeting:

https://bugzilla.redhat.com/show_bug.cgi?id=1878930

Comment 19 michal 2020-10-04 11:24:00 UTC
Verified in build 4.4.3.5-0.5.

Comment 24 errata-xmlrpc 2020-11-24 13:09:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: Red Hat Virtualization security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5179

