Bug 1459143 - Duplicated MAC on creating VM from template
Status: CLOSED NOTABUG
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 4.1.1.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Martin Mucha
QA Contact: meital avital
Depends On:
Blocks:
Reported: 2017-06-06 08:14 EDT by nicolas
Modified: 2017-06-11 08:12 EDT
CC: 4 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-06-11 05:54:46 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
engine.log (22.06 KB, application/x-gzip)
2017-06-07 07:27 EDT, nicolas

Description nicolas 2017-06-06 08:14:58 EDT
Description of problem:

We have a few MAC address pools and all of them are nearly full. I just created a new VM based on a template, and it seems that the NIC was assigned a MAC address that already existed.

for vm in vms_serv.list():
    nics = conn.follow_link(vm.nics)
    for nic in nics:
        mac = nic.mac
        if mac.address == '00:1a:4a:4d:dd:9a':
            print "VM: %s" % (vm.name)

VM1
VM2

The first of these VMs has been created from scratch and is not based on any template. The second one is the one I created.

None of the MAC pools are allowed to duplicate MACs, so this shouldn't happen.
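
For reference, a self-contained version of the snippet above might look like the following. This is only a minimal sketch using the ovirt-engine-sdk-python 4 API to fill in the `conn` and `vms_serv` objects the snippet relies on; the engine URL, credentials and CA file are placeholders, not values from this report.

import ovirtsdk4 as sdk

# Placeholder connection details; replace with your engine's URL and credentials.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_serv = conn.system_service().vms_service()

target_mac = '00:1a:4a:4d:dd:9a'
for vm in vms_serv.list():
    # vm.nics is a link; follow it to get the actual NIC objects.
    for nic in conn.follow_link(vm.nics):
        if nic.mac is not None and nic.mac.address == target_mac:
            print("VM: %s" % vm.name)

conn.close()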

Version-Release number of selected component (if applicable):

4.1.1.8

How reproducible:

I managed to reproduce the issue twice.

Steps to Reproduce:
1. Create a VM based on a template while the MAC address pools are nearly full.
2. Run the code above, filtering by the MAC of the newly created VM.

Actual results:

There are 2 VMs with the same MAC address.

Expected results:

The MAC should be different, or, if no MAC address is available, the user should be warned that there are no MACs left.
Comment 1 Martin Perina 2017-06-07 05:02:34 EDT
It seems to me more like a bug in the backend code than in the SDK. Dan, could someone from the network team please take a look?
Comment 2 Dan Kenigsberg 2017-06-07 05:07:48 EDT
Please attach your engine.log. I suspect that this is a dup of bug 1435485
Comment 3 nicolas 2017-06-07 07:27 EDT
Created attachment 1285768 [details]
engine.log

Please find attached the requested log. The created machine's name is test2.

Note that the referenced bug mentions it happens with VmPools; I'm not sure whether it also affects standalone machines, as in my case.

Also note that there are still unused MAC addresses in the pool, so duplicates shouldn't be used at all.

After creating the VM I ran the snippet from my original report, and the result is:

In [2]: for vm in vms_serv.list():
   ...:         nics = conn.follow_link(vm.nics)
   ...:         for nic in nics:
   ...:                 mac = nic.mac
   ...:                 if mac.address == '00:1a:4a:4d:dd:a2':
   ...:                         print "VM: %s" % (vm.name)
   ...:             
VM: AS_MM_CD1_FS
VM: test2

(00:1a:4a:4d:dd:a2 being the duplicate MAC)
Comment 4 Alona Kaplan 2017-06-11 04:24:38 EDT
It doesn't seem to be a duplicate of bug 1435485.

Nicolas, can you please attach a dump of your DB (containing the duplicate MACs) to the bug?

Are you sure both of the VMs with the duplicate MAC are using the same MAC pool?
(Do they belong to the same cluster, or to two different clusters that use the same MAC pool?)
Comment 5 nicolas 2017-06-11 04:58:58 EDT
For security reasons I sent the DB dump directly to your mail (I'm not sure whether you can upload it so that only other RH members can see it; if so, feel free to do so).

I realized the two machines belong to two different datacenters (and thus two different clusters as well). As far as I can see, the two clusters use the same MAC pools.

We have 2 MAC pools:

Default:
00:1a:4a:4d:cc:00 - 00:1a:4a:4d:cc:ff
00:1a:4a:4d:dd:00 - 00:1a:4a:4d:dd:ff
00:1a:4a:97:5f:00 - 00:1a:4a:97:5f:ff
00:1a:4a:97:6e:00 - 00:1a:4a:97:6f:ff

Adicional-DOCINT2:
00:1a:4a:97:ee:00 - 00:1a:4a:97:ee:ff
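
For clarity, here is a minimal sketch (plain Python, with the 'Default' ranges above hard-coded) of how one can check whether a given MAC falls inside a pool's ranges by comparing the addresses as integers:

# Ranges of the 'Default' pool listed above.
default_ranges = [
    ('00:1a:4a:4d:cc:00', '00:1a:4a:4d:cc:ff'),
    ('00:1a:4a:4d:dd:00', '00:1a:4a:4d:dd:ff'),
    ('00:1a:4a:97:5f:00', '00:1a:4a:97:5f:ff'),
    ('00:1a:4a:97:6e:00', '00:1a:4a:97:6f:ff'),
]

def mac_to_int(mac):
    # Treat the MAC as a 48-bit hexadecimal number.
    return int(mac.replace(':', ''), 16)

def in_ranges(mac, ranges):
    value = mac_to_int(mac)
    return any(mac_to_int(lo) <= value <= mac_to_int(hi) for lo, hi in ranges)

print(in_ranges('00:1a:4a:4d:dd:a2', default_ranges))  # True: inside 'Default'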
Comment 6 Alona Kaplan 2017-06-11 05:54:46 EDT
Looking at your dump -
VM 'AS_MM_CD1_FS' belongs to cluster 'VDI', which uses MAC pool 'Adicional-DOCINT2'.

VM 'test2' belongs to cluster 'Cluster-Rojo', which uses MAC pool 'Default'.

As the VMs belong to different MAC pools, it is not a bug that they have the same MAC.

The question is how 'AS_MM_CD1_FS' got the MAC '00:1a:4a:4d:dd:a2', which is not in its MAC pool's range.
There are several ways to get into this situation:
* After the VM and its vNICs were created:
  * changing the pool's range.
  * changing the cluster's MAC pool.
  * moving the VM to a new cluster.
* Importing a VM (the original MAC addresses will be used).

I'm closing the bug as not a bug.
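
For anyone wanting to re-check the cluster-to-pool mapping themselves, a rough sketch along these lines might work. It is only an assumption-laden illustration: it reuses the 'conn' object from the earlier snippet and assumes the 4.1 Python SDK exposes the cluster's MAC pool as a followable link with Range objects carrying 'from_'/'to' attributes.

# List each cluster's MAC pool and its ranges (assumed SDK attributes, see above).
clusters_serv = conn.system_service().clusters_service()
for cluster in clusters_serv.list():
    pool = conn.follow_link(cluster.mac_pool)
    ranges = ['%s - %s' % (r.from_, r.to) for r in pool.ranges]
    print('%s -> %s: %s' % (cluster.name, pool.name, ', '.join(ranges)))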
Comment 7 nicolas 2017-06-11 06:34:52 EDT
Please note that the AS_MM_CD1_FS machine was created first.

None of the circumstances you described took place. Note that I created the test2 machine to illustrate this situation.

* None of the VMs' NICs' MACs were set manually; it's the engine that chose them.
* Neither the pool ranges nor the MAC pools have been changed.
* Machines haven't been moved between clusters, nor have they been imported. All of them were created from scratch.

Since none of the MAC pools have the "allow duplicate MACs" option enabled, and they don't have overlapping MAC ranges, IMO this situation shouldn't occur.

Sorry, but I disagree; this still looks like a bug to me.
Comment 8 Alona Kaplan 2017-06-11 08:12:48 EDT
The fact that you have the same MAC in two different MAC pools is not a bug.
test2's MAC is perfectly OK.

The fact that you have a MAC that is outside the range of the pool it belongs to may or may not be a bug.
We have some valid scenarios that may lead to this situation (as described previously).

If you think the MAC is outside the range due to a bug, please open a new bug describing the scenario (if you manage to reproduce it, that would be really helpful).
