Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1485688

Summary: [downstream clone - 4.1.7] [Pool] VMs are still created with duplicate MAC addresses after 4.0.7 upgrade
Product: Red Hat Enterprise Virtualization Manager
Reporter: rhev-integ
Component: ovirt-engine
Assignee: Martin Mucha <mmucha>
Status: CLOSED ERRATA
QA Contact: Michael Burman <mburman>
Severity: urgent
Docs Contact:
Priority: high
Version: 4.0.7
CC: alkaplan, bgraveno, bkorren, danken, gveitmic, lsurette, lveyde, mburman, mkalinin, mmucha, mtessun, rbalakri, Rhev-m-bugs, rmcswain, srevivo, ykaul, ylavi
Target Milestone: ovirt-4.1.7
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ovirt-engine-4.1.7.2
Doc Type: Bug Fix
Doc Text: This update fixes a Manager issue that allowed duplicate MAC addresses even when duplicates are disallowed.
Story Points: ---
Clone Of: 1435485
Environment:
Last Closed: 2017-11-07 17:27:54 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1435485
Bug Blocks:

Description rhev-integ 2017-08-27 08:07:04 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1435485 +++
======================================================================

Description of problem:

We have a report of the following BZ not being fixed by its 4.0.7 clone:
BZ1400043 - [Vm Pool] VMs are created with duplicate MAC addresses

The first try of the new version (4.0.7) resulted in 12 VMs with duplicate MACs.

Version-Release number of selected component (if applicable):
rhevm-4.0.7.4-0.1.el7ev.noarch

(Originally by Germano Veit Michel)

Comment 7 rhev-integ 2017-08-27 08:07:40 UTC
Upgrade to 4.0.7 was from 4.0.6

(Originally by Germano Veit Michel)

Comment 8 rhev-integ 2017-08-27 08:07:47 UTC
One interesting thing is that they have 5-6 VM Pools. Could this increase the probability of hitting the bug? It seems very easy to hit in that environment.

(Originally by Germano Veit Michel)

Comment 10 rhev-integ 2017-08-27 08:07:59 UTC
A possible reproduction of the bug -

Prerequisite: make sure the MAC pool used by the DC (data center) doesn't allow duplicates.

1. Create a template with one vnic ('tmp1').
2. Create VmPool ('pool') from 'tmp1' with 2 vms ('pool-1' and 'pool-2'). Set the number of prestarted vms as 2.
3. Wait for the vms to be up.
4. Unplug the nic from vm 'pool-1' (let's call its current mac address 'x'). Change its mac address (new mac 'y'). Plug it back in.
5. Add a vnic to vm 'pool-2' and set its mac address to 'x' (the old mac address of the vnic we unplugged and re-plugged).
6. Stop vm 'pool-1'.

Result - Both vms 'pool-1' and 'pool-2' have vnic with 'x' mac.


Explanation of what causes the bug - when stopping a vm that was started by the pool, the original snapshot (taken before the run) is restored. The macs of the vnics in the original snapshot are added to the mac pool using 'forceAdd', which ignores whether the mac is already in the pool.
So if a mac from the original snapshot was meanwhile taken by another vm, we end up with duplicate macs.

(Originally by Alona Kaplan)
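The mechanism described above can be sketched in a few lines. This is a hypothetical Python model, not the actual ovirt-engine (Java) code; the names MacPool, add, force_add and the sample MAC address are invented for illustration:

```python
from collections import Counter

# Hypothetical model of the mac pool behavior described above; the
# names here are illustrative, not the real ovirt-engine API.
class MacPool:
    def __init__(self, allow_duplicates=False):
        self.allow_duplicates = allow_duplicates
        self.in_use = Counter()  # mac -> number of vnics holding it

    def add(self, mac):
        """Normal allocation path: rejects a mac that is already taken."""
        if self.in_use[mac] and not self.allow_duplicates:
            raise ValueError(f"mac {mac} already in use")
        self.in_use[mac] += 1

    def force_add(self, mac):
        """Snapshot-restore path: registers the mac unconditionally,
        ignoring any existing allocation -- the root of this bug."""
        self.in_use[mac] += 1


pool = MacPool(allow_duplicates=False)

# Steps 4-5 above: 'pool-1' gives up mac 'x', then 'pool-2' takes it.
x = "00:1a:4a:16:01:51"   # made-up mac address
pool.add(x)               # 'x' is now owned by pool-2

# Step 6: stopping 'pool-1' restores its pre-run snapshot, whose vnic
# still carries 'x'; forceAdd does not notice the conflict.
pool.force_add(x)

print(pool.in_use[x])     # 2 -> two vnics hold the same mac
```

Had the restore path gone through the normal add(), the second registration would have raised an error instead of silently producing the duplicate seen in the result above.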

Comment 11 rhev-integ 2017-08-27 08:08:05 UTC
Latest logs after a new test (with the snapshot related errors fixed) do not show the problem anymore.

I believe we were hitting the scenario Alona described: the MAC pool was close to being exhausted, so the chances of another VM taking the MAC of the original snapshot were quite high.

(Originally by Germano Veit Michel)
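The near-exhaustion effect can be made concrete: if N MACs are free when the pooled VM starts and other VMs draw k of them at random before it stops, the specific freed MAC is reassigned with probability k/N. A small sketch with made-up numbers (the function name and pool sizes are hypothetical, not taken from this environment):

```python
# Hypothetical illustration: probability that one specific freed mac is
# re-allocated before the pooled vm stops, as the pool fills up.
def reuse_probability(free_macs, allocations):
    """Chance that 'allocations' random draws without replacement from
    'free_macs' distinct addresses include one specific address."""
    p_missed = 1.0
    for i in range(allocations):
        p_missed *= 1 - 1 / (free_macs - i)
    return 1 - p_missed

# Roomy pool: 1000 free macs, 10 allocations while the vm runs.
print(round(reuse_probability(1000, 10), 3))  # 0.01
# Nearly exhausted pool: 15 free macs, same 10 allocations.
print(round(reuse_probability(15, 10), 3))    # 0.667
```

The product telescopes to (N - k)/N, so the reuse probability is exactly k/N: a pool with only a handful of free addresses makes the collision in comment 10 almost inevitable.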

Comment 26 Michael Burman 2017-10-01 06:56:20 UTC
Verified on -  4.1.7.2-0.1.el7

Summary and results:

Stateless scenarios - PASS
Stateful/snapshot scenarios - PASS
Regression - all new regression bugs caused by the fix for this report have been verified

Comment 28 errata-xmlrpc 2017-11-07 17:27:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3138