Bug 1491155

Summary: [Text] Report owner(s) of colliding MAC address if already in use
Product: [oVirt] ovirt-engine
Reporter: Michael Burman <mburman>
Component: BLL.Network
Assignee: Ales Musil <amusil>
Status: CLOSED CURRENTRELEASE
QA Contact: Michael Burman <mburman>
Severity: medium
Priority: low
Docs Contact:
Version: 4.2.0
CC: bugs, danken, lveyde
Target Milestone: ovirt-4.2.4
Flags: rule-engine: ovirt-4.2+
       rule-engine: ovirt-4.3+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ovirt-engine-4.2.4.1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-26 08:39:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Michael Burman 2017-09-13 08:08:25 UTC
Description of problem:
[Text] - Improve the new error message text shown when a MAC address is already in use.

We have added new error message text for the case of trying to assign a MAC address that is already in use - 

"Error while executing action: 

VM11:
MAC Address 00:00:00:00:00:26 is already in use. It can be used either by some nic of VM or by nic in one of snapshots in system."

The text should be fixed and improved. Currently it is confusing: a user/admin may think we are suggesting, rather than warning, that the MAC is already used by another VM or snapshot.

1) MAC Address 00:00:00:00:00:26 is already in use. It is in use by VM <name> or by snapshot <name> in the cluster.
2) The VM name/snapshot name must be specified in the error message, so the admin/user knows which VM or snapshot is using this MAC address.

Version-Release number of selected component (if applicable):
4.2.0-0.0.master.20170912134930.gitc81ca84.el7.centos

Comment 1 Martin Mucha 2017-09-18 08:12:09 UTC
ad 1) text replacement "can be" -> "is" can be done.

ad 2) the message is produced in two places:

2.1 - when adding/updating a VM NIC. In that case the MAC pool is consulted to check whether the MAC is free; if it is not, the message is produced. But the original design does not record who owns a given MAC, only that it is owned by someone. We can easily query all VmNics in the system, but we would also have to deserialize all snapshots, and we don't want to do that because it would be slow. So either we live with slowness of unknown magnitude, or we refactor the MAC pools so that they also know who owns which MAC. I believe we want neither.

2.2 - when plugging a NIC, the following method is called: org.ovirt.engine.core.bll.network.VmInterfaceManager#existsPluggedInterfaceWithSameMac

All plugged NICs with the same MAC are scanned in the DB (not in snapshots), so we can name the specific VM(s)/NIC(s) in the error message; see the sketch below.
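
For illustration only, here is a minimal, self-contained sketch of the 2.2 approach. The Nic record and the helper names are made-up stand-ins for the DB query behind VmInterfaceManager#existsPluggedInterfaceWithSameMac, not the actual ovirt-engine types:

    // Illustrative sketch, not ovirt-engine code.
    import java.util.List;
    import java.util.stream.Collectors;

    class MacCollisionMessageSketch {

        // Hypothetical stand-in for a plugged vNIC row read from the DB.
        record Nic(String vmName, String macAddress, boolean plugged) {}

        // Scan plugged NICs only (snapshots are not consulted here, as in 2.2)
        // and collect the names of the VMs that already use the MAC.
        static List<String> ownersOfMac(List<Nic> pluggedNics, String mac) {
            return pluggedNics.stream()
                    .filter(Nic::plugged)
                    .filter(nic -> nic.macAddress().equalsIgnoreCase(mac))
                    .map(Nic::vmName)
                    .distinct()
                    .collect(Collectors.toList());
        }

        // Build the improved error text requested in the bug: name the owner(s)
        // instead of only saying the MAC is used "by some nic of VM or by nic
        // in one of snapshots in system".
        static String collisionMessage(String mac, List<String> owners) {
            return "MAC Address " + mac + " is already in use by VM or snapshot: "
                    + String.join(", ", owners) + ".";
        }

        public static void main(String[] args) {
            List<Nic> nics = List.of(
                    new Nic("V2", "00:00:00:00:00:53", true),
                    new Nic("VM11", "00:00:00:00:00:26", false));
            System.out.println(collisionMessage("00:00:00:00:00:53",
                    ownersOfMac(nics, "00:00:00:00:00:53")));
            // prints: MAC Address 00:00:00:00:00:53 is already in use by VM or snapshot: V2.
        }
    }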

---
Dan, please decide if you want anything from 2.1.

Comment 2 Dan Kenigsberg 2017-09-18 20:17:15 UTC
We surely do NOT want to refactor mac pools.

On the other hand, is it really too slow to scan for the owner of a MAC, given that this happens only in the very rare case that someone explicitly asked for a colliding MAC? We already do such a scan on Engine startup, so it should not be too hard to estimate it.
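
To make that concrete, a small sketch of the idea, with hypothetical allocateMac()/findOwners() interfaces (not real ovirt-engine APIs): the potentially slow owner scan runs only after allocation has already failed, i.e. only in the rare collision case, so the happy path pays nothing:

    // Illustrative sketch, not ovirt-engine code.
    import java.util.List;

    class DeferredOwnerLookupSketch {

        static class MacAlreadyInUseException extends RuntimeException {
            MacAlreadyInUseException(String msg) { super(msg); }
        }

        interface MacPool {
            boolean allocateMac(String mac);      // fast: pool bookkeeping only
        }

        interface OwnerScanner {
            List<String> findOwners(String mac);  // slow: VMs + snapshot OVFs
        }

        static void assignMac(MacPool pool, OwnerScanner scanner, String mac) {
            if (pool.allocateMac(mac)) {
                return;                           // common case: no extra work
            }
            // Rare case: collision. Only now pay for the full owner scan.
            List<String> owners = scanner.findOwners(mac);
            throw new MacAlreadyInUseException(
                    "MAC Address " + mac + " is already in use by VM or snapshot: "
                            + String.join(", ", owners) + ".");
        }
    }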

Comment 3 Martin Mucha 2017-09-19 14:37:23 UTC

For one macPoolId, we have to scan all clusters using it. For each such cluster we read all VMs from the DB, and for each VM we obtain all its NICs. That is the easy part. For every running stateless VM, we also have to fetch its original snapshot, which is stored as OVF in the DB (see the sketch below).

For a time estimate, I need to know the typical number of clusters using one macPool, the number of VMs in them, and the number of those VMs running stateless.
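
For illustration, a rough sketch of that scan with made-up DAO interfaces (the real ovirt-engine classes differ); the OVF deserialization in the last step is the expensive part:

    // Illustrative sketch, not ovirt-engine code.
    import java.util.ArrayList;
    import java.util.List;

    class MacOwnerScanSketch {

        interface ClusterDao { List<String> clustersUsingMacPool(String macPoolId); }
        interface VmDao      { List<String> vmsInCluster(String clusterId);
                               boolean isRunningStateless(String vmId); }
        interface NicDao     { List<String> macsOfVm(String vmId); }
        interface OvfReader  { List<String> macsInOriginalSnapshot(String vmId); }

        static List<String> findOwners(String macPoolId, String mac,
                ClusterDao clusters, VmDao vms, NicDao nics, OvfReader ovf) {
            List<String> owners = new ArrayList<>();
            for (String cluster : clusters.clustersUsingMacPool(macPoolId)) {
                for (String vm : vms.vmsInCluster(cluster)) {
                    // Easy part: current NICs straight from the DB.
                    if (nics.macsOfVm(vm).contains(mac)) {
                        owners.add(vm);
                    }
                    // Expensive part: running stateless VMs keep their original
                    // config as OVF in the DB, so that snapshot has to be
                    // deserialized and checked too.
                    if (vms.isRunningStateless(vm)
                            && ovf.macsInOriginalSnapshot(vm).contains(mac)) {
                        owners.add(vm + " (snapshot)");
                    }
                }
            }
            return owners;
        }
    }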

Comment 4 Dan Kenigsberg 2017-09-19 20:18:56 UTC
The time seems linear in the number of VMs and the number of snapshots. Can you provide an estimate for 100 VMs with one snapshot each, each with a single vNIC?
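
As a back-of-the-envelope model of that linearity (an assumption to make the question concrete, not a measurement): total cost is roughly V * t_nic_query + S * t_ovf_parse, where V is the number of VMs, S the number of snapshot OVFs that actually need deserializing, and the t values are per-item costs. For 100 VMs with one snapshot and one vNIC each, that is at most 100 NIC lookups plus 100 OVF deserializations (fewer OVFs if, as comment 3 says, only running stateless VMs need their snapshot parsed), with the OVF parsing expected to dominate.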

Comment 5 Michael Burman 2018-06-03 13:27:35 UTC
Verified on - 4.2.4.1-0.1.el7

MAC Address 00:00:00:00:00:53 is already in use by VM or snapshot: V2.

Comment 6 Sandro Bonazzola 2018-06-26 08:39:17 UTC
This bugzilla is included in the oVirt 4.2.4 release, published on June 26th 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.