Bug 1244502 - HE deployment | MAC Vendor ID is not the same for HE-VM as for the rest of the VMs within the host cluster.
Summary: HE deployment | MAC Vendor ID is not the same for HE-VM as for the rest of the VMs within the host cluster.
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: Network
Version: 1.3.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Yedidyah Bar David
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-19 12:32 UTC by Nikolai Sednev
Modified: 2019-04-28 13:44 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-05 08:05:52 UTC
oVirt Team: Integration
Embargoed:
ylavi: ovirt-4.2+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1244521 0 low CLOSED HE-VM is running with another VM with the same MAC addresses at the same host, while regular VMs with the same MACs can'... 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1254961 0 medium CLOSED engine-config&WebUI show different info for MAC pool range 2021-02-22 00:41:40 UTC
oVirt gerrit 81398 0 master POST Change MAC address to match the engine's 2017-09-04 12:34:08 UTC

Internal Links: 1244521 1254961

Description Nikolai Sednev 2015-07-19 12:32:05 UTC
Description of problem:
HE deployment | MAC Vendor ID is not the same for HE-VM as for the rest of the VMs within the host cluster.

During HE deployment the customer is offered to choose a MAC address for the HE-VM; it is possible to choose one randomly from the offered range of 00:16:3E (which appears to be the vendor ID of Xensource, Inc.). After HE deployment completes, the default MAC address range for all VMs within the host cluster appears to be 00:1A:4A (whose vendor ID is Qumranet Inc.).
Please make the default MAC address offered for the HE-VM during its deployment come from the same range that all other VMs use after HE is deployed, so it is aligned to the same vendor ID (Qumranet Inc.).
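For illustration, the vendor ID (OUI) is just the first three octets of the MAC address; a minimal shell sketch for checking it (the MAC values below are examples):

# Extract the vendor ID (OUI, first three octets) of a MAC address:
mac=00:16:3e:12:34:56
echo "$mac" | cut -d: -f1-3    # -> 00:16:3e (Xensource, Inc.)
mac=00:1a:4a:12:34:56
echo "$mac" | cut -d: -f1-3    # -> 00:1a:4a (Qumranet Inc.)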

Version-Release number of selected component (if applicable):
[root@alma03 ~]# rpm -qa ovirt* vdsm* libvirt* qemu* sanlock* gluster*
qemu-kvm-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-geo-replication-3.7.2-3.el7.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
vdsm-jsonrpc-4.17.0-1054.git562e711.el7.noarch
glusterfs-client-xlators-3.7.2-3.el7.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
ovirt-engine-sdk-python-3.6.0.0-0.15.20150625.gitfc90daf.el7.centos.noarch
sanlock-lib-3.2.2-2.el7.x86_64
vdsm-cli-4.17.0-1054.git562e711.el7.noarch
glusterfs-cli-3.7.2-3.el7.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
vdsm-4.17.0-1054.git562e711.el7.noarch
vdsm-xmlrpc-4.17.0-1054.git562e711.el7.noarch
qemu-img-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-server-3.7.2-3.el7.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150623153111.git68138d4.el7.noarch
sanlock-python-3.2.2-2.el7.x86_64
glusterfs-3.7.2-3.el7.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
vdsm-gluster-4.17.0-1054.git562e711.el7.noarch
sanlock-3.2.2-2.el7.x86_64
qemu-kvm-common-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-fuse-3.7.2-3.el7.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
ovirt-vmconsole-1.0.0-0.0.master.20150616120945.gitc1fb2bd.el7.noarch
vdsm-infra-4.17.0-1054.git562e711.el7.noarch
vdsm-yajsonrpc-4.17.0-1054.git562e711.el7.noarch
glusterfs-api-3.7.2-3.el7.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
ovirt-host-deploy-1.4.0-0.0.master.20150617062825.git06a8f80.el7.noarch
glusterfs-libs-3.7.2-3.el7.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150615153650.20150615153645.git5f8c290.el7.noarch
vdsm-python-4.17.0-1054.git562e711.el7.noarch
Red Hat Enterprise Linux Server release 7.1 (Maipo)

How reproducible:
100%

Steps to Reproduce:
1. Deploy HE on a host and get to the default MAC address question asked during HE deployment.
2. Check the default MAC address vendor ID value in the first 3 octets of the MAC address (see the sketch after these steps).
3. Finish deploying HE and create a VM with a default MAC address.
4. Compare the VM's vendor ID with the HE-VM's vendor ID and see that they are not from the same vendor.
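One way to verify the HE-VM's MAC on the host (a sketch; HostedEngine is the libvirt domain name used by hosted-engine setups, and the output line is illustrative):

# Read-only virsh query on the host running the HE-VM:
virsh -r dumpxml HostedEngine | grep 'mac address'
#   <mac address='00:16:3e:xx:xx:xx'/>   <- first three octets = vendor ID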

Actual results:
The default MAC address range offered for the HE-VM appears to be from the Xensource (Citrix) range instead of Qumranet's.

Expected results:
The default vendor ID offered during HE deployment should belong to the Qumranet MAC address range.

Additional info:

Comment 1 Lev Veyde 2015-08-09 13:35:51 UTC
(In reply to Nikolai Sednev from comment #0)
> [...]

The issue here is a bit more complicated: we can't use MAC addresses from our default Qumranet range unless we also implement a mechanism to add/update the chosen MAC in the Engine's DB, to avoid possible address conflicts.

A simpler solution is to use a locally administered MAC range, although that still won't avoid a conflict if the same MAC address is manually entered by the user for one of the VMs.
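For illustration, a locally administered MAC has bit 1 of its first octet set (and bit 0 clear, so it stays unicast); a minimal bash sketch for generating one:

# Generate a random unicast, locally administered MAC address:
first=$(printf '%02x' $(( (RANDOM % 256 & 0xfc) | 0x02 )))
rest=$(for i in 1 2 3 4 5; do printf ':%02x' $((RANDOM % 256)); done)
echo "${first}${rest}"    # e.g. d6:3a:91:0c:7e:12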

Comment 2 Yaniv Lavi 2015-08-10 11:00:50 UTC
Does adding the HE-VM resolve this issue, if we assign a MAC from the Qumranet range?

Comment 3 Sandro Bonazzola 2015-09-04 07:47:27 UTC
Note that MAC range is going to be dropped in engine config.
Martin, is this still needed?

Comment 4 Sandro Bonazzola 2015-10-02 09:16:57 UTC
Martin? Roy?

Comment 5 Roy Golan 2015-10-11 12:15:51 UTC
After importing the HE-VM, the MAC is unique in the system.

As for the Citrix vendor ID - the admin can edit the VM NIC after it is imported and assign a new NIC and MAC.
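For example, a NIC's MAC can be changed through the engine's REST API once the VM is imported; a sketch with hypothetical placeholders (engine FQDN, password, VM_ID, NIC_ID):

# Update the MAC of an imported VM's NIC via the oVirt REST API
# (all IDs and credentials below are placeholders):
curl -k -u 'admin@internal:password' \
     -X PUT -H 'Content-Type: application/xml' \
     -d '<nic><mac address="00:1a:4a:16:88:01"/></nic>' \
     'https://engine.example.com/api/vms/VM_ID/nics/NIC_ID'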

Comment 6 Nikolai Sednev 2015-10-11 15:12:23 UTC
(In reply to Roy Golan from comment #5)
> After importing the HE VM, the MAC is unique in the system.
> 
> As for the CITRIX vendor - admin can try to edit the VM nic after imported
> and assign a new nic and MAC.

I'm not talking about importing the HE-VM, actually; I'm talking about an irrelevant MAC vendor ID being assigned to the HE-VM. It should be from the Qumranet range, not the Citrix one.
So what's the question?

Comment 7 Sandro Bonazzola 2015-10-12 07:59:06 UTC
As far as I understood, the Qumranet range shouldn't be applied to the MAC pool range anymore, according to bug #1254961. So there is no need to restrict the MAC pool range for Hosted Engine either. Right?
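For reference, the engine's configured MAC pool range can be inspected with engine-config on the engine machine (a sketch; the value in the -s example is illustrative):

# Show the currently configured MAC pool range(s):
engine-config -g MacPoolRanges
# Changing it only takes effect after an engine restart, e.g.:
# engine-config -s MacPoolRanges=00:1a:4a:16:88:00-00:1a:4a:16:88:fe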

Comment 8 Dan Kenigsberg 2015-10-12 10:31:18 UTC
We now have multiple user-defined MAC pools. HOWEVER, it would be nicer to default to the Qumranet range (lacking an explicit request from the user).

Comment 9 Sandro Bonazzola 2015-10-12 12:55:46 UTC
(In reply to Dan Kenigsberg from comment #8)
> We now have multiple user-defined mac pools. HOWEVER, it would be nicer to
> default to the Qumranet range (lacking an explicit request from the user)

OK, this makes more sense. Targeting 4.0.

Comment 11 Doron Fediuck 2015-10-12 12:56:49 UTC
I agree.
However, note that the user can always use the MAC he wants, including a Citrix one, so this is not a real issue. That said, we will default to a Qumranet MAC once we conclude the hosted-engine work for importing the VM.

Comment 17 Doron Fediuck 2017-09-05 08:05:52 UTC
The current state is that the HE MAC is out of the engine pool's scope and consistent with previous versions, so the chances of collisions are minimal. If we move to using something from our own range (or a user-defined one), we would indeed need to take care of other scenarios, such as backup/restore and importing VMs that use addresses from our existing pool.

So thanks for trying, but for now I'll close this issue. If you ever have time to handle it properly, feel free to reopen.

