Bug 1244521 - HE-VM is running with another VM with the same MAC address on the same host, while regular VMs with the same MACs can't run on the same host and are stopped by libvirt.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.6.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: ovirt-3.6.2
Target Release: 3.6.2.6
Assignee: Martin Mucha
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On: 1269768 1269846
Blocks:
 
Reported: 2015-07-19 15:38 UTC by Nikolai Sednev
Modified: 2016-02-18 11:11 UTC
CC List: 20 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-02-18 11:11:31 UTC
oVirt Team: Network
Embargoed:
rule-engine: ovirt-3.6.z+
ylavi: planning_ack+
rule-engine: devel_ack+
rule-engine: testing_ack+


Attachments
engine logs (253.01 KB, application/x-gzip), 2015-07-19 15:38 UTC, Nikolai Sednev
host's logs (3.90 MB, application/x-gzip), 2015-07-19 15:41 UTC, Nikolai Sednev
engines sosreport (7.17 MB, application/x-xz), 2015-11-29 19:46 UTC, Nikolai Sednev
hosts sosreport (9.78 MB, application/x-xz), 2015-11-29 19:48 UTC, Nikolai Sednev
sosreport from HE-host (8.15 MB, application/x-xz), 2016-01-13 18:11 UTC, Nikolai Sednev
sosreport from HE-VM (7.34 MB, application/x-xz), 2016-01-13 18:16 UTC, Nikolai Sednev
sosreport from the engine (7.13 MB, application/x-xz), 2016-01-14 17:18 UTC, Nikolai Sednev
sosreport from HE-host (7.22 MB, application/x-xz), 2016-01-14 17:22 UTC, Nikolai Sednev


Links
Red Hat Bugzilla 1244502 (Priority: medium, Status: CLOSED): HE deployment | MAC Vendor ID is not the same for HE-VM as for the rest of the VMs within the host cluster. Last updated: 2021-02-22 00:41:40 UTC

Internal Links: 1244502

Description Nikolai Sednev 2015-07-19 15:38:29 UTC
Created attachment 1053628 [details]
engine logs

Description of problem:
HE-VM is running with another VM with the same MAC address on the same host, while regular VMs with the same MACs can't run on the same host and are stopped by libvirt.

2015-07-19 11:55:23.998+0000: 1383: error : virNetDevTapCreateInBridgePort:568 : unsupported configuration: Unable to use MAC address starting with reserved value 0xFE - 'fe:16:3e:7c:cc:c0' -
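
For context: libvirt rejects guest MACs beginning with 0xFE because the host-side tap device carries the guest MAC with its first octet replaced by 0xFE, so a guest MAC that itself starts with fe: would collide with that scheme. This matches the dumps later in this bug, where the domain XML shows 00:16:3e:7b:b8:53 while ifconfig shows fe:16:3e:7b:b8:53 on the vnet. A minimal shell sketch of the transformation (the MAC value is illustrative):

mac_guest="00:16:3e:7c:cc:cc"        # MAC the engine allocated to the VM
mac_tap="fe:${mac_guest#*:}"         # what the host-side vnetX device shows
echo "guest NIC: ${mac_guest}  tap device: ${mac_tap}"
case "${mac_guest}" in
  fe:*) echo "libvirt would reject this: reserved 0xFE prefix" ;;
esac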


Version-Release number of selected component (if applicable):
Host's side:
[root@alma02 ~]# rpm -qa vdsm* sanlock* qemu* mom* libvirt* ovirt* gluster*
glusterfs-fuse-3.7.2-3.el7.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
vdsm-python-4.17.0-1054.git562e711.el7.noarch
glusterfs-api-3.7.2-3.el7.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
vdsm-gluster-4.17.0-1054.git562e711.el7.noarch
sanlock-3.2.2-2.el7.x86_64
qemu-kvm-common-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-libs-3.7.2-3.el7.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
ovirt-vmconsole-host-1.0.0-0.0.master.20150616120945.gitc1fb2bd.el7.noarch
sanlock-lib-3.2.2-2.el7.x86_64
vdsm-cli-4.17.0-1054.git562e711.el7.noarch
glusterfs-3.7.2-3.el7.x86_64
qemu-kvm-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-geo-replication-3.7.2-3.el7.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
ovirt-vmconsole-1.0.0-0.0.master.20150616120945.gitc1fb2bd.el7.noarch
mom-0.4.5-2.el7.noarch
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150623153111.git68138d4.el7.noarch
vdsm-xmlrpc-4.17.0-1054.git562e711.el7.noarch
ovirt-engine-sdk-python-3.6.0.0-0.15.20150625.gitfc90daf.el7.centos.noarch
qemu-kvm-tools-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-client-xlators-3.7.2-3.el7.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
vdsm-4.17.0-1054.git562e711.el7.noarch
sanlock-python-3.2.2-2.el7.x86_64
glusterfs-cli-3.7.2-3.el7.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
ovirt-host-deploy-1.4.0-0.0.master.20150617062825.git06a8f80.el7.noarch
vdsm-jsonrpc-4.17.0-1054.git562e711.el7.noarch
qemu-img-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-server-3.7.2-3.el7.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150615153650.20150615153645.git5f8c290.el7.noarch
vdsm-infra-4.17.0-1054.git562e711.el7.noarch
vdsm-yajsonrpc-4.17.0-1054.git562e711.el7.noarch
Red Hat Enterprise Linux Server release 7.1 (Maipo)


Engine side:
[root@nsednev-he-3 ~]# rpm -qa ovirt*
ovirt-host-deploy-java-1.4.0-0.0.master.20150617062845.git06a8f80.el6.noarch
ovirt-engine-restapi-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-userportal-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-iso-uploader-3.6.0-0.0.master.20150618073905.gitea4158a.el6.noarch
ovirt-engine-wildfly-overlay-001-2.el6.noarch
ovirt-engine-lib-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-setup-base-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-dbscripts-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-tools-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-webadmin-portal-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-backend-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-extensions-api-impl-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-extension-aaa-jdbc-1.0.0-0.0.master.20150616142746.git03b5d8b.el6.noarch
ovirt-image-uploader-3.6.0-0.0.master.20150128151752.git3f60704.el6.noarch
ovirt-host-deploy-1.4.0-0.0.master.20150617062845.git06a8f80.el6.noarch
ovirt-engine-wildfly-8.2.0-1.el6.x86_64
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-websocket-proxy-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-setup-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-engine-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
ovirt-release-master-001-0.9.master.noarch
ovirt-engine-sdk-python-3.6.0.0-0.15.20150625.gitfc90daf.el6.noarch
ovirt-engine-cli-3.6.0.0-0.3.20150623.git53408f5.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch
Red Hat Enterprise Linux Server release 6.7 (Santiago)

How reproducible:
100%

Steps to Reproduce:
1. Deploy HE 3.6 on one RHEL 7.1 host, using NFS for the HE SD.
2. After HE is deployed, create an NFS SD for regular VMs, bringing the default SD and data center online.
3. Go to the data center tab and press the Edit button.
4. Go to the MAC Address Pool sub-tab and create a new pool whose starting and ending MAC are both the HE-VM's MAC; i.e., if your HE-VM's MAC was 00:16:3E:7C:CC:CC, fill that MAC into both the "From" and "To" fields of the MAC pool, then press OK.
5. Go to Virtual Machines and create a regular VM named "DUPHOSTEDENGINEVM"; give it a VirtIO NIC and set its boot device to network instead of hard disk (this just shortens the process; you can also give it a real disk volume from NFS).
6. Start the VM.
7. You should now see two VMs with the same MAC address running on the same host, one being the HE-VM and the other "DUPHOSTEDENGINEVM" (a one-liner to spot this appears after step 12):
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7c:cccc  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7c:cc:cc  txqueuelen 500  (Ethernet)
        RX packets 17363  bytes 10569852 (10.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 164166  bytes 45939585 (43.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7c:cccc  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7c:cc:cc  txqueuelen 500  (Ethernet)
        RX packets 2741  bytes 141290 (137.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27377  bytes 6233614 (5.9 MiB)
        TX errors 0  dropped 40328 overruns 0  carrier 0  collisions 0
8. Now create another MAC address pool in the data center tab with any MAC address range.
9. Create another VM named "testdupvm", give it a VirtIO NIC with a boot disk (or network boot), then start the VM.
10. Check which MAC address "testdupvm" received; for example, fe:16:3e:7c:cc:c0.
11. Go to the data center and create another MAC address pool starting and ending with the same MAC (fe:16:3e:7c:cc:c0) that "testdupvm" already acquired.
12. Create and run "testdupvm2"; it will try to take the same MAC as "testdupvm" (fe:16:3e:7c:cc:c0) but fail with the error described above.
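
A quick way to spot the duplication from step 7 is to list every vnet tap with its MAC and keep only duplicated MACs; a minimal sketch (run on the host; GNU coreutils assumed):

for dev in /sys/class/net/vnet*; do
    printf '%s %s\n' "$(cat "$dev/address")" "$(basename "$dev")"
done | sort | uniq -D -w17    # -w17 compares only the 17-char MAC field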

Actual results:
For some reason the HE-VM runs on the same host as another VM while both share the same MAC address, and this is not prevented at all by the engine or any backend component; for regular VMs the same is not possible, yet the duplicate MAC is recognized too late, only at VM start and only by libvirt.

Expected results:
Running VMs with the same MAC should be prevented much earlier, not by libvirt, and the prevention should behave the same for the HE-VM as for regular VMs.

Additional info:
Logs from host and engine attached.

Comment 1 Nikolai Sednev 2015-07-19 15:41:46 UTC
Created attachment 1053629 [details]
host's logs

Comment 2 Barak 2015-07-22 09:43:23 UTC
It looks like we need to add validation when adding a MAC pool - to handle the use case like HE & custom MAC added before the MAC pool.
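
A rough sketch of that validation, expressed in shell rather than the engine's actual Java code (the helper and the hard-coded MACs are illustrative; the real in-use list would come from the engine's database):

# Illustrative only: refuse a new pool range that contains a MAC
# already held by a running VM (here, the HE VM's MAC).
mac_to_int() { printf '%d' "0x${1//:/}"; }

pool_from=$(mac_to_int "00:16:3e:7b:b8:53")
pool_to=$(mac_to_int "00:16:3e:7b:b8:53")
for mac in "00:16:3e:7b:b8:53"; do
    m=$(mac_to_int "$mac")
    if [ "$m" -ge "$pool_from" ] && [ "$m" -le "$pool_to" ]; then
        echo "refusing pool: $mac is already in use inside the range" >&2
    fi
done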

Comment 3 Nikolai Sednev 2015-07-22 11:46:02 UTC
(In reply to Barak from comment #2)
> It looks like we need to add validation when adding a MAC pool - to handle
> the use case like HE & custom MAC added before the MAC pool.

Validation could also be made against MACs not owned by VMs, to get rid of MAC duplication between VMs and other hosts on the network. This might be done by issuing a GARP (gratuitous ARP) for each desired MAC before assigning it to a VM, to verify that no duplicate already exists.
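
A crude host-local approximation of that idea is to ask the bridge's forwarding database whether a candidate MAC has already been learned before assigning it (this misses silent hosts, which is exactly what the GARP probe would cover):

# Check whether the bridge already knows the candidate MAC.
mac="00:16:3e:7c:cc:cc"
if bridge fdb show br ovirtmgmt | grep -qi "^${mac}"; then
    echo "candidate MAC ${mac} already seen on ovirtmgmt" >&2
fi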

Comment 4 Yaniv Lavi 2015-10-11 14:01:26 UTC
According to:
https://bugzilla.redhat.com/show_bug.cgi?id=1244502#c5

This should be resolved in 3.6. Can you test and close if it is resolved?
Of course, this only holds when allowing duplicates is set to false.

Comment 5 Nikolai Sednev 2015-10-25 16:16:00 UTC
Not fixed; tested on rhevm-3.6.0.1-0.1.el6.noarch running on RHEL 6.7, which was managing a pair of RHEL 7.2 hosts with these components:
ovirt-vmconsole-1.0.0-0.0.6.master.el7ev.noarch
ovirt-release36-001-0.5.beta.noarch
mom-0.5.1-2.el7.noarch
ovirt-hosted-engine-setup-1.3.1-0.0.master.20151020145724.git565c3f9.el7.centos.noarch
ovirt-setup-lib-1.0.0-1.20150922141000.git147e275.el7.centos.noarch
ovirt-host-deploy-1.5.0-0.0.master.20151015221110.gitc2abfed.el7.noarch
ovirt-release36-snapshot-001-0.5.beta.noarch
qemu-kvm-rhev-2.3.0-31.el7.x86_64
ovirt-hosted-engine-ha-1.3.1-1.20151016090950.git5ea5093.el7.noarch
vdsm-4.17.10-5.el7ev.noarch
sanlock-3.2.4-1.el7.x86_64
ovirt-engine-sdk-python-3.6.0.4-0.1.20151014.git117764a.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-0.0.6.master.el7ev.noarch
libvirt-client-1.2.17-12.el7.x86_64

Comment 6 Nikolai Sednev 2015-10-25 16:31:01 UTC
Same behavior reproduced also in rhevm-3.6.0.2-0.1.el6.noarch.

Comment 7 Sandro Bonazzola 2015-10-26 12:39:04 UTC
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015.
Please review this bug and, if it is not a blocker, please postpone it to a later release.
All bugs not postponed by the GA release will be automatically re-targeted to:

- 3.6.1 if severity >= high
- 4.0 if severity < high

Comment 8 Dan Kenigsberg 2015-11-10 09:09:21 UTC
We suspect that this bug is caused by bug 1269846: the new MAC pool added in Nikolai's step 4 is not populated with the HE MAC address.

However Sandro, can you point Martin to the point in hosted-engine-setup code where it injects the HE-allocated MAC address into Engine's database?
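
One way to check whether the HE MAC ever reached the engine's database, assuming the stock schema where vm_interface holds the per-NIC MAC addresses:

# Assumes the default 'engine' database name.
sudo -u postgres psql engine -c "SELECT vm_guid, mac_addr FROM vm_interface;"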

Comment 9 Sandro Bonazzola 2015-11-16 15:16:45 UTC
(In reply to Dan Kenigsberg from comment #8)
> We suspect that this bug is caused by bug 1269846: the new MAC pool added in
> Nikolai's step 4 is not populated with the HE MAC address.
> 
> However Sandro, can you point Martin to the point in hosted-engine-setup
> code where it injects the HE-allocated MAC address into Engine's database?

Moving the needinfo on Roy. Hosted Engine setup is not injecting anything into the DB.

Comment 10 Roy Golan 2015-11-17 13:33:54 UTC
(In reply to Sandro Bonazzola from comment #9)
> (In reply to Dan Kenigsberg from comment #8)
> > We suspect that this bug is caused by bug 1269846: the new MAC pool added in
> > Nikolai's step 4 is not populated with the HE MAC address.
> > 
> > However Sandro, can you point Martin to the point in hosted-engine-setup
> > code where it injects the HE-allocated MAC address into Engine's database?
> 
> Moving the needinfo on Roy. Hosted Engine setup is not injecting anything
> into the DB.

In 3.6.1 we are importing the engine Vm into the DB, including the network devices.
This should reserve the MAC and prevent it from being flapped by new Vms.

Comment 11 Dan Kenigsberg 2015-11-17 16:15:43 UTC
(In reply to Roy Golan from comment #10)

> In 3.6.1 we are importing the engine Vm into the DB, including the network
> devices.
> This should reserve the MAC and prevent it from being flapped by new Vms.

Nikolai, would you be kind to re-test this in a 3.6.1 build?

Comment 12 Nikolai Sednev 2015-11-18 09:16:19 UTC
(In reply to Dan Kenigsberg from comment #11)
> (In reply to Roy Golan from comment #10)
> 
> > In 3.6.1 we are importing the engine Vm into the DB, including the network
> > devices.
> > This should reserve the MAC and prevent it from being flapped by new Vms.
> 
> Nikolai, would you be kind to re-test this in a 3.6.1 build?

Sure, once we receive Roy's fixed component; it is currently still not ON_QA.
Please see bug 1269768.

Comment 13 Nikolai Sednev 2015-11-29 19:43:53 UTC
Reproduced on the latest build with these components, on a cleanly installed environment.

Host:
ovirt-vmconsole-host-1.0.1-0.0.master.20151105234454.git3e5d52e.el7.noarch
ovirt-release36-002-2.noarch
sanlock-3.2.4-1.el7.x86_64
ovirt-setup-lib-1.0.1-0.0.master.20151126203321.git2da7763.el7.centos.noarch
ovirt-engine-sdk-python-3.6.1.1-0.1.20151127.git2400b22.el7.centos.noarch
vdsm-4.17.11-7.gitc0752ac.el7.noarch
ovirt-vmconsole-1.0.1-0.0.master.20151105234454.git3e5d52e.el7.noarch
ovirt-release36-snapshot-002-2.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.3.x86_64
mom-0.5.1-2.el7.noarch
ovirt-hosted-engine-ha-1.3.3.1-0.0.master.20151125134310.20151125134307.git2718494.el7.noarch
ovirt-hosted-engine-setup-1.3.1.1-0.0.master.20151124151641.git8763f36.el7.centos.noarch
ovirt-host-deploy-1.4.2-0.0.master.20151122153544.gitfc808fc.el7.noarch
libvirt-client-1.2.17-13.el7.x86_64
Linux version 3.10.0-327.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu Oct 29 17:29:29 EDT 2015

Engine:
ovirt-host-deploy-java-1.4.1-1.el6ev.noarch
ovirt-vmconsole-1.0.0-1.el6ev.noarch
ovirt-host-deploy-1.4.1-1.el6ev.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6ev.noarch
rhevm-3.6.1-0.2.el6.noarch
ovirt-engine-extension-aaa-jdbc-1.0.3-1.el6ev.noarch

Vnets created and seen on host:
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:b8:53  txqueuelen 500  (Ethernet)
        RX packets 55296  bytes 22536417 (21.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 144641  bytes 34122620 (32.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:b8:53  txqueuelen 500  (Ethernet)                                                                                       
        RX packets 355  bytes 19602 (19.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1507  bytes 436306 (426.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Sosreports from host and engine attached.

Comment 14 Nikolai Sednev 2015-11-29 19:46:18 UTC
Created attachment 1100265 [details]
engines sosreport

Comment 15 Nikolai Sednev 2015-11-29 19:48:53 UTC
Created attachment 1100266 [details]
hosts sosreport

Comment 16 Dan Kenigsberg 2015-11-30 14:38:17 UTC
Thank you Nikolai.

Can you extract the output of brctl show with vnet0 and vnet1? Are they connected to the same bridge?
Can you extract the domxml of the two VMs?

Comment 17 Nikolai Sednev 2015-11-30 16:28:04 UTC
(In reply to Dan Kenigsberg from comment #16)
> Thank you Nikolai.
> 
> Can you extract the output of brctl show with vnet0 and vnet1? Are they
> connected to the same bridge?
> Can you extract the domxml of the two VMs?

# ifconfig
enp3s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a236:9fff:fe3a:c4f0  prefixlen 64  scopeid 0x20<link>
        ether a0:36:9f:3a:c4:f0  txqueuelen 1000  (Ethernet)
        RX packets 75187189  bytes 100293511937 (93.4 GiB)
        RX errors 0  dropped 3780  overruns 0  frame 0
        TX packets 49218159  bytes 22589584263 (21.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1605156  bytes 1006338061 (959.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1605156  bytes 1006338061 (959.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.35.117.24  netmask 255.255.255.0  broadcast 10.35.117.255
        inet6 fe80::a236:9fff:fe3a:c4f0  prefixlen 64  scopeid 0x20<link>
        ether a0:36:9f:3a:c4:f0  txqueuelen 0  (Ethernet)
        RX packets 72996254  bytes 96320778851 (89.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36502412  bytes 21687337501 (20.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:b8:53  txqueuelen 500  (Ethernet)
        RX packets 239860  bytes 73232798 (69.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 968234  bytes 204537761 (195.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:b8:53  txqueuelen 500  (Ethernet)
        RX packets 447  bytes 22573 (22.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 779  bytes 633486 (618.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@alma03 ~]# brctl show vnet0
bridge name     bridge id               STP enabled     interfaces
vnet0           can't get info Operation not supported
[root@alma03 ~]# brctl show vnet1
bridge name     bridge id               STP enabled     interfaces
vnet1           can't get info Operation not supported

brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;             8000.000000000000       no
ovirtmgmt               8000.a0369f3ac4f0       no              enp3s0f0
                                                        vnet0
                                                        vnet1




virsh -r dumpxml cf3ed598-2eb5-4d44-8195-3e89a9d56977
<domain type='kvm' id='7'>                                                                                                                        
  <name>HostedEngine</name>                                                                                                                       
  <uuid>cf3ed598-2eb5-4d44-8195-3e89a9d56977</uuid>                                                                                               
  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">                                                                                           
    <ovirt:qos/>                                                                                                                                  
  </metadata>                                                                                                                                     
  <memory unit='KiB'>4194304</memory>                                                                                                             
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>7.2-9.el7</entry>
      <entry name='serial'>4C4C4544-0059-4410-8053-B7C04F573032</entry>
      <entry name='uuid'>cf3ed598-2eb5-4d44-8195-3e89a9d56977</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/var/run/vdsm/storage/8ac16336-7e92-48ae-838e-6bae32d28a5a/c4fd81c5-81e0-45e2-8da3-409e20ff6147/623fa0cc-19b2-431d-80b2-9e333b9d01e4'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>c4fd81c5-81e0-45e2-8da3-409e20ff6147</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <lease>
      <lockspace>8ac16336-7e92-48ae-838e-6bae32d28a5a</lockspace>
      <key>623fa0cc-19b2-431d-80b2-9e333b9d01e4</key>
      <target path='/rhev/data-center/mnt/10.35.64.11:_vol_RHEV_Virt_nsednev__he__bugs__1__3__6_/8ac16336-7e92-48ae-838e-6bae32d28a5a/images/c4fd81c5-81e0-45e2-8da3-409e20ff6147/623fa0cc-19b2-431d-80b2-9e333b9d01e4.lease'/>
    </lease>
    <interface type='bridge'>
      <mac address='00:16:3e:7b:b8:53'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='virtio' port='0'/>
      <alias name='console0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/cf3ed598-2eb5-4d44-8195-3e89a9d56977.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/cf3ed598-2eb5-4d44-8195-3e89a9d56977.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/cf3ed598-2eb5-4d44-8195-3e89a9d56977.org.ovirt.hosted-engine-setup.0'/>
      <target type='virtio' name='org.ovirt.hosted-engine-setup.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' listen='0' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='address' address='0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'>
      <alias name='balloon0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c293,c1016</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c293,c1016</imagelabel>
  </seclabel>
</domain>

 virsh -r dumpxml 332c7be8-0a27-4472-9274-7c46ad34414c
<domain type='kvm' id='11'>
  <name>VM_MAC_DUP_WITH_HE</name>
  <uuid>332c7be8-0a27-4472-9274-7c46ad34414c</uuid>
  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
    <ovirt:qos/>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4294967296</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='1'>16</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <numatune>
    <memory mode='interleave' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>7.2-9.el7</entry>
      <entry name='serial'>4C4C4544-0059-4410-8053-B7C04F573032</entry>
      <entry name='uuid'>332c7be8-0a27-4472-9274-7c46ad34414c</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <topology sockets='16' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/00000001-0001-0001-0001-000000000165/a78b8c38-0a04-47ce-9ead-4b1e08fece6e/images/8d9e167e-885f-4afb-bae8-91b33d294d7f/8145e52f-2678-421a-821b-64eaf851903e'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>8d9e167e-885f-4afb-bae8-91b33d294d7f</serial>
      <boot order='2'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:16:3e:7b:b8:53'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <boot order='1'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/332c7be8-0a27-4472-9274-7c46ad34414c.sock'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/332c7be8-0a27-4472-9274-7c46ad34414c.sock'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/332c7be8-0a27-4472-9274-7c46ad34414c.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/332c7be8-0a27-4472-9274-7c46ad34414c.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='0' passwdValidTo='2015-11-30T16:18:09' connected='disconnect'>
      <listen type='address' address='0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'>
      <alias name='balloon0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c85,c667</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c85,c667</imagelabel>
  </seclabel>
</domain>
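
For comparison, the NIC MACs of both domains can be pulled straight from libvirt (read-only):

for dom in HostedEngine VM_MAC_DUP_WITH_HE; do
    printf '%s: ' "$dom"
    virsh -r dumpxml "$dom" | grep -o "mac address='[^']*'"
done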

Comment 18 Red Hat Bugzilla Rules Engine 2015-12-02 00:02:06 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.

Comment 19 Sandro Bonazzola 2015-12-23 15:08:26 UTC
This bug has target milestone 3.6.2 and is on modified without a target release.
This may be perfectly correct, but please check if the patch fixing this bug is included in ovirt-engine-3.6.2. If it's included, please set target-release to 3.6.2 and move to ON_QA. Thanks.

Comment 20 Nikolai Sednev 2016-01-13 18:00:46 UTC
Reproduced on these components:
Host:
mom-0.5.1-1.el7ev.noarch
ovirt-vmconsole-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.3.6-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.5.x86_64
ovirt-vmconsole-host-1.0.0-1.el7ev.noarch
ovirt-host-deploy-1.4.1-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.x86_64
sanlock-3.2.4-2.el7_2.x86_64
ovirt-setup-lib-1.0.1-1.el7ev.noarch
vdsm-4.17.15-0.el7ev.noarch
ovirt-hosted-engine-setup-1.3.2.1-1.el7ev.noarch

Engine:
rhevm-dwh-setup-3.6.2-1.el6ev.noarch
ovirt-vmconsole-1.0.0-1.el6ev.noarch
rhevm-dwh-3.6.2-1.el6ev.noarch
ovirt-engine-extension-aaa-jdbc-1.0.4-1.el6ev.noarch
rhevm-3.6.2-0.1.el6.noarch
ovirt-setup-lib-1.0.1-1.el6ev.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6ev.noarch
ovirt-host-deploy-1.4.1-1.el6ev.noarch
ovirt-host-deploy-java-1.4.1-1.el6ev.noarch


# ifconfig
enp3s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a236:9fff:fe3b:167c  prefixlen 64  scopeid 0x20<link>
        ether a0:36:9f:3b:16:7c  txqueuelen 1000  (Ethernet)
        RX packets 9078584  bytes 12984493225 (12.0 GiB)
        RX errors 0  dropped 269  overruns 0  frame 0
        TX packets 3631873  bytes 1200662737 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 154269  bytes 95875019 (91.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 154269  bytes 95875019 (91.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.35.117.26  netmask 255.255.255.0  broadcast 10.35.117.255
        ether a0:36:9f:3b:16:7c  txqueuelen 0  (Ethernet)
        RX packets 5742808  bytes 12400659900 (11.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2972063  bytes 1138258802 (1.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:b8:53  txqueuelen 500  (Ethernet)
        RX packets 30616  bytes 17363543 (16.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 94578  bytes 20350038 (19.4 MiB)
        TX errors 0  dropped 730 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:b8:53  txqueuelen 500  (Ethernet)
        RX packets 2  bytes 874 (874.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 76  bytes 6151 (6.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Sosreports from engine and host are attached.

Comment 21 Red Hat Bugzilla Rules Engine 2016-01-13 18:00:47 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 22 Nikolai Sednev 2016-01-13 18:11:59 UTC
Created attachment 1114482 [details]
sosreport from HE-host

Comment 23 Nikolai Sednev 2016-01-13 18:16:24 UTC
Created attachment 1114483 [details]
sosreport from HE-VM

Comment 24 Yaniv Lavi 2016-01-14 08:56:24 UTC
Was the engine VM imported to the engine at the time of the test?

Comment 25 Nikolai Sednev 2016-01-14 12:44:48 UTC
(In reply to Yaniv Dary from comment #24)
> Was the engine VM imported to the engine at the time of the test?

No.

Comment 26 Yaniv Lavi 2016-01-14 13:53:08 UTC
(In reply to Nikolai Sednev from comment #25)
> (In reply to Yaniv Dary from comment #24)
> > Was the engine VM imported to the engine at the time of the test?
> 
> No.

This should be resolved once the VM is imported; can you retest?

Comment 27 Martin Mucha 2016-01-14 15:58:59 UTC
(In reply to Nikolai Sednev from comment #23)
> Created attachment 1114483 [details]
> sosreport from HE-VM

When I look into engine.log I see many errors, but none of them seems to be related to the MAC pool. Maybe these errors are irrelevant, but it does not look like the log of a healthy engine that merely has a problem with MACs...

Comment 28 Nikolai Sednev 2016-01-14 16:35:43 UTC
I've tried to upgrade to 3.6.2.5 from the currently running rhevm-3.6.2-0.1.el6.noarch and hit this issue:
[ INFO  ] Cleaning async tasks and compensations
          The following system tasks have been found running in the system:
          The following commands have been found running in the system:
          The following compensations have been found running in the system:
          org.ovirt.engine.core.bll.storage.AddExistingFileStorageDomainCommand org.ovirt.engine.core.common.businessentities.StorageDomainStatic
          org.ovirt.engine.core.bll.storage.AddExistingFileStorageDomainCommand org.ovirt.engine.core.common.businessentities.StorageDomainDynamic
          org.ovirt.engine.core.bll.storage.AddExistingFileStorageDomainCommand org.ovirt.engine.core.common.businessentities.StorageDomainDynamic
          Would you like to try to wait for that?
          (Answering "no" will stop the upgrade (Yes, No) yes
          Waiting for the completion of 3 running tasks during the next 20 seconds.
          Press Ctrl+C to interrupt. 
          Waiting for the completion of 3 running tasks during the next 20 seconds.
          Press Ctrl+C to interrupt. 

I've tried to power off/on the engine and the host running it; that did not help.
It looks like some tasks are running in the background and preventing me from upgrading the engine. I'll try installing this environment from scratch and see if this bug is resolved.
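
The blocking tasks can also be inspected directly in the engine database; a hedged example, assuming the default database name 'engine' and the stock schema's task tables:

sudo -u postgres psql engine -c "SELECT * FROM async_tasks;"
sudo -u postgres psql engine -c "SELECT * FROM command_entities;"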

Comment 29 Nikolai Sednev 2016-01-14 17:09:55 UTC
Redeployed over iSCSI on Red Hat Enterprise Virtualization Hypervisor (Beta) release 7.2 (20160107.0.el7ev) with following components:
Host:
libvirt-1.2.17-13.el7_2.2.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.4.x86_64
sanlock-3.2.4-1.el7.x86_64
mom-0.5.1-1.el7ev.noarch
vdsm-4.17.16-0.el7ev.noarch
ovirt-host-deploy-1.4.1-1.el7ev.noarch
ovirt-hosted-engine-setup-1.3.2.2-1.el7ev.noarch

Engine:
rhevm-3.6.2.5-0.1.el6.noarch

From host:
# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 57765  bytes 34752367 (33.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 57765  bytes 34752367 (33.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.35.117.22  netmask 255.255.255.0  broadcast 10.35.117.255
        ether a0:36:9f:3b:17:3c  txqueuelen 0  (Ethernet)
        RX packets 2924181  bytes 7478685887 (6.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3514806  bytes 57485645298 (53.5 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p1p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a236:9fff:fe3b:173c  prefixlen 64  scopeid 0x20<link>
        ether a0:36:9f:3b:17:3c  txqueuelen 1000  (Ethernet)
        RX packets 7474086  bytes 10618371453 (9.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41895443  bytes 59668264863 (55.5 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:bbbb  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:bb:bb  txqueuelen 500  (Ethernet)
        RX packets 69482  bytes 12533313 (11.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 237038  bytes 943029018 (899.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7b:bbbb  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7b:bb:bb  txqueuelen 500  (Ethernet)
        RX packets 128  bytes 8091 (7.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 190  bytes 120623 (117.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Comment 30 Red Hat Bugzilla Rules Engine 2016-01-14 17:09:57 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 31 Nikolai Sednev 2016-01-14 17:18:10 UTC
Created attachment 1114886 [details]
sosreport from the engine

Comment 32 Nikolai Sednev 2016-01-14 17:22:54 UTC
Created attachment 1114888 [details]
sosreport from HE-host

Comment 33 Dan Kenigsberg 2016-01-17 10:11:52 UTC
As noted in comment 12, this bug should be retested only after auto import of HE VM (bug 1269768) is working.

Comment 34 Red Hat Bugzilla Rules Engine 2016-01-20 11:39:12 UTC
Bug tickets that are moved to testing must have target release set to make sure tester knows what to test. Please set the correct target release before moving to ON_QA.

Comment 35 Red Hat Bugzilla Rules Engine 2016-01-20 11:40:58 UTC
Bug tickets that are moved to testing must have target release set to make sure tester knows what to test. Please set the correct target release before moving to ON_QA.

Comment 36 Nikolai Sednev 2016-01-21 12:24:57 UTC
The VM was created, but the NIC was not attached to it; the operation failed with the following error:

Operation Canceled
Error while executing action:

DUPHOSTEDENGINEVM:

    Cannot add Interface. Not enough MAC addresses left in MAC Address Pool.

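This failure is the expected outcome: once the imported HE-VM owns the pool's only address, a one-MAC pool has nothing left to allocate. A toy sketch of that allocation check (all names illustrative):

pool="00:16:3e:7b:b8:53"        # the single-MAC pool from step 4
used="00:16:3e:7b:b8:53"        # now reserved by the imported HE-VM
free=""
for mac in $pool; do
    case " $used " in
        *" $mac "*) ;;          # taken, skip it
        *) free="$free $mac" ;;
    esac
done
[ -z "$free" ] && echo "Cannot add Interface. Not enough MAC addresses left in MAC Address Pool."
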
Works for me on these components:
Host:
ovirt-vmconsole-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.3.7-1.el7ev.noarch
mom-0.5.1-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.6.x86_64
ovirt-host-deploy-1.4.1-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.x86_64
ovirt-setup-lib-1.0.1-1.el7ev.noarch
vdsm-4.17.18-0.el7ev.noarch
ovirt-vmconsole-host-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-setup-1.3.2.3-1.el7ev.noarch
sanlock-3.2.4-2.el7_2.x86_64
Linux version 3.10.0-327.8.1.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Mon Jan 11 05:03:18 EST 2016

Engine:
ovirt-vmconsole-1.0.0-1.el6ev.noarch
ovirt-host-deploy-1.4.1-1.el6ev.noarch
ovirt-setup-lib-1.0.1-1.el6ev.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6ev.noarch
ovirt-host-deploy-java-1.4.1-1.el6ev.noarch
ovirt-engine-extension-aaa-jdbc-1.0.5-1.el6ev.noarch
rhevm-3.6.2.6-0.1.el6.noarch
rhevm-dwh-setup-3.6.2-1.el6ev.noarch
rhevm-dwh-3.6.2-1.el6ev.noarch
rhevm-reports-setup-3.6.2.4-1.el6ev.noarch
rhevm-reports-3.6.2.4-1.el6ev.noarch
rhevm-guest-agent-common-1.0.11-2.el6ev.noarch
Linux version 2.6.32-573.8.1.el6.x86_64 
(mockbuild.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Fri Sep 25 19:24:22 EDT 2015

