Bug 1613104 - The engine is generating domain XML for the HE VM even if the cluster compatibility level doesn't allow it
Summary: The engine is generating domain XML for the HE VM even if the cluster compatibility level doesn't allow it
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.5
Hardware: x86_64
OS: Linux
Importance: medium / medium
Target Milestone: ovirt-4.2.6
Assignee: Arik
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-07 02:15 UTC by Germano Veit Michel
Modified: 2021-09-09 15:19 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 13:41:42 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-43576 0 None None None 2021-09-09 15:19:45 UTC
Red Hat Knowledge Base (Solution) 3557201 0 None None None 2018-08-08 23:13:27 UTC
Red Hat Product Errata RHBA-2018:2623 0 None None None 2018-09-04 13:42:30 UTC
oVirt gerrit 93571 0 master MERGED he: ovf: skip engineXml if cluster doesn't support it 2021-02-11 12:47:28 UTC
oVirt gerrit 93761 0 ovirt-engine-4.2 MERGED he: ovf: skip engineXml if cluster doesn't support it 2021-02-11 12:47:28 UTC

Description Germano Veit Michel 2018-08-07 02:15:03 UTC
Description of problem:

Setting the Custom Emulated Machine to rhel7.2.0 or lower in RHV 4.2 prevents the VM from starting.

In 4.2, for KASLR, vmcoreinfo was added to the VM XML:

    <features>
        <acpi/>
        <vmcoreinfo/>
    </features>

But if the machine type is 7.2.0 or lower, it fails to start:

2018-08-07T01:51:39.111629Z qemu-kvm: -device vmcoreinfo: vmcoreinfo device requires fw_cfg with DMA (code=1) (vm:1683)

rhel7.3.0 and higher work; it seems fw_cfg with DMA was added in rhel7.3.0.
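
A quick way to confirm which machine types the host emulator actually offers (a minimal sketch, assuming a RHEL host with qemu-kvm-rhev installed at its usual path):

  # List the machine types known to this qemu-kvm build
  /usr/libexec/qemu-kvm -machine help | grep -E 'rhel7\.[0-9]'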

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.5.2-0.1.el7ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a VM
2. Set custom machine type to rhel7.2.0
3. Start VM

Comment 1 Germano Veit Michel 2018-08-07 02:17:20 UTC
For completeness:
vdsm-4.20.32-1.el7ev.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.4.x86_64

Comment 4 Michal Skrivanek 2018-08-07 05:02:48 UTC
This is an advanced option not intended to work with every combination of machine type. You can disable the KASLR debug info (vmcoreinfo) globally in engine-config in this case.

I'm not sure what the request is; just don't touch that option. :) Any reason to use a custom emulated machine? Why for HE?
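
For illustration, a hedged sketch of that global toggle; the exact engine-config key is not named in this bug, so it has to be looked up first on the engine machine:

  # Find the config key controlling the KASLR/vmcoreinfo device (key name is an assumption to verify)
  engine-config -l | grep -i -e vmcore -e kaslr
  # Then inspect and disable it, and restart the engine for the change to apply:
  # engine-config -g <KeyName>
  # engine-config -s <KeyName>=false
  systemctl restart ovirt-engine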

Comment 5 Germano Veit Michel 2018-08-07 06:07:34 UTC
(In reply to Michal Skrivanek from comment #4)
> This is an advanced option not intended to work with every combination of
> machine type. You can disable KASLR debug info(vmcoreinfo) in engine-config
> globally iin this case
> 
> Not sure what’s the request, just do not touch that option:) Any reason to
> use custom emulated machine? Why for HE?

The engine created a vmxml for the HE with the 7.2.0 machine type and the vmcoreinfo feature enabled. We don't know why yet, as they are still running the steps to bring it up.

Shouldn't the engine disable vmcoreinfo if the user sets a 7.2.0 machine type, or if this machine type is inherited from something else, like the CL?

Or perhaps the actual bug here is that the HE machine type was never upgraded. I'll add more info when we get it. Keeping the needinfo.

Comment 6 Germano Veit Michel 2018-08-07 06:15:26 UTC
By the way...

1) The user cannot set the Emulated Machine type for HE: "There was an attempt to change Hosted Engine VM values that are locked."

2) In our labs, our HE has a machine type of rhel-6.5.0 in the DB and rhel7.3.0 in vm.conf. This is an old setup that has been upgraded throughout the years.

So when we upgrade this to 4.2, will the vmxml created by the engine contain rhel-6.5.0, and will we hit a similar problem?

It looks like the HE machine type is not being bumped along with its cluster level/upgrades.
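
A minimal sketch of how to compare the two values, using the DB query and vm.conf path that appear later in this bug (the DB query runs on the engine VM, the grep on an HA host):

  # On the engine VM: machine type stored for the HE VM in the engine DB
  sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c \
    "select vm_name, custom_emulated_machine from vm_static where vm_name = 'HostedEngine';"

  # On an HA host: machine type the agent will actually use to start the VM
  grep emulatedMachine /run/ovirt-hosted-engine-ha/vm.conf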

Comment 7 Michal Skrivanek 2018-08-07 10:15:03 UTC
(In reply to Germano Veit Michel from comment #5)
> (In reply to Michal Skrivanek from comment #4)
> > This is an advanced option not intended to work with every combination of
> > machine type. You can disable KASLR debug info(vmcoreinfo) in engine-config
> > globally iin this case
> > 
> > Not sure what’s the request, just do not touch that option:) Any reason to
> > use custom emulated machine? Why for HE?
> 
> The engine created a vmxml for the HE with 7.2.0 machine type and vmcoreinfo
> feature enabled. We don't know why yet as they are still running the steps
> to bring it up.

There's no vmxml generation for HE. Depending on whether it is an older setup or one deployed using the new Ansible flow, it's either using the stored vdsm create-verb configuration (vdsm < 4.2 style) or the engine-generated OVF.

> Shouldn't the engine disable vmcoreinfo if the user sets 7.2.0 machine type
> or if this machine type is inherited from something else like CL?

No, the custom machine type has no relation to anything else. That's why it mostly doesn't work: the rest of the features in the engine assume the default machine type of the corresponding cluster level. Is this an older cluster level, perhaps?

> Or perhaps the actual bug here is that the HE machine type was never
> upgraded. I'll add more info when we get it. Keeping the needinfo.

Possible. We had that bug before, but AFAIK it was fixed quite some time ago.

Comment 8 Michal Skrivanek 2018-08-07 10:25:42 UTC
(In reply to Germano Veit Michel from comment #6)
> By the way...
> 
> 1) The user cannot set the Emulated Machine type for HE: "There was an
> attempt to change Hosted Engine VM values that are locked."

Some fields are not supposed to be editable. There's no real reason; that's just the way it is for HE right now, it seems.

> 2) In our labs, our HE was Machine Type of rhel-6.5.0 in the DB and
> rhel7.3.0 in vm.conf. This is an old setup being upgraded throughout the
> years.

OK. As I said in the previous comment, we had this bug around the 4.2.2 timeframe. It's possible it's still broken in certain corner cases.
 
> So when we upgrade this to 4.2, the vmxml created by the engine will contain
> rhel-6.5.0 and we will hit a similar problem?
>
> Looks like the HE machine type is not going up with its cluster
> level/upgrades.

I suggest the Integration team review that setup and upgrade procedure. It should not be the case... unless there is a bug. :)

Comment 9 Germano Veit Michel 2018-08-08 04:58:47 UTC
Hi Michal,

(In reply to Michal Skrivanek from comment #7)
> there's no vmxml generation for HE. Depending on whether it is an older
> setup or it was deployed using the new ansible way it's either using the
> stored vdsm create verb configuration(vdsm<4.2 -like) or engine-generated OVF

From what I understand, the engine creates the HE domain XML and stores it in the OVF. The ha-agent extracts that OVF and writes it to vm.conf (xmlBase64), which is used to start the VM later.

https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py#L228

Comment #0 makes it clear the engine can create invalid XML by mixing the 7.2.0 machine type with vmcoreinfo.
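
One hedged way to confirm that mix directly on an HA host, assuming the agent stores the engine-generated XML in vm.conf as an xmlBase64=<base64> entry like the other keys:

  # Decode the embedded libvirt domain XML and look for the conflicting pieces
  grep '^xmlBase64=' /run/ovirt-hosted-engine-ha/vm.conf \
    | cut -d= -f2- | base64 -d \
    | grep -E 'machine=|vmcoreinfo'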

> Some fields are not supposed to be editable. There's no real reason, but
> that's just the way it is specifically for HE right now it seems

Right. I expressed myself incorrectly. I just meant that the HE has a custom machine type set even though no one set it. It's the same in our labs HE, and I also just checked two other customers' DBs; they all have a custom machine type set for the HE.

> I suggest Integration team to review that setup and upgrade procedure. It
> should not be the case...unless there is a bug:)

It doesn't seem to be anything special. I'm afraid we will hit the same in our labs once we finish the upgrade; see:

engine=# select vm_name,custom_emulated_machine from vm_static where vm_name = 'HostedEngine';
   vm_name    | custom_emulated_machine 
--------------+-------------------------
 HostedEngine | rhel6.5.0

So maybe we have a combination of two problems?
1) HE deploy sets a custom machine type for the HE.
2) The engine generates incorrect XML/OVF if the machine type is rhel7.2.0 or lower, which does not support the vmcoreinfo feature, and the engine adds it anyway.

Finally, I just deployed a fresh 4.2.5 and the HE comes up with no custom emulated machine set. So this looks like an old problem that only hits older setups as they age: the custom machine type stays constant, and some day we enable a feature that the particular machine type doesn't support.

Perhaps during upgrade engine-setup should unset that custom machine type for HE?
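
For reference, a hedged sketch of the manual equivalent of that cleanup (an assumption, not a documented procedure; take a DB backup first, and note it would only take effect after the OVF is regenerated and the HE VM is restarted):

  # Hypothetical manual cleanup; NULL (or '') mirrors what a fresh 4.2 deployment shows
  sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c \
    "update vm_static set custom_emulated_machine = NULL where vm_name = 'HostedEngine';"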

Comment 10 Michal Skrivanek 2018-08-08 05:09:31 UTC
(In reply to Germano Veit Michel from comment #9)
> Hi Michal,
> 
> (In reply to Michal Skrivanek from comment #7)
> > there's no vmxml generation for HE. Depending on whether it is an older
> > setup or it was deployed using the new ansible way it's either using the
> > stored vdsm create verb configuration(vdsm<4.2 -like) or engine-generated OVF
> 
> From what I understand the engine creates the HE XML and stores it on the
> OVF. The ha-agent extracts that OVF and writes it to vm.conf (xmlBase64),
> which is used to start the VM later.

Right, I tried to keep it short, but this is indeed the way it works now. :) It's not the case in < 4.2, though.

> https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/
> ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py#L228
> 
> In comment #0 its clear the engine can create invalid XML by mixing 7.2.0
> machine type and vmcoreinfo.

Because the custom machine type is not supposed to be used.

> > Some fields are not supposed to be editable. There's no real reason, but
> > that's just the way it is specifically for HE right now it seems
> 
> Right. I expressed myself incorrectly. I just meant the HE has a custom
> machine type set and no one set it. This is the same case in our labs HE,
> and I also just checked 2 other customers DB and they all have custom
> machine type set for the HE.

Probably because of the old way of running HE from params generated by the HE agent.

> > I suggest Integration team to review that setup and upgrade procedure. It
> > should not be the case...unless there is a bug:)
> 
> Doesn't seem anything special. I'm afraid we will hit the same in our labs
> once we finish the upgrade, see:
> 
> engine=# select vm_name,custom_emulated_machine from vm_static where vm_name
> = 'HostedEngine';
>    vm_name    | custom_emulated_machine 
> --------------+-------------------------
>  HostedEngine | rhel6.5.0
> 
> So maybe we have a combination of two problems?
> 1) HE deploy sets a custom machine type for HE
> 2) engine generates incorrect XML/OVF if the machine type is rhel7.2.0 or
> lower, as it does not support vmcoreinfo feature and the engine adds it
> anyway.
> 
> Finally, I just deployed a fresh 4.2.5 and the HE comes up with no custom
> emulated machine set. So this looks like an old problem that only hits older
> setups as they age, the custom machine type remains constant and some day we
> enable a feature that particular machine type doesnt support.
> 
> Perhaps during upgrade engine-setup should unset that custom machine type
> for HE?

Possibly. Those are all questions to Integration though. Simone?

Comment 11 Simone Tiraboschi 2018-08-08 07:55:25 UTC
(In reply to Michal Skrivanek from comment #10)
> (In reply to Germano Veit Michel from comment #9)
> > Hi Michal,
> > 
> > (In reply to Michal Skrivanek from comment #7)
> > > there's no vmxml generation for HE. Depending on whether it is an older
> > > setup or it was deployed using the new ansible way it's either using the
> > > stored vdsm create verb configuration(vdsm<4.2 -like) or engine-generated OVF
> > 
> > From what I understand the engine creates the HE XML and stores it on the
> > OVF. The ha-agent extracts that OVF and writes it to vm.conf (xmlBase64),
> > which is used to start the VM later.
> 
> Right, I tried to make it short but this is indeed the way it works now:)
> It’s not the case in <4.2 though

vm.conf has been extracted from the OVF_STORE volume since 3.5.
If the engine is >= 4.2 and ovirt-ha-agent is up to date, it will simply take the libvirt XML as generated by the engine, as is.

If the engine is < 4.2, ovirt-ha-agent will extract the OVF XML and convert it to a dictionary for vdsm, but the source of truth is still the OVF_STORE volume.

So, in both cases, emulatedMachine comes from the VM definition in the OVF_STORE volume, and therefore from the engine DB.

> > https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/
> > ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py#L228
> > 
> > In comment #0 its clear the engine can create invalid XML by mixing 7.2.0
> > machine type and vmcoreinfo.
> 
> Because custom machine type is not supposed to be used
> 
> > > Some fields are not supposed to be editable. There's no real reason, but
> > > that's just the way it is specifically for HE right now it seems
> > 
> > Right. I expressed myself incorrectly. I just meant the HE has a custom
> > machine type set and no one set it. This is the same case in our labs HE,
> > and I also just checked 2 other customers DB and they all have custom
> > machine type set for the HE.
> 
> Probably because of the old way of running HE from params generated by he
> agent. 

Also in the 'old way' emulatedMachine comes from the engine DB:
https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py#L268

> > > I suggest Integration team to review that setup and upgrade procedure. It
> > > should not be the case...unless there is a bug:)
> > 
> > Doesn't seem anything special. I'm afraid we will hit the same in our labs
> > once we finish the upgrade, see:
> > 
> > engine=# select vm_name,custom_emulated_machine from vm_static where vm_name
> > = 'HostedEngine';
> >    vm_name    | custom_emulated_machine 
> > --------------+-------------------------
> >  HostedEngine | rhel6.5.0
> > 
> > So maybe we have a combination of two problems?
> > 1) HE deploy sets a custom machine type for HE

I don't think it was explicitly set anywhere; I fear it's just the result of the auto-import process on the engine VM side.
If so, this could also affect other externally imported VMs.

> > 2) engine generates incorrect XML/OVF if the machine type is rhel7.2.0 or
> > lower, as it does not support vmcoreinfo feature and the engine adds it
> > anyway.

Yes, I think so.

> > Finally, I just deployed a fresh 4.2.5 and the HE comes up with no custom
> > emulated machine set. So this looks like an old problem that only hits older
> > setups as they age, the custom machine type remains constant and some day we
> > enable a feature that particular machine type doesnt support.

In the 4.2 node-zero deployment flow, the engine VM is created directly by the engine running on the bootstrap local machine, so it's much closer to a regular VM.

> > Perhaps during upgrade engine-setup should unset that custom machine type
> > for HE?

Yes, I think so.
But this will also require a reboot cycle to take effect.

Comment 12 Simone Tiraboschi 2018-08-08 08:20:02 UTC
(In reply to Germano Veit Michel from comment #6)
> 2) In our labs, our HE was Machine Type of rhel-6.5.0 in the DB and
> rhel7.3.0 in vm.conf. This is an old setup being upgraded throughout the
> years.


On a system of mine, started at 3.6 and upgraded to 4.2, I found:

[root@enginevm tmp]# sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c 'select vm_name, custom_emulated_machine from vms where origin=6'
   vm_name    | custom_emulated_machine 
--------------+-------------------------
 HostedEngine | pc
(1 row)


[root@host ~]# . /etc/ovirt-hosted-engine/hosted-engine.conf 
[root@host ~]# dd if=$(grep "OVF_STORE volume path" /var/log/ovirt-hosted-engine-ha/agent.log | tail -n1 | awk '{print $NF}') 2>/dev/null | tar -xv ${vmid}.ovf -O 2>/dev/null | xmllint --format - | grep CustomEmulatedMachine
    <CustomEmulatedMachine>pc-i440fx-rhel7.3.0</CustomEmulatedMachine>
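
The same pipeline can be pointed at the other OVF fields that matter here; a hedged variant, assuming hosted-engine.conf has been sourced as above so that ${vmid} is set:

  dd if=$(grep "OVF_STORE volume path" /var/log/ovirt-hosted-engine-ha/agent.log | tail -n1 | awk '{print $NF}') 2>/dev/null \
    | tar -xv ${vmid}.ovf -O 2>/dev/null | xmllint --format - \
    | grep -E 'CustomEmulatedMachine|ClusterCompatibilityVersion|VirtualSystemType'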


We can definitely unset the custom_emulated_machine values on upgrade.
But now the question is how the engine translates custom_emulated_machine into CustomEmulatedMachine.

Comment 13 Michal Skrivanek 2018-08-08 08:30:15 UTC
(In reply to Simone Tiraboschi from comment #11)
> So, in both the cases, emulatedMachine comes from the vm definition in the
> OVF store volume and so from the engine DB.

I should really write longer explanations to be more accurate. Yes, exactly like that:)

> > > 1) HE deploy sets a custom machine type for HE
> 
> I don't think it was explicitly set anyhow but I fear it's just the result
> of the auto-import process on engine VM side.
> If so, this could also affects other externally imported VMs.

I didn't find any obvious place where this is done in the import flow. But it can't be ruled out...

> > > 2) engine generates incorrect XML/OVF if the machine type is rhel7.2.0 or
> > > lower, as it does not support vmcoreinfo feature and the engine adds it
> > > anyway.
> 
> Yes, I think so.

It is a bit weird though, as vmcoreinfo was only added in 4.2, so I'd expect it to be relevant only for new deployments or after some update/upgrade of an older HE VM. So I'd expect either the old/original values (hence the 7.2.0 we used in 3.6/4.0) and no vmcoreinfo, or an up-to-date machine type according to the cluster.

Oh, wait, it's an old cluster level, isn't it?
And the OVF writer is used for the older cluster levels too... and that is a problem because it's 4.2+ code only. Hmm...

Comment 14 Simone Tiraboschi 2018-08-08 08:35:57 UTC
(In reply to Simone Tiraboschi from comment #12)
> But now the point is on how the engine translates custom_emulated_machine
> into CustomEmulatedMachine

And here is the answer:
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/HostedEngineOvfWriter.java#L49

https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/utils/ovf/OvfManager.java#L68

<CustomEmulatedMachine> is not set from the custom_emulated_machine field of the hosted-engine VM in the DB (it's completely ignored!) but from the cluster machine type (cluster.getEmulatedMachine).

So now the bug seems to be just that the engine sets the vmcoreinfo device on a VM in a cluster that shouldn't support it.

Germano, can you please double-check the cluster definition in that specific customer case?

Comment 15 Simone Tiraboschi 2018-08-08 08:38:09 UTC
(In reply to Michal Skrivanek from comment #13)
> it is a bit weird though, as vmcoreinfo has only been added in 4.2 so I'd
> expect it to be relevant only for new deployments or when we some
> update/upgrade of older HE VM . So I'd expect either old/original values
> used (hence 7.2.0 we used in 3.6/4.0) and no vmcoreinfo, or up-to-date
> machine type according to cluster.

The issue is exactly that we are consuming the machine type from the cluster while mixing it with 4.2 devices.

Comment 16 Michal Skrivanek 2018-08-08 08:52:50 UTC
(In reply to Simone Tiraboschi from comment #15)
> (In reply to Michal Skrivanek from comment #13)
> > it is a bit weird though, as vmcoreinfo has only been added in 4.2 so I'd
> > expect it to be relevant only for new deployments or when we some
> > update/upgrade of older HE VM . So I'd expect either old/original values
> > used (hence 7.2.0 we used in 3.6/4.0) and no vmcoreinfo, or up-to-date
> > machine type according to cluster.
> 
> The issue is exactly that we are consuming machine type from the cluster
> mixing it with 4.2 devices.

Yes, but why is the OVF XML used when the cluster is not 4.2 (at least it doesn't seem to be; waiting for confirmation)? The he-agent code should use the old method for HE in < 4.2 clusters, as the 4.2+ XML contained in the OVF is not usable with < 4.2 cluster levels.

Comment 17 Simone Tiraboschi 2018-08-08 09:09:17 UTC
(In reply to Michal Skrivanek from comment #16)
> yes, but why is the OVF XML used when the cluster is not 4.2 (at least it
> doesn't seem it is, waiting for confirmation). The he-agent code should use
> the old method for HE in <4.2 clusters as the 4.2+ XML contained in OVF is
> not usable with <4.2 cluster levels.

ovirt-ha-agent is not going to query the cluster compatibility level over the REST API or similar (how could it, if the engine VM is down?).
If the libvirt XML is in the OVF_STORE, ovirt-ha-agent will simply consume it.
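
A minimal check on an HA host for which path is in effect, assuming (per comment #9) that the engine-generated XML shows up in vm.conf under the xmlBase64 key when it is present in the OVF_STORE:

  if grep -q '^xmlBase64=' /run/ovirt-hosted-engine-ha/vm.conf; then
      echo "HE VM will be started from the engine-generated libvirt XML"
  else
      echo "HE VM will be started from the legacy vdsm create parameters"
  fi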

Comment 18 Michal Skrivanek 2018-08-08 09:58:31 UTC
Well, the current cluster version is in the OVF, so you do have it at hand. But the alternative solution you've posted should also work.

Comment 19 Simone Tiraboschi 2018-08-08 10:18:29 UTC
(In reply to Michal Skrivanek from comment #18)
> well, the current cluster version is in the OVF so you do have it at hand.
> But the alternative solution you've posted should also work.

And maybe we have another bug there as well: I have engine 4.2.4.5-1.el7, my cluster is at 4.2, my datacenter is at 4.2, but in the OVF_STORE I have:

<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovf:version="4.1.0.0">
...
    <ClusterCompatibilityVersion>4.2</ClusterCompatibilityVersion>
...
      <System>
        <vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType>
      </System>

Comment 20 Germano Veit Michel 2018-08-08 23:07:14 UTC
Bingo! You are right. It comes from the cluster level.

The customer also has rhel6.5.0 (like our BNE labs), but the machine type in vm.conf was 7.2.0 (coming from CL 4.0).

engine=> select vm_name,custom_emulated_machine,cluster_compatibility_version from vms where vm_name ='HostedEngine';
   vm_name    | custom_emulated_machine | cluster_compatibility_version 
--------------+-------------------------+-------------------------------
 HostedEngine | rhel6.5.0               | 4.0                            


And I just checked the OVFs in our labs; they contain rhel7.3.0 (CL 4.1).

Thanks guys!

Comment 22 Polina 2018-08-22 12:07:45 UTC
Verified on
ovirt-release42-snapshot-4.2.6-0.3.rc3.20180821015011.git2aa33d5.el7.noarch

Steps for verification:
1. In System/Advanced Parameters set Custom Emulated Machine for the created VM.

pc-i440fx-rhel7.2.0 - success
pc-i440fx-rhel7.1.0 - success
Use cluster default (pc-i440fx-rhel7.3.0) - success

pc-i440fx-rhel7.0.0 - failed
pc-q35-rhel7.5.0 - failed

Error for the failures: Exit message: unsupported configuration: IDE controllers are unsupported for this QEMU binary or machine type.

For the HE VM, an attempt to change the Custom Emulated Machine brings: "There was an attempt to change Hosted Engine VM values that are locked."

please approve

Comment 23 Germano Veit Michel 2018-08-22 22:54:38 UTC
(In reply to Polina from comment #22)
> Verified on
> ovirt-release42-snapshot-4.2.6-0.3.rc3.20180821015011.git2aa33d5.el7.noarch
> 
> Steps for verification:
> 1. In System/Advanced Parameters set Custom Emulated Machine for the created
> VM.
> 
> c-i440fx-rhel7.2.0 - success
> pc-i440fx-rhel7.1.0 - success
> Use cluster default(pc-i440fx-rhel7.3.0) - success
> 
> pc-i440fx-rhel7.0.0 - failed
> pc-q35-rhel7.5.0 - failed
> 
> error for failure : Exit message: unsupported configuration: IDE controllers
> are unsupported for this QEMU binary or machine type.
> 
> For HE VM the attempt to change the Custom Emulated Machine brings: There
> was an attempt to change Hosted Engine VM values that are locked.
> 
> please approve

Sorry Polina, I'm not sure what you are testing or what you want approved.

The merged patch deals with a situation where the cluster level is 4.0 and one upgrades all the way to 4.2.

So start with an SHE setup on 4.0 and upgrade it all the way to 4.2, but don't bump the cluster level; keep it at 4.0. Then restart the HE VM; with the patch applied it should come up.

For normal VMs, I believe Michal made it clear that RHV will not sanity-check custom user configs that can conflict. So this is about HE only.
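
For reference, a hedged sketch of that restart cycle from the CLI on an HA host (the same steps Nikolai runs below, with global maintenance toggled from the command line instead of the UI):

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown
  # wait until "hosted-engine --vm-status" reports the VM as down, then:
  hosted-engine --vm-start
  hosted-engine --set-maintenance --mode=none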

Comment 26 RHV bug bot 2018-08-28 18:32:40 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.2.z': '?'}', ]

For more info please contact: rhv-devops

Comment 27 Nikolai Sednev 2018-08-30 11:01:11 UTC
I've upgraded 3.6->4.0->4.1->4.2 successfully.
The engine was up and running without any restarts from 4.0 through 4.2 inclusive.
On a 4.2 host that is still in a 4.0 compatibility host cluster, I see that the engine VM is emulated as expected:
cat /run/ovirt-hosted-engine-ha/vm.conf | grep rhel
emulatedMachine=pc-i440fx-rhel7.2.0

After upgrading the remaining hosts from 4.1 to 4.2, I tried to restart the engine VM by following these steps:
1. Enabled global maintenance from the UI.
2. From the CLI on the ha-host that ran the HE VM, used "hosted-engine --vm-shutdown".
3. From the CLI on the ha-host from step 2, used "hosted-engine --vm-start".
4. The engine got started on the host.
5. Disabled global maintenance from the UI.

Engine got started as expected.
Moving to verified.

Tested on engine:
ovirt-engine-setup-4.2.6.4-0.1.el7ev.noarch
Linux 3.10.0-514.6.2.el7.x86_64 #1 SMP Fri Feb 17 19:21:31 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Hosts:
vdsm-4.20.39-1.el7ev.x86_64
libvirt-3.9.0-14.el7_5.7.x86_64
sanlock-3.6.0-1.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.7.x86_64
ovirt-hosted-engine-setup-2.2.26-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.16-1.el7ev.noarch
Linux 3.10.0-862.11.6.el7.x86_64 #1 SMP Fri Aug 10 16:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Comment 29 errata-xmlrpc 2018-09-04 13:41:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2623

