Bug 1950466 - Host installation failed
Summary: Host installation failed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.4.6
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ovirt-4.4.6
Target Release: 4.4.6
Assignee: Dana
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-16 16:56 UTC by Jiri Macku
Modified: 2021-06-01 13:23 UTC (History)
8 users

Fixed In Version: ovirt-engine-4.4.6.5
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-01 13:23:01 UTC
oVirt Team: Infra
Target Upstream Version:
Embargoed:


Attachments
ansible log (37.05 KB, text/plain)
2021-04-29 15:10 UTC, Nikolai Sednev
no flags
sosreport from additional host alma04 (11.78 MB, application/x-xz)
2021-04-29 15:15 UTC, Nikolai Sednev
no flags
engine logs (15.92 MB, application/x-xz)
2021-04-29 15:17 UTC, Nikolai Sednev
no flags
sosreport-serval15-2021-05-05-fvxrlbh.tar.xz (15.61 MB, application/x-xz)
2021-05-05 13:51 UTC, Nikolai Sednev
no flags
sosreport from serval14 (15.35 MB, application/x-xz)
2021-05-05 13:55 UTC, Nikolai Sednev
no flags
ovirt-host-deploy-ansible-20210505162606-serval14.lab.eng.tlv2.redhat.com-dd0c526.log (931.00 KB, text/plain)
2021-05-05 13:56 UTC, Nikolai Sednev
no flags
sosreport from the engine (16.01 MB, application/x-xz)
2021-05-05 13:57 UTC, Nikolai Sednev
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:2179 0 None None None 2021-06-01 13:23:13 UTC
oVirt gerrit 114332 0 master MERGED engine: remove redundant ' in host deploy facts 2021-04-19 10:20:35 UTC

Description Jiri Macku 2021-04-16 16:56:01 UTC
Description of problem:
Installation of additional host to the hosted engine fails.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Install hosted engine.
2. Try to add additional host
3.

Actual results:
Installation fails.


Expected results:
Installation passes.


Additional info:
Hosted engine log:
2021-04-15 18:34:01,691+03 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-31) [42844cfd] Host installation failed for host '2d14e04d-763f-48a1-8f23-43d3fcfff121', 'host_mixed_2': null
2021-04-15 18:34:01,694+03 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31) [42844cfd] START, SetVdsStatusVDSCommand(HostName = host_mixed_2, SetVdsStatusVDSCommandParameters:{hostId='2d14e04d-763f-48a1-8f23-43d3fcfff121', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2b65f060
2021-04-15 18:34:01,699+03 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31) [42844cfd] FINISH, SetVdsStatusVDSCommand, return: , log id: 2b65f060
2021-04-15 18:34:01,703+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-31) [42844cfd] EVENT_ID: VDS_INSTALL_FAILED(505), Host host_mixed_2 installation failed. Please refer to /var/log/ovirt-engine/engine.log and log logs under /var/log/ovirt-engine/host-deploy/ for further details..
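
A quick way to pull the failing host name out of such an engine.log ERROR line (a sketch over the sample line above, not an oVirt tool; on a live system you would grep 'Host installation failed' in /var/log/ovirt-engine/engine.log instead of using an embedded string):

```shell
# Extract the failing host name from the sample engine.log ERROR line above.
line="2021-04-15 18:34:01,691+03 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-31) [42844cfd] Host installation failed for host '2d14e04d-763f-48a1-8f23-43d3fcfff121', 'host_mixed_2': null"
# The host name is the last single-quoted token before ": null".
host=$(echo "$line" | sed -n "s/.*, '\([^']*\)': null$/\1/p")
echo "failed host: $host"   # -> failed host: host_mixed_2
```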

Comment 2 Michal Skrivanek 2021-04-17 16:23:02 UTC
Please get the logs from the machine that failed (or from both of them), including the journal.
What you have included are only logs from the HE (which works) and from the deployment; that is not enough to understand what's happening on those hosts.
Thanks

Comment 4 Roni 2021-04-19 10:35:04 UTC
Looks the same as https://bugzilla.redhat.com/show_bug.cgi?id=1937361
Can someone approve this?

Comment 5 Martin Perina 2021-04-19 11:30:15 UTC
(In reply to Roni from comment #4)
> Look the same as https://bugzilla.redhat.com/show_bug.cgi?id=1937361
> can some approve this?

No, BZ1937361 is different. According to the comments, you probably have a server which is not able to reboot within the 5-minute delay interval, so the engine tries to connect to the host too soon, and that's why installation fails on reboot.
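
If the reboot really does overrun the engine's wait, the timeout is tunable. A hedged sketch, assuming the standard engine-config option ServerRebootTimeout (in seconds) is the setting behind this wait; run on the engine machine:

```shell
# Assumption: ServerRebootTimeout (seconds) controls how long the engine waits
# for a host to come back after reboot; raise it for slow-booting servers.
engine-config -s ServerRebootTimeout=600
systemctl restart ovirt-engine   # the engine must be restarted to pick it up
```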

Comment 8 Nikolai Sednev 2021-04-29 15:07:59 UTC
Initially the addition failed due to "Data Center Default compatibility version is 4.5, which is lower than latest available version 4.6. Please upgrade your Data Center to latest version to successfully finish upgrade of your setup." I checked the DC's and Host Cluster's compatibility versions and they were 4.5, so I bumped them up to 4.6 and retried.

Works just fine on these components:
ovirt-engine-setup-4.4.6.6-0.10.el8ev.noarch
Red Hat Enterprise Linux release 8.4 (Ootpa)
Linux 4.18.0-304.el8.x86_64 #1 SMP Tue Apr 6 05:19:59 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

ovirt-hosted-engine-setup-2.5.0-2.el8ev.noarch
ovirt-hosted-engine-ha-2.4.6-1.el8ev.noarch
Red Hat Enterprise Linux release 8.4 (Ootpa)
Linux 4.18.0-304.el8.x86_64 #1 SMP Tue Apr 6 05:19:59 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

I've deployed HE on a clean host and updated the engine to the latest bits, then bumped up the compatibility level from 4.5 to 4.6 for both the Host Cluster and the Data Center, then added a fresh, additional ha-available host through the engine's UI; this time it failed with:
"Ansible host-remove playbook execution failed on host alma04.qa.lab.tlv.redhat.com with message: Task Check if ovirt-provider-ovn-driver is installed failed to execute. Please check logs for more details: /var/log/ovirt-engine/ansible/ansible-20210429180340-ovirt-host-remove_yml-e88bf2e5-c09e-41bf-b25a-145d045b0480.log".

The log is attached.

Comment 9 Nikolai Sednev 2021-04-29 15:10:07 UTC
Created attachment 1777196 [details]
ansible log

Comment 10 Nikolai Sednev 2021-04-29 15:15:38 UTC
Created attachment 1777197 [details]
sosreport from additional host alma04

Comment 11 Nikolai Sednev 2021-04-29 15:17:20 UTC
Created attachment 1777198 [details]
engine logs

Comment 12 Michal Skrivanek 2021-04-30 06:43:05 UTC
Please clarify "Initially addition failed due to"... so did it fail or not? Did you open a separate bug if it did? The default 4.6 level is tracked in bug 1950348 and is verified. Did you use that or a newer version?

For the second part, you can see the error in the ansible log:
        "msg" : "Failed to install some of the specified packages",
        "failures" : [ "No package ovirt-provider-ovn-driver available." ],

and indeed you do not have that package available. This package is shipped in the rhv-4.4-manager-for-rhel-8 channel (there's no new build in 4.4.6; you should use the existing released RHBA-2021:0312-02).
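
The shape of that availability check can be sketched as follows. This is a simulation for illustration only: the real task queries dnf against the enabled repos, and the package list here is invented.

```shell
# Simulated availability check; a real host would run
# `dnf list available ovirt-provider-ovn-driver` against its enabled repos.
available_pkgs="vdsm ovirt-host ovirt-hosted-engine-setup"   # made-up repo content
pkg=ovirt-provider-ovn-driver
if printf '%s\n' $available_pkgs | grep -qx "$pkg"; then
  msg="$pkg available"
else
  msg="No package $pkg available."
fi
echo "$msg"   # -> No package ovirt-provider-ovn-driver available.
```

The failure message matches the "failures" entry in the ansible log above.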

Comment 13 Nikolai Sednev 2021-05-02 11:04:46 UTC
(In reply to Michal Skrivanek from comment #12)
> please clarify "Initially addition failed due to"... so did it fail or not?
It failed.
> Did you open a separate bug if it did? Default 4.6 level is tracked in bug
> 1950348 and is verified. 
No.
> Did you use that or a newer version?
Both, it failed for 4.5 and for 4.6.
> for the second part, you can see the error in ansible log 
>         "msg" : "Failed to install some of the specified packages",
>         "failures" : [ "No package ovirt-provider-ovn-driver available." ],
> 
> and indeed you do not have that package available. This package is shipped
> in rhv-4.4-manager-for-rhel-8 channel (there's no new build in 4.4.6, you
> should use existing released RHBA-2021:0312-02)
I'm sticking to what we have had in bob for years and have never had any issues with this.

Comment 15 Michal Skrivanek 2021-05-03 06:43:28 UTC
(In reply to Nikolai Sednev from comment #13)
> (In reply to Michal Skrivanek from comment #12)
> > please clarify "Initially addition failed due to"... so did it fail or not?
> It failed.

OK, and is that a different problem? I may be reading this wrong... you say "initially it failed", which implies it succeeded later on. If that's not the case, then you're only talking about the problem with ovirt-provider-ovn-driver?

> > Did you open a separate bug if it did? Default 4.6 level is tracked in bug
> > 1950348 and is verified. 
> No.
> Did you use that or a newer version?
> Both, it failed for 4.5 and for 4.6.
> > for the second part, you can see the error in ansible log 
> >         "msg" : "Failed to install some of the specified packages",
> >         "failures" : [ "No package ovirt-provider-ovn-driver available." ],
> > 
> > and indeed you do not have that package available. This package is shipped
> > in rhv-4.4-manager-for-rhel-8 channel (there's no new build in 4.4.6, you
> > should use existing released RHBA-2021:0312-02)
> I'm sticking to what we have in bob for years and never had issues
> whatsoever with this.

that may be, but it's unrelated to the actual product.

Comment 16 Nikolai Sednev 2021-05-03 08:27:07 UTC
Initially it failed, then it failed again; it never worked for me. There were two issues. The first, "initial" failure was because the host somehow failed to get attached and go to the active state: the DC's and Host Cluster's compatibility versions were 4.5, and the failure asked to have them bumped up to 4.6, which should not happen anyway, but it did. So I bumped them up and retried adding the additional host, and this time it failed again, but now because of what I already described above: "Ansible host-remove playbook execution failed on host alma04.qa.lab.tlv.redhat.com with message: Task Check if ovirt-provider-ovn-driver is installed failed to execute. Please check logs for more details: /var/log/ovirt-engine/ansible/ansible-20210429180340-ovirt-host-remove_yml-e88bf2e5-c09e-41bf-b25a-145d045b0480.log".

The confusing part comes from "Works just fine on these components:", which was a mistake of mine, as it never actually worked. I moved the bug back to ASSIGNED; please leave it there until the issue is resolved.

Comment 20 Martin Perina 2021-05-04 04:47:23 UTC
This bug doesn't depend on either BZ1956413 or BZ1956487; this can easily be verified by adding a host to a newly installed standalone or hosted engine.

Comment 21 Nikolai Sednev 2021-05-04 07:28:05 UTC
(In reply to Martin Perina from comment #20)
> This bug doesn't depend on either BZ1956413 or BZ1956487, this can be easily
> verified by adding a host to newly installed standalone or hosted engine

"Or hosted engine" means it's blocked, because you can't even deploy HE now due to BZ1956487.
Steps to Reproduce:
1. Install hosted engine.
2. Try to add additional host
Since this bug clearly states to use HE for reproduction, I have no other choice here.

Comment 22 Dana 2021-05-04 07:40:46 UTC
The original bug wasn't related to an HE environment. It was caused by a typo I made, and therefore it can be reproduced in any environment.

Comment 23 Nikolai Sednev 2021-05-04 08:07:10 UTC
The original bug clearly states in its reproduction steps to use HE:
Steps to Reproduce:
1. Install hosted engine.
2. Try to add additional host
What good is verifying on a regular environment if last time it failed for me on HE?

Comment 24 Dana 2021-05-04 08:30:50 UTC
When Jiri reported the bug, I asked to look at his machine.
I was able to see the error in ansible-runner-service.log: a redundant ' made the flow inexecutable once it reached that point.
Since this ' was in a task not related to HE (and I was therefore able to reproduce it on my standalone environment as well), IMO it can be tested on a standalone environment too.
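
The patch itself is in gerrit 114332 and isn't reproduced here, but the failure mode is easy to illustrate: one stray ' in an otherwise valid structured value is enough to break parsing downstream. A hypothetical sketch (the fact name is invented, and JSON stands in for the actual fact format):

```shell
# Hypothetical illustration: a redundant trailing ' makes the value unparseable.
good='{"host_deploy_fact": "4.6"}'
bad='{"host_deploy_fact": "4.6"'\''}'   # same value with one stray ' appended
echo "$good" | python3 -m json.tool >/dev/null 2>&1 && good_rc=ok || good_rc=fail
echo "$bad"  | python3 -m json.tool >/dev/null 2>&1 && bad_rc=ok  || bad_rc=fail
echo "good=$good_rc bad=$bad_rc"   # -> good=ok bad=fail
```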

Comment 25 Nikolai Sednev 2021-05-04 08:59:29 UTC
Due to environmental issues I can't test it on a regular environment at the moment. Last time I tested it on HE it failed, but to be sure I need to retest it on HE to see whether it's caused by the environmental issues or not.

Comment 26 Martin Perina 2021-05-04 09:07:39 UTC
This is not related to hosted or standalone engine; the host deploy flow is the same for both. If you take a look at the fix, it's clearly visible that it can be verified on either hosted engine or standalone, no need for both or just a specific one.

Comment 27 Nikolai Sednev 2021-05-04 09:21:25 UTC
(In reply to Martin Perina from comment #26)
> This is not related to hosted or standalone engine, host deploy flow is the
> same for both hosted engine and standalone. If you take a look at the fix,
> it's clearly visibile that it can be verified on either hosted engine or
> standalone, no need for both or just a specific one.

Again, I can't verify on standalone because of what I wrote above, and HE has to be deployed first to be able to verify this bug, which is blocked by BZ1956487.

Comment 28 Nikolai Sednev 2021-05-05 13:47:16 UTC
Addition of a regular first host to a standalone engine ovirt-engine-setup-4.4.6.6-0.10.el8ev.noarch fails too:
Host serval14.lab.eng.tlv2.redhat.com installation failed. Task Ensure Python3 is installed for CentOS/RHEL8 hosts failed to execute. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20210505162606-serval14.lab.eng.tlv2.redhat.com-dd0c526.log.
5/5/21 4:28:17 PM

Here is an additional iteration on a different host:

Host serval15.lab.eng.tlv2.redhat.com installation failed. Task Start and enable services failed to execute. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20210505163937-serval15.lab.eng.tlv2.redhat.com-644dddcc.log.
5/5/21 4:40:38 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Stop services.
5/5/21 4:40:38 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Stop services.
5/5/21 4:40:38 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. populate service facts.
5/5/21 4:40:35 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Configure host for vdsm.
5/5/21 4:40:29 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Configure LVM filter.
5/5/21 4:40:20 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Remove temp file.
5/5/21 4:40:20 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add QEMU client key file link.
5/5/21 4:40:17 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add QEMU server key file.
5/5/21 4:40:17 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Set QEMU key path.
5/5/21 4:40:17 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add vdsm key files.
5/5/21 4:40:17 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Set vdsm key path.
5/5/21 4:40:17 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add QEMU client cert file link.
5/5/21 4:40:17 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add QEMU server cert file.
5/5/21 4:40:14 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add vdsm cert files.
5/5/21 4:40:14 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add QEMU cacert file.
5/5/21 4:40:11 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add vdsm cacert files.
5/5/21 4:40:11 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Prepare directories for vdsm certificate files.
5/5/21 4:40:08 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Run PKI enroll request for vdsm and QEMU.
5/5/21 4:40:05 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Copy vdsm and QEMU CSRs.
5/5/21 4:40:05 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Generate vdsm and QEMU CSRs.
5/5/21 4:40:02 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Create vdsm and QEMU key temporary files.
5/5/21 4:40:02 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Run vdsm-certificates role.
5/5/21 4:40:02 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Copy vdsm config prefix to vdsm.conf.
5/5/21 4:39:59 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Add adresses to vdsm.conf.
5/5/21 4:39:59 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Create vdsm.conf content.
5/5/21 4:39:59 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Fetch vdsm id.
5/5/21 4:39:59 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Store vdsm id.
5/5/21 4:39:59 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Set vdsm id for x86_64 or i686.
5/5/21 4:39:56 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Detect vdsm id for x86_64 or i686.
5/5/21 4:39:56 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Install dmidecode package.
5/5/21 4:39:56 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Check if vdsm id exists.
5/5/21 4:39:52 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Get packages.
5/5/21 4:39:52 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Install ovirt-host package.
5/5/21 4:39:49 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Install ovirt-hosted-engine-setup package.
5/5/21 4:39:49 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Include packages, vdsmid, pki, configure, and restart services tasks.
5/5/21 4:39:46 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Set facts.
5/5/21 4:39:46 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Ensure Python3 is installed for CentOS/RHEL8 hosts.
5/5/21 4:39:46 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Detect if host is a prebuilt image.
5/5/21 4:39:46 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Parse operating system release.
5/5/21 4:39:43 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Check if vdsm is preinstalled.
5/5/21 4:39:43 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Fetch installed packages.
5/5/21 4:39:43 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Detect host operating system.
5/5/21 4:39:43 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. include_tasks.
5/5/21 4:39:40 PM

Installing Host serval15.lab.eng.tlv2.redhat.com. Gathering Facts.
5/5/21 4:39:40 PM

Ansible host-deploy playbook execution has started on host serval15.lab.eng.tlv2.redhat.com.
5/5/21 4:39:37 PM

Host serval15.lab.eng.tlv2.redhat.com was added by admin@internal-authz.
5/5/21 4:39:36 PM


serval15 ~]# rpm -qa | grep ansible
ovirt-ansible-collection-1.4.2-1.el8ev.noarch
ansible-2.9.18-1.el8ae.noarch

engine ~]#  rpm -qa | grep ansible
ansible-2.9.18-1.el8ae.noarch
python3-ansible-runner-1.4.6-2.el8ar.noarch
ansible-runner-service-1.0.7-1.el8ev.noarch
ovirt-ansible-collection-1.4.2-1.el8ev.noarch


Logs from engine and both hosts attached.
Moving back to assigned.

Comment 29 Nikolai Sednev 2021-05-05 13:51:48 UTC
Created attachment 1779817 [details]
sosreport-serval15-2021-05-05-fvxrlbh.tar.xz

Comment 30 Nikolai Sednev 2021-05-05 13:55:09 UTC
Created attachment 1779818 [details]
sosreport from serval14

Comment 31 Nikolai Sednev 2021-05-05 13:56:07 UTC
Created attachment 1779819 [details]
ovirt-host-deploy-ansible-20210505162606-serval14.lab.eng.tlv2.redhat.com-dd0c526.log

Comment 32 Nikolai Sednev 2021-05-05 13:57:08 UTC
Created attachment 1779820 [details]
sosreport from the engine

Comment 33 Nikolai Sednev 2021-05-05 14:08:06 UTC
On the hosts I also saw:
May  5 16:52:44 serval14 su[13685]: (to vdsm) root on pts/0
May  5 16:52:44 serval14 kernel: L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
May  5 16:37:42 serval14 nm-dispatcher[1926]: Cannot open /run/vdsm/dhcp-monitor.sock socket, vdsmd is not running
May  5 16:37:42 serval14 nm-dispatcher[1926]: req:6 'dhcp4-change' [eno1], "/etc/NetworkManager/dispatcher.d/dhcp_monitor.py": complete: failed with Script '/etc/NetworkManager/dispatcher.d/dhcp_monitor.py' exited with error status 1.
May  5 16:37:42 serval14 NetworkManager[1677]: <warn>  [1620221862.7379] dispatcher: (6) /etc/NetworkManager/dispatcher.d/dhcp_monitor.py failed (failed): Script '/etc/NetworkManager/dispatcher.d/dhcp_monitor.py' exited with error status 1.

vdsm is not running on the hosts and seems to have failed to start.

On the hosts I see openvswitch2.11-2.11.0-5.el8fdp.x86_64. This might be related to https://bugzilla.redhat.com/show_bug.cgi?id=1956487#c9.
I'm updating to the latest openvswitch version and will retry.

Comment 34 Nikolai Sednev 2021-05-05 15:20:47 UTC
Tested now with openvswitch2.11.x86_64 2.11.3-87.el8fdp from rhel-8-nightly-fdp.
Addition of a regular host to the standalone engine was successful, as the UI reported:
Ansible host-deploy playbook execution has successfully finished on host serval14.lab.eng.tlv2.redhat.com.
5/5/21 5:21:16 PM

But during the addition the host gets rebooted, and after it is back online it takes too long for the engine to recognize that the host is already up and running and to change its status to Up in the UI. See here:
Status of host serval14.lab.eng.tlv2.redhat.com was set to Up.
5/5/21 5:31:51 PM

The host came online at:
serval14 ~]# uptime
17:33:51 up 9 min,  1 user,  load average: 0.08, 0.23, 0.19

That is a wait of ~13 minutes between the host coming up and the engine recognizing it.

I also tried to add a second additional regular host, alma03, and it worked just the same way as serval14: alma03 was restarted and the addition took about the same time.
Ansible host-deploy playbook execution has successfully finished on host alma03.qa.lab.tlv.redhat.com.
5/5/21 6:07:24 PM

alma03 ~]# uptime
 18:10:19 up 0 min,  1 user,  load average: 3.39, 0.86, 0.29

Status of host alma03.qa.lab.tlv.redhat.com was set to Up.
5/5/21 6:17:52 PM

A delay of 7 minutes between the host coming up and the engine recognizing that it is up and running.
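
For reference, the gap from playbook finish to Up status can be computed straight from the alma03 UI timestamps quoted above (GNU date assumed; this measures finish-to-Up, which is slightly longer than the boot-to-Up delay of 7 minutes):

```shell
# Playbook finished at 6:07:24 PM, status set to Up at 6:17:52 PM (comment times).
t_done=$(date -d '2021-05-05 18:07:24' +%s)
t_up=$(date -d '2021-05-05 18:17:52' +%s)
echo "gap: $(( (t_up - t_done) / 60 )) minutes"   # -> gap: 10 minutes
```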


Works with these components on engine:
ovirt-engine-setup-4.4.6.6-0.10.el8ev.noarch
Linux 4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.4 (Ootpa)


On hosts:
vdsm-4.40.60.6-1.el8ev.x86_64
libvirt-7.0.0-13.module+el8.4.0+10604+5608c2b4.x86_64
qemu-kvm-5.2.0-15.module+el8.4.0+10650+50781ca0.x86_64
sanlock-3.8.3-1.el8.x86_64
libvirt-lock-sanlock-7.0.0-13.module+el8.4.0+10604+5608c2b4.x86_64
Linux 4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.4 (Ootpa)

Comment 38 errata-xmlrpc 2021-06-01 13:23:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: RHV Manager security update (ovirt-engine) [ovirt-4.4.6]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2179

