Bug 1843089
| Summary: | virsh storage pools from HE deployments are not cleaned up | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | amashah |
| Component: | ovirt-hosted-engine-setup | Assignee: | Asaf Rachmani <arachman> |
| Status: | CLOSED ERRATA | QA Contact: | Wei Wang <weiwang> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 4.3.9 | CC: | arachman, emarcus, lsurette, michal.skrivanek, mtessun |
| Target Milestone: | ovirt-4.4.1 | | |
| Target Release: | 4.4.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-ansible-hosted-engine-setup-1.1.5 | Doc Type: | Bug Fix |
| Doc Text: | Before this release, local storage pools were created but were not deleted during Self-Hosted Engine deployment, causing storage pool leftovers to remain. In this release, the cleanup is performed properly following Self-Hosted Engine deployment, and there are no storage pool leftovers. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-08-04 13:23:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Integration | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
amashah
2020-06-02 17:39:06 UTC
Is this the result of a failed hosted engine setup?

(In reply to Sandro Bonazzola from comment #1)
> Is this the result of a failed hosted engine setup?

I *think* it happens whether the deployment fails or succeeds, but there are likely several leftover pools because of several failed deployment attempts. One of the Ansible plays cleans up the local VM directory [1], but it does not remove the defined storage pools.

[1] https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/clean_localvm_dir.yml

Test Version:
rhvh-4.3.10.1-0.20200513.0
cockpit-ovirt-dashboard-0.13.10-1.el7ev.noarch
libvirt-4.5.0-33.el7_8.1.x86_64

Test Steps:
1. Deploy hosted engine with RHVH.
2. # systemctl restart libvirtd
3. # grep autostart /var/log/messages

Result:

Jun 11 16:36:11 hp-dl388g9-04 libvirtd: 2020-06-11 08:36:11.193+0000: 20930: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool '411445cd-76ac-4df7-b20c-5c1da1af8681': cannot open directory '/var/tmp/localvmsrsE67/images/411445cd-76ac-4df7-b20c-5c1da1af8681': No such file or directory
Jun 11 16:36:11 hp-dl388g9-04 libvirtd: 2020-06-11 08:36:11.193+0000: 20930: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool 'localvmsrsE67': cannot open directory '/var/tmp/localvmsrsE67': No such file or directory

QE can reproduce this issue, ack+. QE will verify this issue once the new build is available.

Tested with RHVH-4.4-20200701.0-RHVH-x86_64-dvd1.iso and ovirt-ansible-hosted-engine-setup-1.1.5-1.el8ev.noarch; the bug is fixed. Moving to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV Engine and Host Common Packages 4.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3309
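The gap described above is that clean_localvm_dir.yml removes the local VM directory under /var/tmp but never undefines the transient storage pools libvirt created for it, so they linger and fail to autostart after a libvirtd restart. A cleanup task along the following lines could close that gap. This is a minimal sketch under stated assumptions, not the actual change shipped in ovirt-ansible-hosted-engine-setup-1.1.5: the `localvm` name filter is illustrative (as the logs show, a companion pool named by image UUID may also be left behind), and virsh is driven via the generic command/shell modules rather than any dedicated libvirt module.

```yaml
# Hypothetical tasks to remove leftover HE-deployment storage pools.
# Assumption: leftover pools are identifiable by name; adjust the filter
# to also catch UUID-named pools belonging to the local VM directory.
- name: List all libvirt storage pools (active and inactive)
  command: virsh pool-list --all --name
  register: he_pool_list
  changed_when: false

- name: Destroy and undefine leftover local VM storage pools
  shell: |
    virsh pool-destroy "{{ item }}" || true
    virsh pool-undefine "{{ item }}"
  loop: "{{ he_pool_list.stdout_lines | select('match', '^localvm') | list }}"
```

The same cleanup can be done by hand on an affected 4.3 host: `virsh pool-destroy <pool>` (ignoring failures for already-inactive pools) followed by `virsh pool-undefine <pool>` for each leftover pool reported by `virsh pool-list --all`.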