Bug 2228223
| Summary: | libvirt storage pool goes inactive | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | schandle |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| libvirt sub component: | Storage | QA Contact: | Meina Li <meili> |
| Status: | POST --- | Docs Contact: | |
| Severity: | high | | |
| Priority: | unspecified | CC: | hreitz, lmen, pkrempa, vgoyal, virt-maint |
| Version: | 9.2 | Keywords: | Triaged |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
schandle
2023-08-01 18:40:31 UTC
Peter, can you take a look at the logs and see what this might be about (or who might have an idea)?

Can reproduce this bug on:

libvirt-9.0.0-10.2.el9_2.x86_64
qemu-kvm-7.2.0-14.el9_2.3.x86_64

and

libvirt-9.5.0-4.el9.x86_64
qemu-kvm-8.0.0-10.el9.x86_64

Test Steps: Just like the steps in the description.

Additional info:
1. After restarting libvirtd/virtstoraged, the status of the pool becomes inactive.
2. After restarting virtqemud, the status does not change.

The issue is that the convenience directory for the LVs of a VG in '/dev/' is not created for an empty VG. Since libvirt checked for that directory first, it assumed the pool did not exist. I've posted a patch: https://listman.redhat.com/archives/libvir-list/2023-August/241150.html

Fixed upstream:
commit fa1a54baa59d244289ce666f9dc52d9eabca47f1
Author: Peter Krempa <pkrempa>
Date:   Tue Aug 8 15:53:53 2023 +0200

    virStorageBackendLogicalCheckPool: Properly mark empty logical pools as active

    The '/dev' filesystem convenience directory for an LVM volume group is
    not created when the volume group is empty.

    The logic in 'virStorageBackendLogicalCheckPool', which is used to see
    whether a pool is active, was first checking for the presence of the
    directory, which failed for an empty VG.

    Since the second step is virStorageBackendLogicalMatchPoolSource, which
    checks the mapping between the configured PVs and the VG, we can simply
    rely on that function to also check the presence of the pool.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2228223
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

v9.6.0-26-gfa1a54baa5
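The logic of the fix can be illustrated with a simplified model (this is not libvirt code; the function and variable names below are hypothetical stand-ins for the C functions named in the commit). The old order checked the `/dev/<vg>` directory first, so an empty VG, which LVM knows about but which has no convenience directory, was wrongly reported inactive; the fixed order relies only on matching the VG against the configured source.

```python
def match_pool_source(vg_name, known_vgs):
    """Stand-in for virStorageBackendLogicalMatchPoolSource:
    the pool is present if LVM reports the volume group."""
    return vg_name in known_vgs

def check_pool_old(vg_name, dev_dirs, known_vgs):
    # Buggy order: directory check first. An empty VG has no
    # /dev/<vg> convenience directory, so this returns False.
    if f"/dev/{vg_name}" not in dev_dirs:
        return False
    return match_pool_source(vg_name, known_vgs)

def check_pool_fixed(vg_name, dev_dirs, known_vgs):
    # Fixed order: rely solely on the source match, which also
    # covers presence of the pool.
    return match_pool_source(vg_name, known_vgs)

# An empty VG is known to LVM, but only VGs with LVs get a /dev directory.
known_vgs = {"vg_empty", "vg_with_lvs"}
dev_dirs = {"/dev/vg_with_lvs"}

print(check_pool_old("vg_empty", dev_dirs, known_vgs))    # False -> bug
print(check_pool_fixed("vg_empty", dev_dirs, known_vgs))  # True  -> fixed
```

For a VG that does contain LVs, both orders agree, which is why the reordering is safe.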