Bug 2228223
Summary: | libvirt storage pool goes inactive | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 9 | Reporter: | schandle |
Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
libvirt sub component: | Storage | QA Contact: | Meina Li <meili> |
Status: | CLOSED MIGRATED | Docs Contact: | |
Severity: | high | | |
Priority: | unspecified | CC: | hreitz, jsuchane, lmen, pkrempa, vgoyal, virt-maint |
Version: | 9.2 | Keywords: | MigratedToJIRA, Triaged |
Target Milestone: | rc | | |
Target Release: | --- | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2023-09-22 16:56:11 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description schandle 2023-08-01 18:40:31 UTC
Peter, can you take a look at the logs and see what this might be about (or who might have an idea)?

Can reproduce this bug on:
libvirt-9.0.0-10.2.el9_2.x86_64
qemu-kvm-7.2.0-14.el9_2.3.x86_64
and
libvirt-9.5.0-4.el9.x86_64
qemu-kvm-8.0.0-10.el9.x86_64

Test Steps:
Same as the steps in the description.

Additional info:
1. After restarting libvirtd/virtstoraged, the status of the pool becomes inactive.
2. After restarting virtqemud, the status does not change.

The issue is that the convenience directory in '/dev/' for the LVs of a VG is not created for an empty VG. Since libvirt checked for that directory first, it assumed the pool does not exist (see the illustrative shell session after the commit message below). I've posted a patch:

https://listman.redhat.com/archives/libvir-list/2023-August/241150.html

Fixed upstream:

commit fa1a54baa59d244289ce666f9dc52d9eabca47f1
Author: Peter Krempa <pkrempa>
Date:   Tue Aug 8 15:53:53 2023 +0200

    virStorageBackendLogicalCheckPool: Properly mark empty logical pools as active

    The '/dev' filesystem convenience directory for a LVM volume group is
    not created when the volume group is empty.

    The logic in 'virStorageBackendLogicalCheckPool' which is used to see
    whether a pool is active was first checking presence of the directory,
    which failed for an empty VG.

    Since the second step is virStorageBackendLogicalMatchPoolSource which
    is checking mapping between configured PVs and the VG, we can simply
    rely on the function to also check presence of the pool.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2228223
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

v9.6.0-26-gfa1a54baa5
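For illustration, here is a minimal shell sketch of the root cause described above. This session is not part of the original report: the device /dev/sdb and the VG name storage-pool are taken from the verification steps later in this bug, the volume name vol1 and the size are made up, and exact output varies by LVM version. It shows that an empty volume group is fully known to LVM even though the /dev/<vgname> convenience directory does not exist yet, which is why matching the configured PVs against the VG is a more reliable check than testing for the target directory:

# pvcreate /dev/sdb                      # initialize the PV, roughly what 'virsh pool-build' does
  Physical volume "/dev/sdb" successfully created.
# vgcreate storage-pool /dev/sdb         # create the VG; it contains no LVs yet
  Volume group "storage-pool" successfully created
# ls /dev/storage-pool                   # the convenience directory is absent for an empty VG
ls: cannot access '/dev/storage-pool': No such file or directory
# vgs storage-pool                       # ...yet LVM reports the VG, so the pool does exist
  VG             #PV #LV #SN Attr   VSize VFree
  storage-pool     1   0   0 wz--n- ...   ...
# lvcreate -L 512M -n vol1 storage-pool  # the directory only appears once a volume is created
  Logical volume "vol1" created.
# ls /dev/storage-pool
vol1

Before the fix, virStorageBackendLogicalCheckPool treated the missing /dev/storage-pool directory as "pool not present", so an empty logical pool was reported inactive after a daemon restart; with the fix it relies on virStorageBackendLogicalMatchPoolSource, so the empty pool stays active.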
Pre-verified Version:
libvirt-9.7.0-1.fc37.x86_64
qemu-kvm-7.0.0-15.fc37.x86_64

Pre-verified Steps:

1. Define and start the logical pool.
# virsh pool-define-as storage-pool logical --source-dev /dev/sdb --target=/dev/storage-pool
Pool storage-pool defined
# virsh pool-build storage-pool
Pool storage-pool built
# virsh pool-start storage-pool
Pool storage-pool started
# virsh pool-autostart storage-pool
Pool storage-pool marked as autostarted

2. Check the pool status.
# date; virsh pool-list --all
Wed Aug 30 07:41:56 AM UTC 2023
 Name           State    Autostart
------------------------------------
 images         active   yes
 storage-pool   active   yes

3. After a while, check the pool status again.
# date; virsh pool-list --all
Wed Aug 30 07:49:07 AM UTC 2023
 Name           State    Autostart
------------------------------------
 images         active   yes
 storage-pool   active   yes

4. Restart virtqemud/virtstoraged/libvirtd and check the pool status.
# systemctl restart virtqemud
# virsh pool-list --all
 Name           State    Autostart
------------------------------------
 images         active   yes
 storage-pool   active   yes

Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.