This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue is found under "Links", has a little "two-footprint" icon next to it, and directs you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link is available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2228223 - libvirt storage pool goes inactive
Summary: libvirt storage pool goes inactive
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Meina Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-01 18:40 UTC by schandle
Modified: 2023-09-27 16:50 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-22 16:56:11 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-7419 0 None Migrated None 2023-09-22 16:56:07 UTC
Red Hat Issue Tracker RHELPLAN-164064 0 None None None 2023-08-01 18:42:18 UTC
Red Hat Issue Tracker RHELPLAN-164065 0 None None None 2023-08-01 18:42:12 UTC

Description schandle 2023-08-01 18:40:31 UTC
Description of problem:
When configuring a logical storage pool with virsh, the storage pool automatically goes into an "inactive" state after a short period of time (5-10 minutes).

Version-Release number of selected component (if applicable):
kernel-5.14.0-284.11.1.el9_2.x86_64
libvirt-9.0.0-10.2.el9_2.x86_64
lvm2-2.03.17-7.el9.x86_64


How reproducible:
100% 

Steps to Reproduce:
1. install qemu-kvm libvirt virt-install virt-viewer
2. virsh pool-define-as storage-pool logical --source-dev /dev/sda --target=/dev/storage-pool
3. virsh pool-build storage-pool
4. virsh pool-start storage-pool
5. virsh pool-autostart storage-pool
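
For reference, the state of the underlying volume group can be checked right after step 5 (the device and pool names follow the steps above). On an affected system the VG exists but contains no logical volumes, and no /dev/storage-pool directory is present:

# vgs storage-pool
# lvs storage-pool
# ls -d /dev/storage-pool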

Actual results:

# date ; virsh pool-list --all
Tue Aug  1 07:49:35 AM EDT 2023
 Name                            State    Autostart
-----------------------------------------------------
 storage-pool                    active   yes

# date ; virsh pool-list --all
Tue Aug  1 07:55:03 AM EDT 2023
 Name                            State      Autostart
-------------------------------------------------------
 storage-pool                    inactive   yes


Expected results:
The storage pool should stay active, or be autostarted if virtstoraged is restarted.

Additional info:
It appears that the virt*d.service units are being restarted (see the journal check after the log excerpt below):
~~~
Aug  1 07:55:03 hostname systemd[1]: Starting Virtualization qemu daemon...
Aug  1 07:55:03 hostname systemd[1]: Started Virtualization qemu daemon.
Aug  1 07:55:03 hostname systemd[1]: Starting Virtualization storage daemon...
Aug  1 07:55:03 hostname systemd[1]: Started Virtualization storage daemon.
Aug  1 07:55:26 hostname systemd[1]: Starting Virtualization network daemon...
Aug  1 07:55:26 hostname systemd[1]: Started Virtualization network daemon.
Aug  1 07:55:27 hostname systemd[1]: Starting Virtualization nwfilter daemon...
Aug  1 07:55:27 hostname systemd[1]: Started Virtualization nwfilter daemon.
Aug  1 07:55:27 hostname systemd[1]: Starting Virtualization nodedev daemon...
Aug  1 07:55:27 hostname systemd[1]: Started Virtualization nodedev daemon.
Aug  1 07:55:37 hostname systemd[89815]: Starting Virtual filesystem service...
Aug  1 07:55:37 hostname systemd[89815]: Started Virtual filesystem service.
~~~
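
To confirm which libvirt daemons restarted in that window, the systemd journal can be queried directly (a quick check, assuming the modular daemons shown in the excerpt above; the time range is only an example):

# journalctl -u virtstoraged -u virtqemud -u virtnetworkd --since "07:50" --until "08:00" | grep -Ei 'starting|started|stopping|stopped'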

Comment 2 Hanna Czenczek 2023-08-02 08:38:17 UTC
Peter, can you take a look at the logs and see what this might be about (or who might have an idea)?

Comment 3 Meina Li 2023-08-03 06:50:51 UTC
Can reproduce this bug on:

libvirt-9.0.0-10.2.el9_2.x86_64
qemu-kvm-7.2.0-14.el9_2.3.x86_64
and
libvirt-9.5.0-4.el9.x86_64
qemu-kvm-8.0.0-10.el9.x86_64


Test Steps:
The same as the steps in the description.

Additional info:
1. After restarting libvirtd/virtstoraged, the pool status becomes inactive (see the snippet below).
2. After restarting virtqemud, the pool status does not change.
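
A minimal way to observe this on an affected build, using the pool from the description:

# systemctl restart virtstoraged
# virsh pool-list --all
 Name           State      Autostart
--------------------------------------
 storage-pool   inactive   yes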

Comment 4 Peter Krempa 2023-08-08 14:01:08 UTC
The issue is that the '/dev/' convenience directory for the LVs of a VG is not created while the VG is empty. Since libvirt first checked for that directory, it assumed the pool did not exist.

I've posted a patch:

https://listman.redhat.com/archives/libvir-list/2023-August/241150.html
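
The underlying LVM behaviour is easy to demonstrate on its own (the device and VG names below are examples only): an empty VG has no /dev/<vg> directory until its first LV is created.

# vgcreate testvg /dev/sdb
# ls -d /dev/testvg
ls: cannot access '/dev/testvg': No such file or directory
# lvcreate -L 100M -n lv0 testvg
# ls /dev/testvg
lv0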

Comment 5 Peter Krempa 2023-08-17 11:58:53 UTC
Fixed upstream:

commit fa1a54baa59d244289ce666f9dc52d9eabca47f1
Author: Peter Krempa <pkrempa>
Date:   Tue Aug 8 15:53:53 2023 +0200

    virStorageBackendLogicalCheckPool: Properly mark empty logical pools as active
    
    The '/dev' filesystem convenience directory for a LVM volume group is
    not created when the volume group is empty.
    
    The logic in 'virStorageBackendLogicalCheckPool' which is used to see
    whether a pool is active was first checking presence of the directory,
    which failed for an empty VG.
    
    Since the second step is virStorageBackendLogicalMatchPoolSource which
    is checking mapping between configured PVs and the VG, we can simply
    rely on the function to also check presence of the pool.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2228223
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

v9.6.0-26-gfa1a54baa5
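
In shell terms (an illustrative analogy only, not the actual C code; the pool and device names are those from the description), the old check amounted to testing for the target directory, which is absent for an empty VG, while the fixed check relies on the PV-to-VG mapping, which is visible even when the VG is empty:

# test -d /dev/storage-pool && echo "dir present" || echo "dir missing"
dir missing
# pvs --noheadings -o pv_name,vg_name /dev/sda
  /dev/sda   storage-pool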

Comment 6 Meina Li 2023-08-30 08:00:27 UTC
Pre-verified Version:
libvirt-9.7.0-1.fc37.x86_64
qemu-kvm-7.0.0-15.fc37.x86_64


Pre-verified Steps:
1. Define and start the logical pool.
# virsh pool-define-as storage-pool logical --source-dev /dev/sdb --target=/dev/storage-pool
Pool storage-pool defined
# virsh pool-build storage-pool
Pool storage-pool built
# virsh pool-start storage-pool
Pool storage-pool started
# virsh pool-autostart storage-pool
Pool storage-pool marked as autostarted
2. Check the pool status.
# date; virsh pool-list --all
Wed Aug 30 07:41:56 AM UTC 2023
 Name           State    Autostart
------------------------------------
 images         active   yes
 storage-pool   active   yes
3. After a while, check the pool status again.
# date; virsh pool-list --all
Wed Aug 30 07:49:07 AM UTC 2023
 Name           State    Autostart
------------------------------------
 images         active   yes
 storage-pool   active   yes
4. Restart the virtqemud/virtstoraged/libvirtd and check the pool status.
# systemctl restart virtqemud
# virsh pool-list --all
 Name           State    Autostart
------------------------------------
 images         active   yes
 storage-pool   active   yes

Comment 12 RHEL Program Management 2023-09-22 16:55:40 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 13 RHEL Program Management 2023-09-22 16:56:11 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

