Bug 2170890 - rbd storage pool does not start via autostart
Summary: rbd storage pool does not start via autostart
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Meina Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-02-17 14:47 UTC by Oliver Freyermuth
Modified: 2023-03-01 14:36 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-03-01 14:35:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker RHELPLAN-149080 (last updated 2023-02-17 14:49:15 UTC)

Description Oliver Freyermuth 2023-02-17 14:47:22 UTC
Description of problem:
Creating a storage pool for Ceph RBD storage and setting it to "autostart" does not cause the pool to start automatically when the libvirtd service is started.
It can still be started manually, though. 

Version-Release number of selected component (if applicable):
libvirt-8.0.0-10.1.module+el8.7.0+1125+fc135c6d

How reproducible:
always

Steps to Reproduce:
1. Install the libvirt package and set up a Ceph RBD storage cluster.
2. Create a new pool named `rbd`, with the following XML (for example):
```
<pool type='rbd'>
  <name>rbd</name>
  <uuid>329f5c06-0bae-4dc1-8a72-e3b6ee1bd852</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='mon001.example.de'/>
    <host name='mon002.example.de'/>
    <host name='mon003.example.de'/>
    <name>rbd</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='207613ac-c9e9-48ff-9c12-c618475e029d'/>
    </auth>
  </source>
</pool>
```
3. Note that a corresponding libvirt secret holding the cephx auth key needs to exist, too (a minimal sketch follows after these steps).
4. Configure that pool to start automatically, via `virsh pool-autostart rbd`. 
5. Confirm the pool can be started fine:
```
# virsh pool-start rbd
Pool rbd started
# virsh pool-list --all
 Name   State    Autostart
----------------------------
 rbd    active   yes
```
6. Reboot the machine or call `systemctl restart libvirtd`. 
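
For reference, a minimal sketch of steps 2-5 as shell commands (the secret XML, the file names, and the key file /root/client.libvirt.key are illustrative assumptions, not taken from this report):
```
# Define the cephx secret referenced by the pool XML in step 2.
# /root/client.libvirt.key is assumed to hold the base64 key, e.g. as
# printed by `ceph auth get-key client.libvirt`.
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>207613ac-c9e9-48ff-9c12-c618475e029d</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
virsh secret-define secret.xml
virsh secret-set-value --secret 207613ac-c9e9-48ff-9c12-c618475e029d \
    --base64 "$(cat /root/client.libvirt.key)"

# Define the pool from the XML in step 2, mark it for autostart and start it:
virsh pool-define pool-rbd.xml
virsh pool-autostart rbd
virsh pool-start rbd
```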

Actual results:
Pool stays inactive:
```
# virsh pool-list --all
 Name   State      Autostart
------------------------------
 rbd    inactive   yes
```
However, it can be started manually right away:
```
# virsh pool-start rbd
Pool rbd started
# virsh pool-list --all
 Name   State    Autostart
----------------------------
 rbd    active   yes
```

Expected results:
Pool autostarts, i.e.:
```
# virsh pool-list --all
 Name   State    Autostart
----------------------------
 rbd    active   yes
```

Additional info:
The same configuration worked fine on Red Hat Enterprise Linux 7.9.

Comment 1 Oliver Freyermuth 2023-02-18 18:39:12 UTC
While the pool staying inactive does not seem to prevent starting VMs, it prevents tooling such as Foreman (via fog-libvirt) from querying the volumes and their allocation.
I have also reported the issue upstream with libvirt at:
https://gitlab.com/libvirt/libvirt/-/issues/448
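
To make the impact on such tooling concrete, volume queries fail while the pool is inactive (illustrative virsh commands, not output captured for this report):
```
# With the pool inactive, listing its volumes fails, so tools that poll
# volume allocation through the libvirt API get errors as well:
virsh vol-list rbd --details    # fails while the pool is inactive
virsh pool-start rbd            # after a manual start ...
virsh vol-list rbd --details    # ... the volumes can be queried again
```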

Comment 2 Michal Privoznik 2023-02-22 13:38:18 UTC
I am gonna repeat what I said there, for completeness. Autostart of objects (pools, domains, etc.) is specifically disabled for the 'systemctl restart $daemon' case. After a reboot, objects should autostart (though this may not work well with socket activation). What we should do here is treat running objects as shutdown inhibitors. We already do that for QEMU domains (virtqemud), but not for virtstoraged. As an optimization, we may ignore pools that are automatically running anyway (e.g. type='dir').
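
Until such a change lands, one possible interim workaround (an assumption of this write-up, not something proposed in this bug) is a small oneshot unit that starts the pool explicitly after libvirtd comes up; the unit name and the pool name `rbd` are illustrative:
```
# Assumed example workaround, not an official recommendation:
cat > /etc/systemd/system/start-rbd-pool.service <<'EOF'
[Unit]
Description=Start the rbd libvirt storage pool after libvirtd
After=libvirtd.service
Requires=libvirtd.service

[Service]
Type=oneshot
# The "-" prefix makes systemd ignore the error if the pool is already active.
ExecStart=-/usr/bin/virsh pool-start rbd

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now start-rbd-pool.service
```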

Comment 3 Peter Krempa 2023-03-01 14:35:59 UTC
Since this is an upstream issue, it will get to RHEL eventually once it's fixed upstream.

