Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
The volume list remains empty when no `ceph-osd` container is found and the `cephvolumescan` actor no longer fails
Previously, if Ceph containers ran collocated with other containers but no `ceph-osd` container was present among them, the process tried to retrieve the volume list from a non-Ceph container, which could not work. As a result, the `cephvolumescan` actor failed and the upgrade did not complete.
With this fix, if no `ceph-osd` container is found, the volume list remains empty and the `cephvolumescan` actor no longer fails; a minimal sketch of the resulting behavior follows.
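The sketch below illustrates, under stated assumptions, what the fixed behavior amounts to: only a container whose name contains `ceph-osd` is queried, and the volume list stays empty when no such container exists. The helper names and the `podman ps` parsing are illustrative, not the actual leapp actor code.

```python
# Minimal sketch of the fixed selection logic; helper names and the `podman ps`
# parsing are assumptions, not the actual leapp actor code.
import json
import subprocess


def _running_container_names():
    """Return the names of running containers (assumes podman is installed)."""
    out = subprocess.run(
        ['podman', 'ps', '--format', '{{.Names}}'],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def get_ceph_volumes():
    """Query `ceph-volume` only inside a ceph-osd container; stay empty otherwise."""
    osd_containers = [n for n in _running_container_names() if 'ceph-osd' in n]
    if not osd_containers:
        # No ceph-osd container found: leave the volume list empty instead of
        # probing an unrelated container (the cause of the exit code 127 failure).
        return {}
    result = subprocess.run(
        ['podman', 'exec', osd_containers[0],
         'ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True,
    )
    # ceph-volume prints a JSON mapping of OSD ids to their volumes.
    return json.loads(result.stdout)
```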
Description by Sergii Golovatiuk, 2022-11-09 17:33:04 UTC
Description of problem:
The `cephvolumescan` actor fails with the following message:
===========================================================
ERRORS
============================================================
2022-11-08 20:48:00.373161 [ERROR] Actor: cephvolumescan
Message: Could not retrieve the ceph volumes list
Summary:
Details: An exception raised while retrieving ceph volumes Command ['podman', 'exec', 'neutron-dnsmasq-qdhcp-6de658ad-7a51-4b5f-b3a5-2e0bcd945356', 'ceph-volume', 'lvm', 'list', '--format', 'json'] failed with exit code 127.
============================================================
END OF ERRORS
============================================================
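Exit code 127 conventionally means the requested executable was not found, which is exactly what happens when `ceph-volume` is invoked inside the non-Ceph `neutron-dnsmasq` container named in the log. An illustrative snippet showing how that condition can be detected; the container name is copied from the error above, everything else is an assumption:

```python
# Illustrative only: shows why the actor hit exit code 127 when running
# ceph-volume in a non-Ceph container. The container name is copied from
# the log above; the rest is an assumption for demonstration.
import subprocess

cmd = ['podman', 'exec',
       'neutron-dnsmasq-qdhcp-6de658ad-7a51-4b5f-b3a5-2e0bcd945356',
       'ceph-volume', 'lvm', 'list', '--format', 'json']
proc = subprocess.run(cmd, capture_output=True, text=True)
if proc.returncode == 127:
    # 127 is the "command not found" convention: ceph-volume is not installed
    # in this container, so it cannot be the Ceph OSD container.
    print('ceph-volume not present in this container; skip the volume scan')
```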
How reproducible:
Always
Steps to Reproduce:
1. Run `leapp upgrade` or `leapp preupgrade` on a host where Ceph containers run alongside other, non-Ceph containers.
Actual results:
Failed cephvolumescan actor
Expected results:
Passed cephvolumescan actor
Additional info:
Our environment has a lot of containers that are not Ceph related. Some of them even have the ceph binary (/usr/bin/ceph) and /etc/ceph/ceph.conf installed to run backups, but they do not have /usr/sbin/ceph-volume because they do not need it.
cephvolumescan should scan only when ceph-volume is present both in the container and on the host; a rough sketch of such a guard follows.
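A rough sketch of the reporter's suggestion, assuming a probe via `podman exec test -x`; the helper names and the probing approach are assumptions, not the fix that was merged upstream:

```python
# Rough sketch of the reporter's suggestion: scan only when ceph-volume exists
# both on the host and inside the candidate container. Helper names and the
# probing approach are assumptions, not the merged upstream fix.
import os
import subprocess


def container_has_ceph_volume(container_name):
    """Probe for the ceph-volume binary inside a container via `podman exec test -x`."""
    probe = subprocess.run(
        ['podman', 'exec', container_name, 'test', '-x', '/usr/sbin/ceph-volume'],
        capture_output=True,
    )
    return probe.returncode == 0


def should_scan(container_name):
    """Scan only when ceph-volume is present on the host and in the container."""
    host_has_it = os.access('/usr/sbin/ceph-volume', os.X_OK)
    return host_has_it and container_has_ceph_volume(container_name)
```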
The upstream PR has been merged. Waiting for the QA ack to be able to deliver the build in 8.4 with the fix. For all other RHEL releases, builds with the fix will be delivered for CTC2.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (leapp-repository bug fix and enhancement update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:2839