Description of problem:

The cephvolumescan actor fails with the following message:

=========================================================== ERRORS ============================================================

2022-11-08 20:48:00.373161 [ERROR] Actor: cephvolumescan
Message: Could not retrieve the ceph volumes list
Summary:
    Details: An exception raised while retrieving ceph volumes
    Command ['podman', 'exec', 'neutron-dnsmasq-qdhcp-6de658ad-7a51-4b5f-b3a5-2e0bcd945356', 'ceph-volume', 'lvm', 'list', '--format', 'json'] failed with exit code 127.

======================================================== END OF ERRORS ========================================================

How reproducible:
Always

Steps to Reproduce:
1. Run leapp upgrade or preupgrade on a host where Ceph containers are mixed with other, non-Ceph containers.

Actual results:
The cephvolumescan actor fails.

Expected results:
The cephvolumescan actor passes.

Additional info:
Our environment has a lot of containers that are not Ceph related. Some of them even have the ceph binary (/usr/bin/ceph) and /etc/ceph/ceph.conf installed for backup purposes, but they do not ship /usr/sbin/ceph-volume because they do not need it. cephvolumescan should only attempt the scan when ceph-volume is present in the container and on the host.
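For illustration only, a minimal sketch of the kind of guard this report asks for: probe the container for the ceph-volume binary before invoking it, so non-Ceph containers are skipped instead of failing the scan with exit code 127. The function names and the hardcoded /usr/sbin/ceph-volume path are assumptions for this sketch, not the actual leapp actor code or the merged upstream patch.

import json
import subprocess

# Path assumed for this sketch; the real actor may locate the binary differently.
CEPH_VOLUME_BIN = '/usr/sbin/ceph-volume'


def container_has_ceph_volume(container):
    """Return True if the ceph-volume binary is present inside the container.

    `podman exec ... test -x` exits non-zero when the file is missing or not
    executable, so containers without ceph-volume are simply skipped.
    """
    result = subprocess.run(
        ['podman', 'exec', container, 'test', '-x', CEPH_VOLUME_BIN],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def list_ceph_volumes(container):
    """Run `ceph-volume lvm list` only in containers that actually ship it."""
    if not container_has_ceph_volume(container):
        return None  # not a Ceph OSD container, nothing to scan
    out = subprocess.run(
        ['podman', 'exec', container, 'ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True,  # raises on a real ceph-volume failure
    )
    return json.loads(out.stdout)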
The upstream PR has been merged. Waiting for the QA ack so the build with the fix can be delivered in 8.4. For all other RHEL releases, builds with the fix will be delivered for CTC2.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (leapp-repository bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:2839