Description of problem:
After running the docker_to_podman playbook against a pre-existing RHCS3 cluster, the RGW instances don't come back up.
The systemd unit assumes /var/lib/ceph/radosgw/ceph-rgw.overcloud-controller-0/EnvironmentFile exists, but it doesn't on pre-existing RHCS3 clusters.
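For context, systemd treats a bare EnvironmentFile=/path directive as mandatory and fails the unit while the file is missing; only the EnvironmentFile=-/path form tolerates an absent file. A minimal way to check this on an affected node (the unit name here is a guess, not taken from this report):

systemctl cat ceph-radosgw@rgw.overcloud-controller-0 | grep EnvironmentFile
# A path without a leading '-' means systemd won't start the unit until the
# file exists; 'EnvironmentFile=-/path' would tolerate its absence.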
Unfortunately, creating an empty file is not sufficient; the process is launched:
/usr/bin/radosgw --cluster ceph --setuser ceph --setgroup ceph -d -n client.rgw.overcloud-controller-0. -/var/lib/ceph/radosgw/ceph-rgw.overcloud-controller-0./keyring
But nothing binds to the host:port set in /etc/ceph/ceph.conf:
rgw frontends = civetweb port=10.40.0.24:8080 num_threads=512
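The trailing dots in the radosgw command line above (client.rgw.overcloud-controller-0. and ceph-rgw.overcloud-controller-0./keyring) suggest the unit interpolates variables from the EnvironmentFile that now expand to empty strings, producing a wrong client name and keyring path; that is an inference from the output, not confirmed here. To verify that the frontend never bound, something like the following can be run on the node (illustrative commands, not part of the original report):

ss -tlnp | grep ':8080'            # no radosgw listener means civetweb never bound
curl -sv http://10.40.0.24:8080/   # a healthy RGW frontend would return an HTTP response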
Version-Release number of selected component (if applicable):
ceph-ansible-4.0.25-1.el8cp.noarch
Comment 1 (RHEL Program Management, 2020-07-02 18:04:46 UTC)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 3.3 security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2020:3504