During an internal Ceph deployment using a 17.1 compose and ceph-6.0-rhel-9-containers-candidate-72754-20230119204646, the `openstack overcloud deploy` command fails with:

FATAL | Create pool(s) | controller-0 | item={'name': 'vms', 'rule_name': 'replicated_rule', 'application': 'rbd'} | error={"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "--entrypoint=ceph", "undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:ceph-6.0-rhel-9-containers-candidate-72754-20230119204646", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "osd", "pool", "create", "vms", "replicated", "replicated_rule", "--expected_num_objects", "0", "--autoscale-mode", "on"], "delta": "0:00:00.921538", "end": "2023-02-21 20:38:52.229793", "item": {"application": "rbd", "name": "vms", "rule_name": "replicated_rule"}, "rc": 1, "start": "2023-02-21 20:38:51.308255", "stderr": "Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')", "stderr_lines": ["Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')"], "stdout": "", "stdout_lines": []}
cephadm-17.2.5-67.el9cp.noarch
Root cause and suggested fix:

When `openstack overcloud deploy` uses a deployed Ceph cluster, it calls the ceph_pool Ansible module [1] to create pools (e.g. vms, volumes, etc.). This module constructs a podman command like this:

  podman run --rm --net=host -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:ceph-6.0-rhel-9-containers-candidate-72754-20230119204646 -n client.admin -k /etc/ceph/ceph.client.admin.keyring --cluster ceph osd pool create vms replicated replicated_rule --expected_num_objects 0 --autoscale-mode on

When this command is run with the new RHCSv6 container it fails because it is unable to read /etc/ceph inside the container (this worked with RHCSv5). We can avoid this issue by modifying the first volume argument passed: replace

  -v /etc/ceph:/etc/ceph:z

with

  -v /var/lib/ceph/584464a9-c4de-5b49-a95f-c9b795f025a2/config:/etc/ceph:z

With this modification of the original command, we are able to create pools:

  [tripleo-admin@controller-0 ~]$ sudo podman run --rm --net=host -v /var/lib/ceph/584464a9-c4de-5b49-a95f-c9b795f025a2/config:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:ceph-6.0-rhel-9-containers-candidate-72754-20230119204646 -n client.admin -k /etc/ceph/ceph.client.admin.keyring --cluster ceph osd pool create foo replicated replicated_rule --expected_num_objects 0 --autoscale-mode on
  pool 'foo' created
  [tripleo-admin@controller-0 ~]$

Because the Ansible module has hard-coded /etc/ceph/ [2], it should be modified to use /var/lib/ceph/$FSID/config/ instead; a sketch of this change follows the references below. Other commands that run the Ceph container via podman should be adjusted accordingly too, e.g. [3].

[1] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/roles/tripleo_cephadm/tasks/pools.yaml#L20-L41
[2] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/ansible_plugins/modules/ceph_pool.py#L571
[3] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/roles/tripleo_cephadm/tasks/ceph_cli.yaml#L27
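For illustration, here is a minimal sketch of the suggested module change. It assumes the module assembles its podman arguments in a helper along the lines of the container_exec() below; the helper signature and the fsid parameter are hypothetical (the real code is at [2]), while the mount paths match the workaround shown above:

```python
import os


def ceph_config_mount(fsid=None):
    """Return the host path to bind-mount at /etc/ceph in the container.

    With RHCSv5 mounting /etc/ceph directly worked; with RHCSv6 the
    client config must instead come from the cephadm-managed directory
    /var/lib/ceph/<fsid>/config.
    """
    if fsid:
        return '/var/lib/ceph/{}/config'.format(fsid)
    # Fall back to the old behavior when no FSID is available.
    return '/etc/ceph'


def container_exec(binary, container_image, fsid=None):
    """Build the podman command prefix used to run a Ceph CLI container."""
    container_binary = os.getenv('CEPH_CONTAINER_BINARY', 'podman')
    return [container_binary, 'run', '--rm', '--net=host',
            '-v', '{}:/etc/ceph:z'.format(ceph_config_mount(fsid)),
            '-v', '/var/lib/ceph/:/var/lib/ceph/:z',
            '-v', '/var/log/ceph/:/var/log/ceph/:z',
            '--entrypoint=' + binary,
            container_image]
```

Called with the cluster FSID (584464a9-c4de-5b49-a95f-c9b795f025a2 in the session above), this produces the working -v /var/lib/ceph/<fsid>/config:/etc/ceph:z mount while leaving the rest of the command unchanged.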
Doc update looks good to me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 17.1 (Wallaby)), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2023:4577