Bug 2172582 - Internal Ceph deployment fails during pool creation with RHCSv6.0
Summary: Internal Ceph deployment fails during pool creation with RHCSv6.0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: tripleo-ansible
Version: 17.1 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: beta
Target Release: 17.1
Assignee: Manoj Katari
QA Contact: Alfredo
URL:
Whiteboard:
Depends On:
Blocks: 2111528
 
Reported: 2023-02-22 15:31 UTC by John Fulton
Modified: 2023-08-16 01:14 UTC
CC: 4 users

Fixed In Version: tripleo-ansible-3.3.1-1.20230323220827.7480374.el9ost
Doc Type: Bug Fix
Doc Text:
Before this update, the `create pool` operation failed because the podman command used `/etc/ceph` as the volume argument. This argument does not work for Red Hat Ceph Storage version 6 containers. With this update, the podman command uses `/var/lib/ceph/$FSID/config/` as the first volume argument and `create pool` operations are successful.
Clone Of:
Environment:
Last Closed: 2023-08-16 01:13:59 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 875060 0 None MERGED Fix ceph pool creation failure 2023-03-10 15:49:06 UTC
OpenStack gerrit 877018 0 None MERGED Fix ceph pool creation failure 2023-03-13 11:33:45 UTC
OpenStack gerrit 877154 0 None MERGED Fix ceph pool creation failure 2023-03-23 07:21:39 UTC
Red Hat Issue Tracker OSP-22606 0 None None None 2023-02-22 15:33:38 UTC
Red Hat Product Errata RHEA-2023:4577 0 None None None 2023-08-16 01:14:23 UTC

Description John Fulton 2023-02-22 15:31:50 UTC
During an internal Ceph deployment using a 17.1 compose and the ceph-6.0-rhel-9-containers-candidate-72754-20230119204646 container image, the `openstack overcloud deploy` command fails with:

 FATAL | Create pool(s) | controller-0 | item={'name': 'vms', 'rule_name': 'replicated_rule', 'application': 'rbd'} | error={"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "--entrypoint=ceph", "undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:ceph-6.0-rhel-9-containers-candidate-72754-20230119204646", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "osd", "pool", "create", "vms", "replicated", "replicated_rule", "--expected_num_objects", "0", "--autoscale-mode", "on"], "delta": "0:00:00.921538", "end": "2023-02-21 20:38:52.229793", "item": {"application": "rbd", "name": "vms", "rule_name": "replicated_rule"}, "rc": 1, "start": "2023-02-21 20:38:51.308255", "stderr": "Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')", "stderr_lines": ["Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')"], "stdout": "", "stdout_lines": []}

Comment 1 John Fulton 2023-02-22 15:41:12 UTC
cephadm-17.2.5-67.el9cp.noarch

Comment 2 John Fulton 2023-02-22 16:19:15 UTC
Root cause and suggested fix:

When `openstack overcloud deploy` is used with a deployed Ceph cluster, it calls the ceph_pool Ansible module [1] to create pools (e.g. vms, volumes).

This module constructs a podman command like the following:

podman run --rm --net=host -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:ceph-6.0-rhel-9-containers-candidate-72754-20230119204646 -n client.admin -k /etc/ceph/ceph.client.admin.keyring --cluster ceph osd pool create vms replicated replicated_rule --expected_num_objects 0 --autoscale-mode on
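
For illustration, here is a minimal Python sketch of how such an argument list can be assembled (the function and variable names are hypothetical, not the module's actual code):

  # Hypothetical sketch only; see ceph_pool.py [2] for the real implementation.
  def build_pool_create_cmd(image, pool, rule='replicated_rule'):
      cmd = ['podman', 'run', '--rm', '--net=host',
             '-v', '/etc/ceph:/etc/ceph:z',                # the problematic mount
             '-v', '/var/lib/ceph/:/var/lib/ceph/:z',
             '-v', '/var/log/ceph/:/var/log/ceph/:z',
             '--entrypoint=ceph', image,
             '-n', 'client.admin',
             '-k', '/etc/ceph/ceph.client.admin.keyring',
             '--cluster', 'ceph',
             'osd', 'pool', 'create', pool, 'replicated', rule,
             '--expected_num_objects', '0',
             '--autoscale-mode', 'on']
      return cmd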

When this command is run with the new RHCSv6 container, it fails because the container is unable to read the configuration under /etc/ceph (this worked with RHCSv5 containers). We can easily avoid this issue by modifying the first volume argument passed:

  replace -v /etc/ceph:/etc/ceph:z with -v /var/lib/ceph/584464a9-c4de-5b49-a95f-c9b795f025a2/config:/etc/ceph:z

We are then able to create pools with this modification of the original command:

[tripleo-admin@controller-0 ~]$ sudo podman run --rm --net=host -v /var/lib/ceph/584464a9-c4de-5b49-a95f-c9b795f025a2/config:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:ceph-6.0-rhel-9-containers-candidate-72754-20230119204646 -n client.admin -k /etc/ceph/ceph.client.admin.keyring --cluster ceph osd pool create foo replicated replicated_rule --expected_num_objects 0 --autoscale-mode on
pool 'foo' created
[tripleo-admin@controller-0 ~]$

Because the Ansible module has hardcoded /etc/ceph/ [2], it should be modified to use /var/lib/ceph/$FSID/config/ instead.
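
A minimal sketch of that change, assuming the cluster FSID is available to the module (names are illustrative, not the actual patch):

  # Hypothetical sketch only; the real fix lands in ceph_pool.py [2].
  def config_volume_arg(fsid=None):
      # RHCSv6 containers cannot read the host's /etc/ceph directly, so mount
      # the cluster's config directory at /etc/ceph inside the container.
      if fsid:
          return '/var/lib/ceph/%s/config:/etc/ceph:z' % fsid
      return '/etc/ceph:/etc/ceph:z'  # pre-RHCSv6 behavior

  # config_volume_arg('584464a9-c4de-5b49-a95f-c9b795f025a2')
  # -> '/var/lib/ceph/584464a9-c4de-5b49-a95f-c9b795f025a2/config:/etc/ceph:z'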

Other commands that run the Ceph container via podman should be adjusted accordingly too, e.g. [3].


[1] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/roles/tripleo_cephadm/tasks/pools.yaml#L20-L41
[2] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/ansible_plugins/modules/ceph_pool.py#L571
[3] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/roles/tripleo_cephadm/tasks/ceph_cli.yaml#L27

Comment 13 Manoj Katari 2023-08-07 05:27:28 UTC
Doc update looks good to me.

Comment 18 errata-xmlrpc 2023-08-16 01:13:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.1 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:4577

