Bug 2059839 - [cee/sd][cephadm] cephadm bootstrap is not applying the configuration which passed via the option "--apply-spec"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.0
Assignee: Guillaume Abrioux
QA Contact: Vinayak Papnoi
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2237662
 
Reported: 2022-03-02 07:39 UTC by Geo Jose
Modified: 2023-12-13 15:18 UTC
CC List: 9 users

Fixed In Version: ceph-18.2.0-1
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-13 15:18:46 UTC
Embargoed:




Links
Github ceph/ceph pull 43450 (Merged): cephadm: shell --mount shouldnt enforce ':z' option (last updated 2023-10-02 07:15:05 UTC)
Red Hat Issue Tracker RHCEPH-3615 (last updated 2022-03-02 07:41:20 UTC)
Red Hat Knowledge Base (Solution) 6964618 (last updated 2022-06-24 00:10:19 UTC)
Red Hat Product Errata RHBA-2023:7780 (last updated 2023-12-13 15:18:49 UTC)

Description Geo Jose 2022-03-02 07:39:43 UTC
Description of problem:
 cephadm bootstrap does not apply the configuration that is passed via the "--apply-spec" option.

Version-Release number of selected component (if applicable):
  ceph version 16.2.0-152.el8cp

Steps to Reproduce:
1. Create a spec file containing host, alertmanager, crash, grafana, mgr, mon, node-exporter, prometheus, and osd specifications (a hedged example follows this list).
2. Bootstrap the cluster with cephadm, passing the file via the "--apply-spec" option.
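
A minimal sketch of such a spec file and bootstrap invocation is shown below. The hostname, IP address, registry file, and OSD device selection are hypothetical placeholders rather than values from this report, and a full reproduction would also carry the alertmanager, crash, grafana, mgr, node-exporter, and prometheus sections listed in step 1:

# cat > /root/service.yaml << 'EOF'
# hypothetical placeholder values throughout
service_type: host
hostname: host1.example.com
addr: 192.168.1.10
---
service_type: mon
placement:
  host_pattern: '*'
---
service_type: osd
service_id: default_osds
placement:
  host_pattern: '*'
data_devices:
  all: true
EOF

# cephadm bootstrap --mon-ip 192.168.1.10 --registry-json /root/registry.json --apply-spec /root/service.yaml --allow-fqdn-hostname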


Actual results:
The configuration is not applied after bootstrap.

Expected results:
The configuration should be applied after the cluster is bootstrapped.

Comment 5 Geo Jose 2022-03-02 08:09:07 UTC
The bootstrap fails with the error below:

# cephadm --verbose bootstrap --mon-ip <IP> --registry-json <registry_details> --apply-spec <service.yaml> --allow-fqdn-hostname

[...]

Running command: /usr/bin/podman run --rm --ipc=host --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-5-rhel8:latest -e NODE_NAME=rhcs51.sample.com -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/5acfc0f4-996b-11ec-bb1b-001a4a00055c:/var/log/ceph:z -v /tmp/ceph-tmpjrl7v51u:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpfv5vddti:/etc/ceph/ceph.conf:z -v /usr/share/cephadm-ansible/service.yaml:/tmp/spec.yml:z registry.redhat.io/rhceph/rhceph-5-rhel8:latest orch apply -i /tmp/spec.yml
/usr/bin/ceph: Error: relabeling content in /usr is not allowed
Non-zero exit code 126 from /usr/bin/podman run --rm --ipc=host --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-5-rhel8:latest -e NODE_NAME=rhcs51.sample.com -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/5acfc0f4-996b-11ec-bb1b-001a4a00055c:/var/log/ceph:z -v /tmp/ceph-tmpjrl7v51u:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpfv5vddti:/etc/ceph/ceph.conf:z -v /usr/share/cephadm-ansible/service.yaml:/tmp/spec.yml:z registry.redhat.io/rhceph/rhceph-5-rhel8:latest orch apply -i /tmp/spec.yml
/usr/bin/ceph: stderr Error: relabeling content in /usr is not allowed
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 8140, in <module>
    main()
  File "/usr/sbin/cephadm", line 8128, in main
    r = ctx.func(ctx)
  File "/usr/sbin/cephadm", line 1730, in _default_image
    return func(ctx)
  File "/usr/sbin/cephadm", line 4156, in command_bootstrap
    out = cli(['orch', 'apply', '-i', '/tmp/spec.yml'], extra_mounts=mounts)
  File "/usr/sbin/cephadm", line 4050, in cli
    ).run(timeout=timeout)
  File "/usr/sbin/cephadm", line 3286, in run
    desc=self.entrypoint, timeout=timeout)
  File "/usr/sbin/cephadm", line 1424, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-5-rhel8:latest -e NODE_NAME=rhcs51.sample.com -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/5acfc0f4-996b-11ec-bb1b-001a4a00055c:/var/log/ceph:z -v /tmp/ceph-tmpjrl7v51u:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpfv5vddti:/etc/ceph/ceph.conf:z -v /usr/share/cephadm-ansible/service.yaml:/tmp/spec.yml:z registry.redhat.io/rhceph/rhceph-5-rhel8:latest orch apply -i /tmp/spec.yml
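
The "relabeling content in /usr is not allowed" error comes from podman, which refuses to apply the ':z' SELinux relabel option to content under /usr; cephadm enforces ':z' on the '-v /usr/share/cephadm-ansible/service.yaml:/tmp/spec.yml:z' mount it builds for "--apply-spec", which appears to be the behavior the linked upstream change (Github ceph/ceph pull 43450) relaxes. Until a build with that fix is available, one plausible workaround, sketched below with an illustrative destination path, is to keep the spec file outside /usr so the relabel is permitted:

# cp /usr/share/cephadm-ansible/service.yaml /root/service.yaml  # illustrative destination; any path outside /usr should behave the same
# cephadm --verbose bootstrap --mon-ip <IP> --registry-json <registry_details> --apply-spec /root/service.yaml --allow-fqdn-hostname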

Comment 27 errata-xmlrpc 2023-12-13 15:18:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

