Cause: the ceph_volume module in ceph-ansible bind-mounts /var/lib/ceph into the container with the ':z' SELinux relabel option.
Consequence: when cephadm runs iscsi containers, it bind-mounts /var/lib/ceph/<fsid>/<svc_id>/configfs without the ':z' flag. Since configfs does not support SELinux labels, any other container that later bind-mounts /var/lib/ceph with ':z' fails to relabel that path (lsetxattr returns "operation not supported"), so the container can't be started.
Fix: when the ceph_volume module in ceph-ansible is called to list osds (action: list), the operation is read-only, so there is no need to use the ':z' option on the /var/lib/ceph bind-mount.
Result: the ceph_volume module can be called without issues even when collocating osds with iscsi daemons.
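The fix can be sketched as follows. This is a hypothetical illustration, not the actual ceph-ansible source: the function name ceph_lib_mount and the READ_ONLY_ACTIONS set are assumptions made for the example; it only shows the idea of applying ':z' conditionally based on the requested action.

```python
# Hypothetical sketch of the conditional bind-mount (not the real
# ceph_volume module code): relabel /var/lib/ceph with ':z' only for
# actions that write to it. 'list' is read-only, so no relabel is needed
# and the configfs path mounted by cephadm is left untouched.

READ_ONLY_ACTIONS = {'list'}  # assumption: 'list' is the read-only action

def ceph_lib_mount(action):
    """Return the podman '-v' argument pair for /var/lib/ceph."""
    mount = '/var/lib/ceph/:/var/lib/ceph/'
    if action not in READ_ONLY_ACTIONS:
        mount += ':z'  # SELinux relabel only when the container must write
    return ['-v', mount]
```

With this in place, the 'lvm list' call from the cephadm-adopt playbook would produce a plain bind-mount, avoiding the failing lsetxattr on the configfs sub-path.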
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2021:5020
Description of problem:

TASK [get osd list] **************************************************************************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:713
Monday 29 November 2021  08:28:11 -0500 (0:00:00.038)       0:03:07.936 *******
Using module file /usr/share/ceph-ansible/library/ceph_volume.py
Pipelining is enabled.
fatal: [ceph-ameenasuhani-4fs3bq-node5]: FAILED! => changed=true
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.1-rhel-8-containers-candidate-39855-20211124175723
  - --cluster
  - ceph
  - lvm
  - list
  - --format=json
  delta: '0:00:00.233913'
  end: '2021-11-29 08:28:11.779386'
  invocation:
    module_args:
      action: list
      batch_devices: []
      block_db_devices: []
      block_db_size: '-1'
      cluster: ceph
      crush_device_class: null
      data: null
      data_vg: null
      db: null
      db_vg: null
      destroy: true
      dmcrypt: false
      journal: null
      journal_devices: []
      journal_size: '5120'
      journal_vg: null
      objectstore: bluestore
      osd_fsid: null
      osd_id: null
      osds_per_device: 1
      report: false
      wal: null
      wal_devices: []
      wal_vg: null
  msg: non-zero return code
  rc: 126
  start: '2021-11-29 08:28:11.545473'
  stderr: 'Error: lsetxattr /var/lib/ceph/6126c064-6a9e-4092-8a64-977930df0843/iscsi.rbd.ceph-ameenasuhani-4fs3bq-node5.vomtqb/configfs: operation not supported'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

Version-Release number of selected component (if applicable):
ansible-2.9.27-1.el8ae.noarch
ceph-ansible-6.0.19-1.el8cp.noarch

How reproducible:
2/2

Steps to Reproduce:
1. install rhcs4
2. upgrade to rhcs5
3. run cephadm-adopt playbook

Actual results:
the playbook fails at above task

Expected results:
The playbook should pass and adopt to cephadm