
Bug 2027411

Summary: [cephadm-ansible]: cephadm-adopt fails at TASK [get osd list]
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ameena Suhani S H <amsyedha>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Ameena Suhani S H <amsyedha>
Severity: urgent
Priority: unspecified
Version: 5.0
CC: adking, aschoen, ceph-eng-bugs, ceph-qe-bugs, gabrioux, gmeno, mmurthy, nthomas, sewagner, tserlin, ykaul
Keywords: TestBlocker, UpgradeBlocker
Target Release: 5.0z2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-ansible-6.0.20-1.el8cp
Doc Type: Bug Fix
Doc Text:
Cause: the ceph_volume module in ceph-ansible bind-mounts /var/lib/ceph with the ':z' option, which tells podman to recursively relabel the mounted tree for SELinux.
Consequence: when cephadm runs iSCSI containers, it bind-mounts /var/lib/ceph/<fsid>/<svc_id>/configfs without the SELinux flag (':z'). Because configfs does not support SELinux labels, the relabel requested by any other container that bind-mounts /var/lib/ceph with ':z' fails, so that container cannot start.
Fix: when the ceph_volume module in ceph-ansible is called to list OSDs (action: list), the operation is read-only, so the ':z' option is not needed on the /var/lib/ceph bind mount and is no longer applied (the idea is sketched below the header fields).
Result: the ceph_volume module can be called without issues even when collocating OSDs with iSCSI daemons.
Last Closed: 2021-12-08 13:57:04 UTC
Type: Bug
Bug Blocks: 2026861
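
The fix described in the Doc Text boils down to making the SELinux relabel flag conditional on the requested action. A minimal sketch of that idea in Python follows; the function name and structure are illustrative assumptions, not the actual ceph-ansible patch.

# Hypothetical sketch: pick the /var/lib/ceph bind-mount flags based on the
# ceph_volume action. 'list' is a read-only query, so it can skip the
# SELinux relabel (':z') that breaks on cephadm's configfs bind mount.
def var_lib_ceph_mount(action):
    if action == "list":
        # Without ':z', podman does not run lsetxattr over /var/lib/ceph,
        # so cephadm's unlabeled configfs bind mount is left alone.
        return ["-v", "/var/lib/ceph/:/var/lib/ceph/"]
    # Actions that write keep the relabel so the container can access the tree.
    return ["-v", "/var/lib/ceph/:/var/lib/ceph/:z"]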

Description Ameena Suhani S H 2021-11-29 15:07:00 UTC
Description of problem:

TASK [get osd list] **************************************************************************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:713
Monday 29 November 2021  08:28:11 -0500 (0:00:00.038)       0:03:07.936 ******* 
Using module file /usr/share/ceph-ansible/library/ceph_volume.py
Pipelining is enabled.
fatal: [ceph-ameenasuhani-4fs3bq-node5]: FAILED! => changed=true 
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.1-rhel-8-containers-candidate-39855-20211124175723
  - --cluster
  - ceph
  - lvm
  - list
  - --format=json
  delta: '0:00:00.233913'
  end: '2021-11-29 08:28:11.779386'
  invocation:
    module_args:
      action: list
      batch_devices: []
      block_db_devices: []
      block_db_size: '-1'
      cluster: ceph
      crush_device_class: null
      data: null
      data_vg: null
      db: null
      db_vg: null
      destroy: true
      dmcrypt: false
      journal: null
      journal_devices: []
      journal_size: '5120'
      journal_vg: null
      objectstore: bluestore
      osd_fsid: null
      osd_id: null
      osds_per_device: 1
      report: false
      wal: null
      wal_devices: []
      wal_vg: null
  msg: non-zero return code
  rc: 126
  start: '2021-11-29 08:28:11.545473'
  stderr: 'Error: lsetxattr /var/lib/ceph/6126c064-6a9e-4092-8a64-977930df0843/iscsi.rbd.ceph-ameenasuhani-4fs3bq-node5.vomtqb/configfs: operation not supported'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
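
Note the 'Error:' prefix and rc 126: the failure comes from podman while it sets up the container's bind mounts, before ceph-volume ever runs. The ':z' on /var/lib/ceph makes podman relabel the whole tree with lsetxattr, which configfs does not support. The relabel attempt can be replayed by hand; the snippet below is a hedged illustration (the SELinux label value is an assumption), with the configfs path taken from the stderr above.

import os

# Replay the lsetxattr(2) call that podman's ':z' relabel performs on the
# configfs bind mount. On an affected host this raises
# "[Errno 95] Operation not supported", matching the podman error above.
path = ("/var/lib/ceph/6126c064-6a9e-4092-8a64-977930df0843/"
        "iscsi.rbd.ceph-ameenasuhani-4fs3bq-node5.vomtqb/configfs")
try:
    os.setxattr(path, "security.selinux",
                b"system_u:object_r:container_file_t:s0",
                follow_symlinks=False)  # lsetxattr, as in the error message
except OSError as exc:
    print(exc)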


Version-Release number of selected component (if applicable):
ansible-2.9.27-1.el8ae.noarch
ceph-ansible-6.0.19-1.el8cp.noarch


How reproducible:
2/2

Steps to Reproduce:
1. Install RHCS 4
2. Upgrade to RHCS 5
3. Run the cephadm-adopt playbook

Actual results:
The playbook fails at the above task.

Expected results:
The playbook should complete successfully and the cluster should be adopted to cephadm.

Comment 11 Ameena Suhani S H 2021-12-02 18:34:59 UTC
Verified using 
# rpm -qa|grep cephadm
cephadm-16.2.0-146.el8cp.noarch
# rpm -qa|grep ceph-ansible
ceph-ansible-6.0.20-1.el8cp.noarch

Steps:
1. Installed 4.x in containers
2. Upgraded to 5.x (ceph-ansible)
3. Ran cephadm-adopt

Result: Cluster was successfully adopted to cephadm

Comment 13 errata-xmlrpc 2021-12-08 13:57:04 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:5020