Description of problem:
[ceph-ansible] 5.0 - ceph-ansible adoption playbook doesn't support collocated daemons

Version-Release number of selected component (if applicable):
[ceph: root@ceph-monitor-1 /]# ceph version
ceph version 16.2.0-143.el8cp (0e2c6f9639c37a03e55885fb922dc0cb1b5173cb) pacific (stable)

How reproducible:

Steps to Reproduce:
1. Install a 4.2 build on a fresh cluster
2. Configure iSCSI on the cluster with 34 clients
3. Upgrade from 4.2 to 5.0 (bare metal)
4. Convert the storage cluster daemons to run under cephadm
5. Check the status

Actual results:
The cephadm-adopt playbook fails with the following error:

Error: error creating container storage: the container name "cephadm" is already in use by "fed70fef19da0fbae1f01afc2412217211eab009941bdac6f7b6c26d2c96c13b". You have to remove that container to be able to reuse that name.: that name is already in use

The complete playbook output is pasted at http://pastebin.test.redhat.com/1010942

Expected results:
The adoption playbook should complete without errors when daemons are collocated.

Additional info:
Inventory file:

## db-[99:101]-node.example.com

[grafana-server]
ceph-dashboard

[mons]
ceph-monitor-1
ceph-monitor-2
ceph-monitor-3

[mgrs]
ceph-monitor-1
ceph-monitor-2
ceph-monitor-3

[osds]
#ceph-osd-1 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
#ceph-osd-2 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
#ceph-osd-3 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
#ceph-osd-4 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
oncilla10.lab.eng.tlv2.redhat.com lvm_volumes="[{'data':'/dev/sda'},{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
oncilla11.lab.eng.tlv2.redhat.com lvm_volumes="[{'data':'/dev/sda'},{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
oncilla12.lab.eng.tlv2.redhat.com lvm_volumes="[{'data':'/dev/sda'},{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"

[iscsigws]
ceph-osd-1
ceph-osd-2
oncilla11
oncilla12

[mdss]
ceph-monitor-1
ceph-monitor-2
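For reference, a minimal sketch of how the adoption step (step 4) is typically driven and how the stale "cephadm" container can be inspected on an affected node. The ceph-ansible install path and inventory file name are assumptions based on a standard RHCS deployment, not taken from this report; the container ID is the one from the error above. This is only a manual workaround sketch, not the fix delivered in 5.0z2.

# Run the ceph-ansible adoption playbook (assumed standard locations;
# adjust the inventory path to match the actual environment)
cd /usr/share/ceph-ansible
ansible-playbook -i hosts infrastructure-playbooks/cephadm-adopt.yml

# On a node where the playbook hit the "name is already in use" error,
# list any leftover container named "cephadm" and remove the stale one
# before re-running the playbook
podman ps -a --filter name=cephadm
podman rm -f fed70fef19da0fbae1f01afc2412217211eab009941bdac6f7b6c26d2c96c13b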
After discussing with Guillaume Abrioux, targeting this BZ for 5.0Z2, as the fix is already included in the latest 5.0Z2 build.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:5020