Bug 2026639 - [ceph-ansible] 5.0 - ceph-ansible adoption playbook doesn't support collocated daemons
Summary: [ceph-ansible] 5.0 - ceph-ansible adoption playbook doesn't support collocated daemons
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0z2
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On: 2026861
Blocks:
 
Reported: 2021-11-25 11:30 UTC by Preethi
Modified: 2021-12-08 13:57 UTC
CC List: 9 users

Fixed In Version: ceph-ansible-6.0.20-1.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-08 13:57:04 UTC
Embargoed:


Attachments: none


Links:
Github ceph/ceph-ansible pull 6767 (Merged): [skip ci] cephadm-adopt: configure repository for cephadm installation (last updated 2021-12-03 07:51:48 UTC)
Red Hat Issue Tracker RHCEPH-2415 (last updated 2021-11-25 11:32:38 UTC)
Red Hat Product Errata RHBA-2021:5020 (last updated 2021-12-08 13:57:18 UTC)

Description Preethi 2021-11-25 11:30:31 UTC
Description of problem: [ceph-ansible] 5.0 - ceph-ansible adoption playbook doesn't support collocated daemons


Version-Release number of selected component (if applicable):
[ceph: root@ceph-monitor-1 /]# ceph version
ceph version 16.2.0-143.el8cp (0e2c6f9639c37a03e55885fb922dc0cb1b5173cb) pacific (stable)

How reproducible:


Steps to Reproduce:
1. Install a 4.2 build on a fresh cluster.
2. Configure iSCSI on the cluster with 34 clients.
3. Upgrade the bare-metal cluster from 4.2 to 5.0.
4. Convert the storage cluster daemons to run under cephadm (see the command sketch below).
5. Check the status.
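
For reference, steps 3 and 4 correspond roughly to the following ceph-ansible playbook runs (a sketch only; the inventory file name "hosts" and the -vv verbosity are assumptions, while rolling_update.yml and cephadm-adopt.yml are the standard playbooks shipped under infrastructure-playbooks/):

# Step 3: rolling upgrade of the containerized 4.2 cluster to 5.0
ansible-playbook -vv -i hosts infrastructure-playbooks/rolling_update.yml

# Step 4: convert the upgraded daemons to be managed by cephadm
ansible-playbook -vv -i hosts infrastructure-playbooks/cephadm-adopt.yml
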
Actual results: the cephadm-adopt playbook fails with the following error:

'Error: error creating container storage: the container name "cephadm" is already in use by "fed70fef19da0fbae1f01afc2412217211eab009941bdac6f7b6c26d2c96c13b". You have to remove that container to be able to reuse that name.: that name is already in use'
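
The message points to a leftover temporary container named "cephadm" from an earlier adoption step on the same host, which is why it only surfaces when daemons are collocated. As a hypothetical manual workaround only (the actual fix ships in the playbook, see "Fixed In Version"), the stale container could be removed with podman on the failing host before re-running the play:

# show any container already holding the name "cephadm"
podman ps -a --filter name=cephadm

# remove it so the playbook can reuse the name
podman rm -f <container-id>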


The complete output of the playbook is pasted at the link below:

http://pastebin.test.redhat.com/1010942


Expected results:
No errors should occur when daemons are collocated and the cephadm-adopt playbook is run.
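
One possible way to verify a successful adoption on a collocated host (a sketch, assuming the cluster is healthy and the cephadm binary is installed on the node):

# daemons on the local host should now be reported by cephadm
cephadm ls

# the orchestrator should list every adopted daemon cluster-wide
ceph orch ps

# overall cluster status
ceph -s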

Additional info:


inventory files
## db-[99:101]-node.example.com
[grafana-server]
ceph-dashboard

[mons]
ceph-monitor-1
ceph-monitor-2
ceph-monitor-3

[mgrs]
ceph-monitor-1
ceph-monitor-2
ceph-monitor-3

[osds]
#ceph-osd-1 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
#ceph-osd-2 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
#ceph-osd-3 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
#ceph-osd-4 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
oncilla10.lab.eng.tlv2.redhat.com lvm_volumes="[{'data':'/dev/sda'},{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
oncilla11.lab.eng.tlv2.redhat.com lvm_volumes="[{'data':'/dev/sda'},{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
oncilla12.lab.eng.tlv2.redhat.com lvm_volumes="[{'data':'/dev/sda'},{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"

[iscsigws]
ceph-osd-1
ceph-osd-2
oncilla11
oncilla12

[mdss]
ceph-monitor-1
ceph-monitor-2

Comment 1 Preethi 2021-11-26 05:06:10 UTC
After discussing with Guillaume Abrioux, this BZ is being targeted to 5.0z2, as the fix is already present in the latest 5.0z2 build.

Comment 8 errata-xmlrpc 2021-12-08 13:57:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:5020

