With OSP15, if you follow the process to extract [1] data from the control-plane site, the extraction includes the CephFSID that was generated by the TripleO client. If you then deploy two or more Ceph clusters with that extracted data as input, the TripleO client generates a new CephFSID, but the old one overrides it. Evidence of this can be found in the deployment plans [2]. While it's possible to avoid the FSID during extraction as a workaround, it would be better if the TripleO client could regenerate this data when a different deployment plan is used. The problem is not limited to the FSID; it extends to every Ceph parameter, e.g. the keys [3]. Another option is to make the extraction process [1] pull out only what is needed instead of pulling in everything.

[1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#deploying-a-dcn-site

[2]
openstack overcloud plan export central
openstack overcloud plan export edge0
openstack overcloud plan export edge1
...
(undercloud) [stack@undercloud plans]$ grep -i ceph edge{0,1}/plan-environment.yaml | grep -i fsid
edge0/plan-environment.yaml: CephClusterFSID: d0e4577a-efac-11e9-a51f-244253215215
edge0/plan-environment.yaml: CephClusterFSID: e9bbd0ae-efb4-11e9-a51f-244253215215
edge1/plan-environment.yaml: CephClusterFSID: d0e4577a-efac-11e9-a51f-244253215215
edge1/plan-environment.yaml: CephClusterFSID: cbfd1f22-f00a-11e9-a51f-244253215215
(undercloud) [stack@undercloud plans]$ grep -i ceph central/plan-environment.yaml | grep -i fsid
CephClusterFSID: d0e4577a-efac-11e9-a51f-244253215215
(undercloud) [stack@undercloud plans]$

[3]
(undercloud) [stack@undercloud plans]$ grep -i ceph edge{0,1}/plan-environment.yaml | grep CephAdminKey
edge0/plan-environment.yaml: CephAdminKey: AQDAZaZdAAAAABAAcRdeoDrbseIJT9gvYfcWfA==
edge0/plan-environment.yaml: CephAdminKey: AQBWc6ZdAAAAABAAYNcA2MfaApL8vT4NAPtkHA==
edge1/plan-environment.yaml: CephAdminKey: AQDAZaZdAAAAABAAcRdeoDrbseIJT9gvYfcWfA==
edge1/plan-environment.yaml: CephAdminKey: AQBtA6ddAAAAABAAcpoElIlwC3lLg0LMi8x7QA==
(undercloud) [stack@undercloud plans]$
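To make the override easier to spot than the greps above, something like the following could be run against the exported plans. This is only an illustrative sketch (not part of the product): it assumes the per-plan directories from [2] (central, edge0, edge1, ...) and simply collects every CephClusterFSID value found anywhere in each plan-environment.yaml, then flags values shared by more than one plan.

#!/usr/bin/env python3
# Sketch: report CephClusterFSID values per exported plan so duplicates
# across plans (the override described above) are obvious.
import glob
import yaml

def find_fsids(node, found):
    # Recursively collect every CephClusterFSID value in the parsed YAML.
    if isinstance(node, dict):
        for key, value in node.items():
            if key == 'CephClusterFSID':
                found.append(value)
            else:
                find_fsids(value, found)
    elif isinstance(node, list):
        for item in node:
            find_fsids(item, found)

fsids_by_plan = {}
for path in sorted(glob.glob('*/plan-environment.yaml')):
    with open(path) as f:
        data = yaml.safe_load(f)
    found = []
    find_fsids(data, found)
    fsids_by_plan[path] = found
    print(path, found)

# An FSID that appears in more than one plan is the control-plane value
# overriding the per-cluster one.
seen = {}
for plan, fsids in fsids_by_plan.items():
    for fsid in fsids:
        seen.setdefault(fsid, set()).add(plan)
for fsid, plans in seen.items():
    if len(plans) > 1:
        print('duplicate FSID %s in %s' % (fsid, sorted(plans)))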
We also need something similar in the other direction: extract the Ceph cluster info from the edge zones so it can be fed back into the control plane for glance multistore.
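As a rough sketch of what that reverse extraction might look like, assuming the values again come from the exported edge plan-environment.yaml and that the CephClusterFSID/CephAdminKey entries shown above are the pieces the control plane would need (the exact parameter names for feeding them back are not settled here):

#!/usr/bin/env python3
# Sketch only: pull an edge cluster's Ceph identifiers out of its exported
# plan so they could be handed back to the control plane for glance/multistore.
# How the control plane consumes them is deliberately left open.
import sys
import yaml

plan_env = sys.argv[1]  # e.g. edge0/plan-environment.yaml
with open(plan_env) as f:
    data = yaml.safe_load(f)

passwords = data.get('passwords', {})
edge_ceph = {key: passwords[key]
             for key in ('CephClusterFSID', 'CephAdminKey')
             if key in passwords}

# Emit a parameter_defaults snippet that could be merged into the
# control-plane deployment by hand.
print(yaml.dump(dict(parameter_defaults=edge_ceph)))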
Perhaps we just need an update on https://review.opendev.org/#/c/672070
WORKAROUND: when extracting the passwords, remove all lines matching "Ceph":

openstack object save control-plane plan-environment.yaml
python3 -c "import yaml; data=yaml.safe_load(open('plan-environment.yaml').read()); print(yaml.dump(dict(parameter_defaults=data['passwords'])))" > $DIR/passwords.yaml
sed -i '/Ceph/d' passwords.yaml

This works for me:

(undercloud) [stack@undercloud deployment]$ ansible -i edge0/config-download/inventory.yaml mons --limit edge0-distributedcomputehci-0 -m shell -b -a "podman exec ceph-mon-edge0-distributedcomputehci-0 ceph -s"
edge0-distributedcomputehci-0 | CHANGED | rc=0 >>
  cluster:
    id:     dc2d9eea-f11c-11e9-a51f-244253215215
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum edge0-distributedcomputehci-1,edge0-distributedcomputehci-2,edge0-distributedcomputehci-0 (age 17h)
    mgr: edge0-distributedcomputehci-2(active, since 16h), standbys: edge0-distributedcomputehci-0, edge0-distributedcomputehci-1
    osd: 36 osds: 36 up (since 16h), 36 in (since 17h)

  data:
    pools:   4 pools, 1024 pgs
    objects: 0 objects, 0 B
    usage:   37 GiB used, 1.7 TiB / 1.7 TiB avail
    pgs:     1024 active+clean

(undercloud) [stack@undercloud deployment]$ ansible -i edge1/config-download/inventory.yaml mons --limit edge1-distributedcomputehci-0 -m shell -b -a "podman exec ceph-mon-edge1-distributedcomputehci-0 ceph -s"
edge1-distributedcomputehci-0 | CHANGED | rc=0 >>
  cluster:
    id:     18bf210c-f1a0-11e9-a51f-244253215215
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum edge1-distributedcomputehci-0,edge1-distributedcomputehci-1,edge1-distributedcomputehci-2 (age 97m)
    mgr: edge1-distributedcomputehci-1(active, since 76m), standbys: edge1-distributedcomputehci-0, edge1-distributedcomputehci-2
    osd: 36 osds: 36 up (since 77m), 36 in (since 102m)

  data:
    pools:   4 pools, 1024 pgs
    objects: 0 objects, 0 B
    usage:   37 GiB used, 1.7 TiB / 1.7 TiB avail
    pgs:     1024 active+clean

(undercloud) [stack@undercloud deployment]$
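If the line-based sed match is too blunt, the same workaround can be done in one step by dropping any Ceph-prefixed key while dumping the passwords. Just a sketch of the same idea, not a supported command:

#!/usr/bin/env python3
# Sketch: dump the plan passwords as parameter_defaults while skipping every
# key that starts with "Ceph", instead of post-processing with sed.
import yaml

with open('plan-environment.yaml') as f:
    data = yaml.safe_load(f)

filtered = {key: value for key, value in data['passwords'].items()
            if not key.startswith('Ceph')}
print(yaml.dump(dict(parameter_defaults=filtered)))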
https://review.opendev.org/#/c/691938
Marking this as a duplicate of BZ 1766711 in hopes that it will provide a single command which does the exclusion of the Ceph variables on its own.

*** This bug has been marked as a duplicate of bug 1766711 ***