Description of problem:

take-over-existing-cluster.yml does not support takeover of existing RGW nodes and overwrites their configuration with a new one if rgw nodes are enabled in /etc/ansible/hosts.

$ cat /etc/ansible/hosts
[mons]
mon1
mon2

[osds]
osd1
osd2

[rgws]
node1
node2

Existing ceph.conf RGW sections that get overwritten:

[client.rgw.node1]
host = node1
keyring = /var/lib/ceph/radosgw/ceph-rgw.node1/keyring
rgw socket path = /tmp/radosgw-node1.sock
log file = /var/log/ceph/ceph-rgw-node1.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.node1
rgw frontends = civetweb port=192.168.24.74:8080 num_threads=50

[client.rgw.node2]
host = node2
keyring = /var/lib/ceph/radosgw/ceph-rgw.node2/keyring
rgw socket path = /tmp/radosgw-node2.sock
log file = /var/log/ceph/ceph-rgw-node2.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.node2
rgw frontends = civetweb port=192.168.24.75:8080 num_threads=50

The original ceph.conf also has Keystone and other tunables. For more details please check RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1459350
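For illustration, the kind of Keystone tunables that get lost would look something like this in ceph.conf (the option names are the standard RGW Keystone settings; the values here are made up, not the customer's actual configuration):

[client.rgw.node1]
# hypothetical Keystone integration options that the playbook would overwrite
rgw keystone url = http://keystone.example.com:5000
rgw keystone admin user = ceph
rgw keystone admin password = secret
rgw keystone admin tenant = admin
rgw keystone accepted roles = admin, Member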
Version-Release number of selected component (if applicable):

$ cat installed-rpms | grep ansible
ansible-2.2.1.0-1.el7.noarch
ceph-ansible-2.1.9-1.el7scon.noarch
Were the Keystone options configured with ceph-ansible or not?
(In reply to seb from comment #2)
> Were the Keystone options configured with ceph-ansible or not?

This was an RHCS 1.3.z to 2.y upgrade. Keystone may have been configured manually in RHCS 1.3.z.
Well, if the ceph.conf was edited manually outside of Ansible, it is normal that Ansible overwrote it. If that's the case, this issue should be closed.
(In reply to seb from comment #5)
> Well, if the ceph.conf was edited manually outside of Ansible, it is normal that Ansible overwrote it.
> If that's the case, this issue should be closed.

The issue here is that taking over a Ceph cluster with Ansible means the cluster was never managed by Ansible before, and that is true for every 1.3.z cluster because all 1.3.z clusters are managed by ceph-deploy. Customers expect that after taking over a cluster with Ansible they keep the same ceph.conf they had before. We are able to do that for Monitor and OSD nodes (more details here: https://bugzilla.redhat.com/show_bug.cgi?id=1459350), but not for RGW nodes. This bug tracks support for RGW nodes when taking over a cluster with Ansible.
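A possible interim workaround until takeover preserves RGW sections is to carry the custom options into ceph-ansible through ceph_conf_overrides in group_vars/all.yml, so that subsequent runs re-render them into ceph.conf. A sketch, assuming ceph_conf_overrides is supported by this ceph-ansible version; the section and option names mirror the ceph.conf above, and the Keystone values are examples:

# group_vars/all.yml (sketch, not the customer's actual settings)
ceph_conf_overrides:
  client.rgw.node1:
    rgw keystone url: "http://keystone.example.com:5000"   # example value
    rgw keystone accepted roles: "admin, Member"           # example value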
If this BZ is a better version of https://bugzilla.redhat.com/show_bug.cgi?id=1459350, then let's close the other one as a duplicate.
Running take-over-existing-cluster is valid in 2.x: we changed from ceph-deploy (in 1.3) to ceph-ansible in 2.0 and had to provide a way for customers to bring an existing cluster under ceph-ansible control to handle management tasks like adding and removing OSDs. In 3.0, the assumption is that a cluster is either installed fresh or upgraded from 2.x; in both cases, the take-over use case does not apply.
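For reference, the 2.x take-over flow is roughly the following (a sketch based on the take-over-existing-cluster documentation; the fsid value and inventory path are illustrative):

# find the fsid of the running cluster
$ grep fsid /etc/ceph/ceph.conf

# group_vars/all.yml: pin the existing fsid so ceph-ansible does not generate a new one
fsid: a7f64266-0894-4f1e-a635-d0aeaca0e993   # example value, use the real fsid
generate_fsid: false

# then run the playbook from the ceph-ansible checkout
$ ansible-playbook -i /etc/ansible/hosts take-over-existing-cluster.yml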