Description of problem:
Remove custom cluster name support.

Version-Release number of selected component (if applicable):
RHCS 3

As per https://bugzilla.redhat.com/show_bug.cgi?id=1266828, custom cluster names are not supported.
In group_vars/all.yml.sample:

  # The 'cluster' variable determines the name of the cluster.
  # Changing the default value to something else means that you will
  # need to change all the command line calls as well, for example if
  # your cluster name is 'foo':
  # "ceph health" will become "ceph --cluster foo health"
  #
  # An easier way to handle this is to use the environment variable CEPH_ARGS
  # So run: export CEPH_ARGS="--cluster foo"
  # With that you will be able to run "ceph health" normally
  #cluster: ceph
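For illustration, the two usage patterns the sample file describes, assuming a hypothetical cluster named 'foo':

  # With a renamed cluster, every call needs the flag:
  ceph --cluster foo health

  # Or export CEPH_ARGS once and keep using the short form:
  export CEPH_ARGS="--cluster foo"
  ceph health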
I believe this will impact OSPd heavily. The main reason to support arbitrary cluster names in ceph-ansible is to be able to deploy RBD mirror daemons in two different clusters and configure each with the necessary information about the other instance. Another reason is that some OpenStack services might need to connect to multiple Ceph clusters and consume each as a different backend; this is especially important for Glance at the edge. What are the reasons to remove this functionality? And how could the above use cases be addressed otherwise?
You can run edge computing without different cluster names because the mons run at the edge and not on the control plane. This would mean deploying multiple Ceph clusters from the undercloud and then pushing the different cluster configs into the overcloud control plane. That is not implemented in ceph-ansible at the moment. For rbd-mirror it's a bit tricky: we need symlinks for each ceph.conf, which requires manual intervention or, again, Ansible work. I'm targeting this for 3.2z1 since we won't have the time to add the ceph-ansible support to cover the cases mentioned above. Also resetting the status to NEW.
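A minimal sketch of that manual workaround, assuming two sites where the secondary cluster's config is made available on the rbd-mirror host under a distinct name (the 'site-b' cluster name and host name are hypothetical; a symlink works equally well if the files are already present locally):

  # Fetch the peer cluster's config and keyring under a distinct name:
  scp site-b-mon:/etc/ceph/ceph.conf /etc/ceph/site-b.conf
  scp site-b-mon:/etc/ceph/ceph.client.admin.keyring /etc/ceph/site-b.client.admin.keyring

  # Clients can then reach the peer via the client-side cluster name:
  rbd --cluster site-b mirror pool status rbd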
Thanks Giulio and leseb. As per my understanding, in both RGW multi-site and RBD mirror we do not need different cluster names, because the two sites are independent of each other and rbd-mirror just needs the secondary site's ceph.conf, which, as leseb said, can be a symlink or a file with a different name in /etc/ceph.
To clarify, the cluster name in the config file is not changing for Ceph clients, so things like rbd-mirror and OpenStack do not need to change. On the client side the cluster name is just a shortcut to avoid specifying the full path to a config file. It's custom cluster name support for the OSDs, monitors, mgrs, and MDS daemons that is no longer supported upstream - this was more complex in terms of systemd units and ceph-disk support.
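Concretely, on a client the cluster name mostly just selects which config file gets read (a sketch with a hypothetical name 'foo'):

  # These are effectively equivalent for a client; "foo" only tells the
  # tools to read /etc/ceph/foo.conf (and the matching keyring) instead
  # of the default /etc/ceph/ceph.conf:
  ceph --cluster foo health
  ceph --conf /etc/ceph/foo.conf health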
Thanks for helping; in response to comments #6 and #7: I think that in both scenarios (rbd-mirror and multiple Ceph backends for cinder/glance) we'll need to provision multiple cluster .conf files and the matching keyrings on the nodes. We do not need to deploy multiple instances of the daemons on the same node, though, so my understanding is that it would be sufficient for ceph-ansible to keep the functionality to handle custom cluster names in the client role (and its dependencies)?
(In reply to Giulio Fidente from comment #8)
> I think that in both scenarios (rbd-mirror and multiple Ceph backends for
> cinder/glance) we'll need to provision multiple cluster .conf files and
> the matching keyrings on the nodes.
>
> We do not need to deploy multiple instances of the daemons on the same
> node, though, so my understanding is that it would be sufficient for
> ceph-ansible to keep the functionality to handle custom cluster names in
> the client role (and its dependencies)?

Yes, that matches my understanding.
Would deprecating this as we go from 3.1 to 3.2 break existing OSP 13 installs that used custom cluster names?
Indeed we can't remove the ceph-ansible parameter entirely, but the "only" functionality we need to preserve for the parameter is to:

1) create a .conf file matching the custom cluster name
2) provision keyrings for the various clusters using the appropriate names and locations

(See the sketch below for what this looks like on a client node.) I am not sure what the plan is for deprecating custom cluster names in the Ceph daemons (probably ignore the .conf filename?), but clients, as per comment #8, might still need to connect to multiple clusters ... and the existing implementation with multiple .conf files and keyrings still seems good to me; if not the only solution?
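On a client node, preserving 1) and 2) would amount to a layout like this (the 'central' cluster name and the 'openstack' user are hypothetical):

  /etc/ceph/ceph.conf
  /etc/ceph/ceph.client.openstack.keyring
  /etc/ceph/central.conf
  /etc/ceph/central.client.openstack.keyring

A consumer such as cinder can then point each backend at the matching file, e.g. via its rbd_ceph_conf option (backend section names are hypothetical):

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = openstack

  [central]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/central.conf
  rbd_user = openstack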
I think we need to move this change to 4.0 and leave things as they are in 3.x -- any concerns or objections?
(In reply to Federico Lucifredi from comment #12)
> I think we need to move this change to 4.0 and leave things as they are in
> 3.x -- any concerns or objections?

Thanks, Federico. Sounds good to me.
Based on comment 12, setting the target release to 4.0. If anyone disagrees, please let us know.
Brain dump: does ceph-volume handle custom cluster names? And how do we handle existing clusters with custom cluster names that are looking at expanding using ceph-volume?
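For reference, ceph-volume has historically accepted a top-level --cluster flag; whether everything downstream (systemd units, paths) honors a non-default name end to end is exactly the open question. A hypothetical invocation (device and name are examples only):

  # Whether the resulting systemd units pick up the custom name is
  # what would need verifying:
  ceph-volume --cluster foo lvm create --data /dev/sdb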
*** Bug 1459861 has been marked as a duplicate of this bug. ***
Updating the QA Contact to Hemant. Hemant will reroute the bug to the appropriate QE Associate.

Regards,
Giri