For USM to integrate with ceph-installer, there should be a provision to configure a custom name for the ceph cluster. This is not currently provided.
I'm going to push back a little here. There are several solutions that don't require a change to the installer. Storage Console can map the FSID to a cluster name. Storage Console can show cluster names with part of the FSID in them, e.g. ceph(12beaf5). Nishanth, would you please help me understand what barrier to integration exists?
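To illustrate the suggestion above, here is a minimal sketch of an FSID-based display name. This is a hypothetical helper, not existing Storage Console code; the function name and the 7-character prefix are assumptions.

    def display_name(cluster_name, fsid, short=7):
        """Return e.g. 'ceph(12beaf5)' so clusters that are all named 'ceph'
        can still be told apart in the UI."""
        return "%s(%s)" % (cluster_name, fsid.replace("-", "")[:short])

    print(display_name("ceph", "12beaf5e-93bc-4ba7-9a32-0f4e8c1a2b3c"))  # -> ceph(12beaf5)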
A couple of things:

1. MVP-007 allows the admin to specify and edit the cluster name, so this also affects the currently implemented flow per the UX. Please see https://docs.google.com/a/redhat.com/presentation/d/1MAgpVG2Fi2UtBYUhuyMScO8zObAYvhiHLp2HQ3_YpMI/edit?usp=sharing for details (slide 12). This is because, per your suggestion, we would need to auto-generate the cluster name and not allow the admin to change it.

2. There will be a mismatch in the cluster name between USM and the ceph cluster. In every request sent to the ceph cluster, we append the cluster name (--cluster 'cluster-name'). With the above approach, we would need to revisit that and hard-code it to "ceph" in all outgoing requests (see the sketch below).

3. Most of the events generated from calamari/skynet carry the cluster name. We will have a tough time re-mapping it to the cluster name configured in USM.

These are the cases that come to my mind at this point; there may be others too. My point is that if we go with this approach and miss some cases, we will end up with problems. So I am more inclined to fix this in the installer itself, so that we have the same name in both USM and the ceph cluster.
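A rough sketch of why the mismatch in point 2 matters. This is a hypothetical helper, not actual USM code; only the --cluster option itself is a real ceph CLI flag, which selects /etc/ceph/<name>.conf on the target node.

    import subprocess

    def ceph_cmd(cluster_name, *args):
        # If USM stores "mycluster" but the nodes were deployed with the
        # default name "ceph", this command looks for /etc/ceph/mycluster.conf
        # on the node and fails.
        return subprocess.run(["ceph", "--cluster", cluster_name, *args],
                              capture_output=True, text=True)

    # e.g. ceph_cmd("mycluster", "status")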
Two questions: What are the technical limitations of many clusters tracked by Storage Console all being named ceph, or is it just bad UX? Alternatively, would you please share all the places where you want to show the cluster name and don't have the FSID to do the correlation?
As we discussed in today's meeting, it will be a good amount of work to find all the pieces of code where we use the cluster name and translate it. The UI flow also needs to be changed. And I am sure that at some point this requirement will come back, and then we would need to revert all these changes.
Moving to 2.1 until we negotiate it back into 2.0. This is 3 weeks of effort for ceph-installer to make this change.
Support for this in ceph-ansible is available as of version 1.0.3. Support for this in ceph-installer was merged with https://github.com/ceph/ceph-installer/pull/129, but does not exist in a built version of ceph-installer yet.
Checked with ceph-installer-1.0.11-1.el7scon.noarch; it is possible to create a cluster with a user-defined name. -> Verified
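For reference, a rough illustration of requesting a custom-named cluster through the ceph-installer HTTP API. This is a sketch only: the endpoint path, port, and especially the "cluster_name" field are assumptions based on PR 129 and should be confirmed against the ceph-installer API docs for the installed version.

    import requests

    payload = {
        "host": "mon0.example.com",           # hypothetical monitor host
        "monitor_interface": "eth0",
        "fsid": "deedcb4c-a67a-4997-93a6-92149ad2622a",
        "monitor_secret": "<monitor-secret-here>",
        "public_network": "192.168.1.0/24",
        "cluster_name": "mycluster",          # assumed field name for the custom cluster name
    }
    resp = requests.post("http://ceph-installer.example.com:8181/api/mon/configure/",
                         json=payload)
    print(resp.status_code, resp.text)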
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2016:1754