Bug 1635924 - Remove the cluster name configuration "cluster: ceph" from Ceph Ansible all.yml, as custom cluster names are no longer supported; only the default name "ceph" is supported.
Summary: Remove cluster name configuration "cluster: ceph" from Ceph Ansible all.yml a...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: rc
Target Release: 5.0
Assignee: Guillaume Abrioux
QA Contact: Vasishta
URL:
Whiteboard:
Duplicates: 1459861 (view as bug list)
Depends On:
Blocks: 1502021 1507943
 
Reported: 2018-10-04 00:24 UTC by Vikhyat Umrao
Modified: 2023-10-06 18:00 UTC (History)
CC: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-15 13:57:48 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 3194 0 'None' closed ceph: remove custom cluster name support 2020-10-28 11:01:29 UTC
Red Hat Issue Tracker RHCEPH-7626 0 None None None 2023-10-06 18:00:38 UTC

Description Vikhyat Umrao 2018-10-04 00:24:56 UTC
Description of problem:
Remove custom cluster name

Version-Release number of selected component (if applicable):
RHCS 3

As per this:
https://bugzilla.redhat.com/show_bug.cgi?id=1266828
Custom cluster names are not supported.

Comment 1 Vikhyat Umrao 2018-10-04 00:25:35 UTC
In group_vars/all.yml.sample

# The 'cluster' variable determines the name of the cluster.
# Changing the default value to something else means that you will
# need to change all the command line calls as well, for example if
# your cluster name is 'foo':
# "ceph health" will become "ceph --cluster foo health"
#
# An easier way to handle this is to use the environment variable CEPH_ARGS
# So run: "export CEPH_ARGS="--cluster foo"
# With that you will be able to run "ceph health" normally
#cluster: ceph
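
For illustration, with a custom cluster name such as "foo" (an example name, not anything configured in this bug), the day-to-day commands change exactly as the comment above describes:

# every call has to name the cluster explicitly...
ceph --cluster foo health
rbd --cluster foo ls
# ...unless CEPH_ARGS is exported, which restores the short form
export CEPH_ARGS="--cluster foo"
ceph health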

Comment 4 Giulio Fidente 2018-10-04 09:14:21 UTC
I believe this will impact OSPd heavily. The main reason to support arbitrary cluster names in ceph-ansible is to be able to deploy RBDMirror daemons in two different clusters and configure each with the necessary information about the other instance.

Another reason is that some OpenStack services might need to connect to multiple Ceph clusters and consume each as a different backend; this is especially important for Glance at the edge.

What are the reasons for removing this functionality, and how could the above use cases be addressed otherwise?

Comment 5 Sébastien Han 2018-10-04 14:29:37 UTC
You can run edge computing without different cluster names because the mons run on the edge and not on the control plane. This would mean deploying multiple Ceph clusters from the undercloud and then pushing the different cluster configs into the overcloud control plane. That is not implemented in ceph-ansible at the moment.

For rbd-mirror it's a bit tricky: we need symlinks to each ceph.conf, which requires manual intervention or, again, Ansible work.
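
A rough sketch of that manual work on the rbd-mirror node (the "remote" cluster name, peer host, and keyring names below are only illustrative):

# copy or link the peer cluster's config under a local cluster name
scp peer-mon:/etc/ceph/ceph.conf /etc/ceph/remote.conf
scp peer-mon:/etc/ceph/ceph.client.rbd-mirror-peer.keyring /etc/ceph/remote.client.rbd-mirror-peer.keyring
# or, if the peer conf is already staged locally, symlink it instead
ln -s /etc/ceph/peer-site/ceph.conf /etc/ceph/remote.conf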

I'm targeting this for 3.2z1 since we won't have the time to add the ceph-ansible support to cover the case mentioned above.
Also resetting the status to NEW.

Comment 6 Vikhyat Umrao 2018-10-04 16:10:11 UTC
Thanks Giulio and leseb. As per my understanding, in both RGW multi-site and RBD mirror we do not need different cluster names, because the two sites are independent of each other and RBD mirror just needs the secondary site's ceph.conf, which, as leseb said, can be a symlink or a file with a different name in /etc/ceph.
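
For illustration, once the secondary site's conf is available locally under some cluster name (say "remote"; the pool and client names here are also assumptions), the peer can be registered with the stock commands, along the lines of:

# on the primary site: enable pool-mode mirroring and add the peer cluster
rbd mirror pool enable images pool
rbd mirror pool peer add images client.rbd-mirror-peer@remote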

Comment 7 Josh Durgin 2018-10-04 18:53:28 UTC
To clarify, the cluster name handling for Ceph clients is not changing, so things like rbd-mirror and OpenStack do not need to change. On the client side, the cluster name is just a shortcut to avoid specifying the full path to a config file.

It's custom cluster name support for the osds, monitors, mgrs, and mds that is no longer supported upstream - this was more complex in terms of systemd units and ceph-disk support.
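
In other words, on a client node the cluster name only selects which files are read; roughly speaking, these two invocations are equivalent (the "remote" name and the default-derived paths are examples):

ceph --cluster remote health
# is shorthand for something like:
ceph --conf /etc/ceph/remote.conf --keyring /etc/ceph/remote.client.admin.keyring health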

Comment 8 Giulio Fidente 2018-10-05 11:59:44 UTC
Thanks for helping, in response to comments #6 and #7:

I think that in both scenarios (rbdmirror and multiple Ceph backends for cinder/glance), we'll need to provision multiple cluster .conf files and the matching keyrings on the nodes.

We do not need to deploy multiple instances of the daemons on the same node, though, so my understanding is that it would be sufficient to keep the functionality for dealing with custom cluster names in the ceph-ansible client role (and its dependencies)?
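
For illustration, the kind of layout the client role would need to produce on such a node (the second cluster name and keyring names are assumptions, not ceph-ansible defaults):

/etc/ceph/ceph.conf                             # local/default cluster
/etc/ceph/ceph.client.openstack.keyring
/etc/ceph/remote.conf                           # second cluster, e.g. an rbd-mirror or glance peer
/etc/ceph/remote.client.openstack.keyring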

Comment 9 Josh Durgin 2018-10-05 14:09:29 UTC
(In reply to Giulio Fidente from comment #8)
> Thanks for helping, in response to comments #6 and #7:
> 
> I think that in both scenarios (rbdmirror and multiple Ceph backends for
> cinder/glance), we'll need to provision multiple cluster .conf files and
> the matching keyrings on the nodes.
> 
> We do not need to deploy multiple instances of the daemons on the same
> node, though, so my understanding is that it would be sufficient to keep
> the functionality for dealing with custom cluster names in the
> ceph-ansible client role (and its dependencies)?

Yes, that matches my understanding.

Comment 10 Federico Lucifredi 2018-10-19 22:30:24 UTC
Would deprecating this as we go from 3.1 to 3.2 break existing OSP 13 installs that used custom cluster naming?

Comment 11 Giulio Fidente 2018-10-20 00:12:23 UTC
Indeed we can't remove the ceph-ansible parameter entirely, but the "only" functionality we need to preserve for the parameter is to:

1) create a .conf file matching the custom cluster name
2) provision keyrings for the various clusters using the appropriate name and locations

I am not sure what the plan is for deprecating the use of multiple clusters in the Ceph daemons (probably ignore the .conf filename?), but clients, as per comment #8, might still need to connect to multiple clusters ... and the existing implementation with multiple .conf files and keyrings still seems good to me, if not the only solution?

Comment 12 Federico Lucifredi 2018-10-22 20:29:01 UTC
I think we need to move this change to 4.0 and leave things as they are in 3.x -- any concerns or objections?

Comment 13 Vikhyat Umrao 2018-10-22 23:05:20 UTC
(In reply to Federico Lucifredi from comment #12)
> I think we need to move this change to 4.0 and leave things as they are in
> 3.x -- any concerns or objections?

Thanks, Federico. Sounds good to me.

Comment 14 Harish NV Rao 2018-10-29 08:44:28 UTC
Based on comment 12, setting the target release as 4.0. If anyone disagrees, please let us know.

Comment 15 Sébastien Han 2018-11-05 15:14:09 UTC
Brain dump: does ceph-volume handle custom cluster names? And how do we handle existing clusters with custom cluster names that are looking at expanding using ceph-volume?

Comment 16 Sébastien Han 2019-01-10 16:22:49 UTC
*** Bug 1459861 has been marked as a duplicate of this bug. ***

Comment 18 Giridhar Ramaraju 2019-08-05 13:11:16 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting these to the appropriate QE Associate.

Regards,
Giri

Comment 19 Giridhar Ramaraju 2019-08-05 13:12:17 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting these to the appropriate QE Associate.

Regards,
Giri

