.ceph-volume does not break custom named clusters
When using a custom storage cluster name other than `ceph`, the OSDs could not start after a reboot. With this update, `ceph-volume` provisions OSDs in a way that allows them to boot properly when a custom name is used.
IMPORTANT: Despite this fix, Red Hat does not support clusters with custom names. This is because the upstream Ceph project removed support for custom names in the Ceph OSD, Monitor, Manager, and Metadata server daemons. The Ceph project removed this support because it added complexities to systemd unit files. This fix was created before the decision to remove support for custom cluster names was made.
Created attachment 1478339[details]
ceph-volume log
Description of problem:
If a custom cluster name is used and the OSD node is rebooted, the host's OSDs created by `ceph-volume` fail to start.
Version-Release number of selected component (if applicable):
ceph version 12.2.5-23redhat1xenial - ubuntu
How reproducible: always
Steps to Reproduce:
1. Install a cluster with `ceph-volume` based filestore OSDs, using a custom cluster name.
2. Reboot the OSD node containing the `ceph-volume` based OSDs.
Actual results:
OSDs do not come back up after the reboot.
Expected results:
OSDs should come back up after the reboot.
Additional info:
Workaround:
1. Create a symlink from the custom cluster configuration file to `ceph.conf`:
$ sudo ln -s /etc/ceph/<custom-name>.conf /etc/ceph/ceph.conf
2. Reboot the node.
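The workaround above can be sketched and verified in a scratch directory, so it is runnable without root and without touching /etc/ceph. The cluster name "mycluster" and the scratch directory are assumptions for illustration only; on a real node the paths would be under /etc/ceph as shown in the workaround.

```shell
# Sketch of the symlink workaround, using a temporary directory in
# place of /etc/ceph. "mycluster" is a hypothetical custom cluster name.
confdir=$(mktemp -d)

# Stand-in for the existing custom cluster configuration file.
touch "$confdir/mycluster.conf"

# The workaround: point ceph.conf at the custom-named config file.
ln -s "$confdir/mycluster.conf" "$confdir/ceph.conf"

# Confirm the symlink resolves to the custom config before rebooting.
readlink "$confdir/ceph.conf"
```

After creating the real symlink under /etc/ceph, reboot the node as in step 2; daemons that look for the default `ceph.conf` path then find the custom cluster's configuration.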
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0020
Updating Doc Text to reflect that Ceph does not actually support custom cluster names. This came up in https://bugzilla.redhat.com/show_bug.cgi?id=1722394#c3
I am updating the Release Notes manually rather than with CoRN, because an older version of CoRN than the one I have installed was used, and double-checking all the new formatting changes to ensure the actual content doesn't change is tedious. I'm updating the Doc Text in case someone regenerates the Release Notes with CoRN later.