Created attachment 1478339 [details]
Description of problem:
If a custom cluster name is used and an OSD node is rebooted, the OSDs on that host that were created by ceph-volume fail to start.
Version-Release number of selected component (if applicable):
ceph version 12.2.5-23redhat1xenial - ubuntu
How reproducible: always
Steps to Reproduce:
1. Install cluster with ceph-volume based filestore osds, with custom cluster name.
2. Reboot the osd node containing ceph-volume based osds
Actual results:
OSDs do not come back up after reboot.

Expected results:
OSDs should be up after reboot.
Workaround:
1. Create a symlink from the custom cluster configuration file to ceph.conf:
$ sudo ln -s /etc/ceph/<custom-name>.conf /etc/ceph/ceph.conf
2. Reboot the node.
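The symlink step of the workaround can be sketched in a runnable form. Since the real commands need root and a reboot, this sketch uses a scratch directory in place of /etc/ceph, and "mycluster" is a hypothetical custom cluster name:

```shell
# Sketch of the workaround against a scratch directory (assumption:
# custom cluster name is "mycluster"). On a real node the paths would
# be /etc/ceph/mycluster.conf and /etc/ceph/ceph.conf.
ceph_dir=$(mktemp -d)

# Stand-in for the existing /etc/ceph/<custom-name>.conf.
touch "$ceph_dir/mycluster.conf"

# Point the default name at the custom configuration file, so services
# that only look for ceph.conf at boot still find the configuration.
ln -s "$ceph_dir/mycluster.conf" "$ceph_dir/ceph.conf"

# Show where the symlink points.
readlink "$ceph_dir/ceph.conf"
```

After the reboot in step 2, the OSD services read the configuration through the default ceph.conf path and come up normally.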
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
Updating Doc Text to reflect that Ceph does not actually support custom cluster names. This came up in https://bugzilla.redhat.com/show_bug.cgi?id=1722394#c3
I am updating the Release Notes manually rather than using CoRN: the notes were generated with an older version of CoRN than the one I have installed, and double-checking all the new formatting changes to ensure the actual content doesn't change is tedious. I'm updating the Doc Text in case someone does update the Release Notes later using CoRN.
Changes are published: