Bug 1621901 - [ceph-volume]: After reboot, ceph-volume based filestore OSDs fail to start on a cluster with a custom name
Summary: [ceph-volume]: After reboot, ceph-volume based filestore OSDs fail to start ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Volume
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.2
Assignee: Alfredo Deza
QA Contact: Parikshith
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1629656 1584264
 
Reported: 2018-08-23 20:18 UTC by Parikshith
Modified: 2019-06-26 15:06 UTC
CC List: 10 users

Fixed In Version: RHEL: ceph-12.2.8-17.el7cp Ubuntu: ceph_12.2.8-15redhat1
Doc Type: Bug Fix
Doc Text:
.ceph-volume does not break custom-named clusters

When using a custom storage cluster name other than `ceph`, the OSDs could not start after a reboot. With this update, `ceph-volume` provisions OSDs in a way that allows them to boot properly when a custom name is used.

IMPORTANT: Despite this fix, Red Hat does not support clusters with custom names, because the upstream Ceph project has removed support for custom names from the Ceph OSD, Monitor, Manager, and Metadata Server daemons. That support was removed because it added complexity to the systemd unit files. This fix was created before the decision to remove custom cluster name support was made.
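For context on the systemd complexity mentioned above: the stock unit templates bake the cluster name in as an environment default, which is why a non-`ceph` name is fragile across reboots. A hedged sketch of how to inspect this on an affected node (file paths differ between RHEL and Ubuntu; treat the details as assumptions, not confirmed output):

$ systemctl cat ceph-osd@.service | grep -i cluster   # stock template sets Environment=CLUSTER=ceph
$ cat /etc/sysconfig/ceph                             # RHEL: a CLUSTER=<custom-name> override would go here
$ cat /etc/default/ceph                               # Ubuntu equivalent of the same override file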
Clone Of:
Environment:
Last Closed: 2019-01-03 19:01:53 UTC


Attachments (Terms of Use)
ceph-volume log (512.25 KB, text/plain)
2018-08-23 20:18 UTC, Parikshith


Links
Red Hat Product Errata RHBA-2019:0020 (Last Updated: 2019-01-03 19:02:06 UTC)

Description Parikshith 2018-08-23 20:18:27 UTC
Created attachment 1478339 [details]
ceph-volume log

Description of problem:
If a custom cluster name is used and the OSD node is rebooted, the host's OSDs created by ceph-volume will fail to start.
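A quick way to see the failure state after the reboot, and to kick a manual activation, is the following (a hedged sketch; in the affected builds the manual activation also needs the ceph.conf symlink workaround noted below):

$ sudo systemctl list-units --all 'ceph-osd@*' 'ceph-volume@*'   # OSD/activation units stuck inactive or failed
$ sudo ceph-volume lvm activate --all                            # re-run activation for all detected OSDs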

Version-Release number of selected component (if applicable):
ceph version 12.2.5-23redhat1xenial (Ubuntu)

How reproducible: always


Steps to Reproduce:
1. Install a cluster with ceph-volume based filestore OSDs, using a custom cluster name (see the sketch after these steps).
2. Reboot the OSD node containing the ceph-volume based OSDs.
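A minimal single-OSD version of step 1 (sketch only; <custom-name> is your cluster name, and /dev/sdb and /dev/sdc1 are placeholder devices):

$ sudo ceph-volume --cluster <custom-name> lvm create --filestore --data /dev/sdb --journal /dev/sdc1
$ sudo reboot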

Actual results: 
The OSDs do not come back up after the reboot.


Expected results:
The OSDs should come back up after the reboot.

Additional info:

Workaround:
1. Create a symlink from the custom cluster configuration file to ceph.conf:
$ sudo ln -s /etc/ceph/<custom-name>.conf /etc/ceph/ceph.conf

2. Reboot the node.
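To verify the workaround after the node is back (sketch; <custom-name> and the OSD ids are placeholders):

$ ls -l /etc/ceph/ceph.conf                       # should be a symlink to <custom-name>.conf
$ sudo systemctl status 'ceph-osd@*' --no-pager   # OSD units should be active (running)
$ sudo ceph --cluster <custom-name> osd tree      # OSDs should report up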

Comment 20 errata-xmlrpc 2019-01-03 19:01:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020

Comment 21 John Brier 2019-06-26 14:22:35 UTC
Updating Doc Text to reflect that Ceph does not actually support custom cluster names. This came up in https://bugzilla.redhat.com/show_bug.cgi?id=1722394#c3

I am updating the Release Notes manually rather than with CoRN: the notes were generated with an older version of CoRN than the one I have installed, and double-checking all the new formatting changes to ensure the actual content doesn't change would be tedious. I'm updating the Doc Text in case someone regenerates the Release Notes with CoRN later.

