
Bug 1621901

Summary: [ceph-volume]: After reboot, ceph-volume based filestore osds fails to start on cluster with custom name
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Parikshith <pbyregow>
Component: Ceph-Volume
Assignee: Alfredo Deza <adeza>
Status: CLOSED ERRATA
QA Contact: Parikshith <pbyregow>
Severity: high
Docs Contact: Bara Ancincova <bancinco>
Priority: high
Version: 3.1
CC: adeza, agunn, ceph-eng-bugs, ceph-qe-bugs, edonnell, gmeno, hnallurv, jbrier, kdreyer, tserlin
Target Milestone: rc
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.8-17.el7cp; Ubuntu: ceph_12.2.8-15redhat1
Doc Type: Bug Fix
Doc Text:
.ceph-volume does not break custom-named clusters
When using a custom storage cluster name other than `ceph`, the OSDs could not start after a reboot. With this update, `ceph-volume` provisions OSDs in a way that allows them to boot properly when a custom name is used. IMPORTANT: Despite this fix, Red Hat does not support clusters with custom names, because the upstream Ceph project removed support for custom names in the Ceph OSD, Monitor, Manager, and Metadata Server daemons; that support was removed because it added complexity to the systemd unit files. This fix was created before the decision to remove support for custom cluster names was made.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-01-03 19:01:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1584264, 1629656
Attachments:
ceph-volume log

Description Parikshith 2018-08-23 20:18:27 UTC
Created attachment 1478339
ceph-volume log

Description of problem:
If a custom cluster name is used and an OSD node is rebooted, the OSDs on that host that were created by ceph-volume fail to start.

Version-Release number of selected component (if applicable):
ceph version 12.2.5-23redhat1xenial - ubuntu

How reproducible: always


Steps to Reproduce:
1. Install a cluster with ceph-volume based filestore OSDs and a custom cluster name (an example ceph-volume invocation is sketched below).
2. Reboot the OSD node containing the ceph-volume based OSDs.
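For reference, a minimal sketch of step 1 on a single OSD node. The cluster name "mycluster", the volume group/logical volume "vg_osd/lv_data", and the journal device "/dev/sdb1" are hypothetical placeholders, not values taken from this report:

$ sudo ceph-volume --cluster mycluster lvm create --filestore \
      --data vg_osd/lv_data --journal /dev/sdb1   # prepare and activate a filestore OSD for the custom-named cluster
$ sudo reboot                                     # step 2: reboot and watch whether the OSD comes back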

Actual results: 
The OSDs do not come back up after the reboot.


Expected results:
The OSDs should come back up after the reboot.

Additional info:

Workaround:
1. Create a symlink from the custom cluster configuration file to ceph.conf:
$ sudo ln -s /etc/ceph/<custom-name>.conf /etc/ceph/ceph.conf

2. Reboot the node (a verification sketch follows).
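After the reboot, a quick way to confirm the OSDs actually started (a hedged sketch; the unit glob and <custom-name> must match the actual deployment):

$ sudo systemctl list-units 'ceph-osd@*' --type=service   # OSD units should be listed as active/running
$ sudo ceph --cluster <custom-name> osd tree              # OSDs should report as "up"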

Comment 20 errata-xmlrpc 2019-01-03 19:01:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020

Comment 21 John Brier 2019-06-26 14:22:35 UTC
Updating Doc Text to reflect that Ceph does not actually support custom cluster names. This came up in https://bugzilla.redhat.com/show_bug.cgi?id=1722394#c3

I am updating the Release Notes manually rather than with CoRN, because they were generated with an older version of CoRN than the one I have installed, and double-checking all of the new formatting changes to make sure the actual content doesn't change would be tedious. I'm updating the Doc Text in case someone later regenerates the Release Notes with CoRN.