Bug 1621901 - [ceph-volume]: After reboot, ceph-volume based filestore osds fails to start on cluster with custom name
Status: VERIFIED
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Volume
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.2
Assigned To: Alfredo Deza
QA Contact: Parikshith
Docs Contact: Bara Ancincova
Depends On:
Blocks: 1629656 1584264
Reported: 2018-08-23 16:18 EDT by Parikshith
Modified: 2018-10-29 07:10 EDT
CC List: 9 users

See Also:
Fixed In Version: RHEL: ceph-12.2.8-17.el7cp Ubuntu: ceph_12.2.8-15redhat1
Doc Type: Bug Fix
Doc Text:
.Using a custom storage cluster name is now supported
When using a custom storage cluster name other than `ceph`, the OSDs could not start after a reboot. With this update, using custom cluster names is supported, and rebooting OSDs works as expected in this case.
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
ceph-volume log (512.25 KB, text/plain)
2018-08-23 16:18 EDT, Parikshith

Description Parikshith 2018-08-23 16:18:27 EDT
Created attachment 1478339
ceph-volume log

Description of problem:
If a custom cluster name is used and an OSD node is rebooted, the OSDs on that host that were created by ceph-volume fail to start.
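
The symlink workaround below suggests that the boot-time activation only looks for the default /etc/ceph/ceph.conf, which does not exist when a custom cluster name is used; that reading is an inference from the workaround, not confirmed here. A generic way to spot the affected units after a reboot (assuming the standard ceph-osd@<id> unit naming):

$ sudo systemctl --failed | grep -i ceph
$ sudo journalctl -b -u 'ceph-osd@*'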

Version-Release number of selected component (if applicable):
ceph version 12.2.5-23redhat1xenial (Ubuntu)

How reproducible: always


Steps to Reproduce:
1. Install a cluster with ceph-volume based filestore OSDs, using a custom cluster name (an example command follows this list).
2. Reboot an OSD node containing ceph-volume based OSDs.
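
For illustration only, an OSD from step 1 might be created as follows; the cluster name `bigdata` and the device paths are hypothetical, and the top-level --cluster flag is assumed to be honored by this release's ceph-volume:

$ sudo ceph-volume --cluster bigdata lvm create --filestore --data /dev/sdb --journal /dev/sdc1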

Actual results: 
The OSDs do not come back up after the reboot.


Expected results:
OSDs should be up after reboot.

Additional info:

Workaround: 
1. Create a symlink from the custom cluster configuration file to ceph.conf:
$ sudo ln -s /etc/ceph/<custom-name>.conf /etc/ceph/ceph.conf

2. Reboot the node.
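
After the reboot, one quick way to check that the OSDs came back (the OSD id 0 is a placeholder, and <custom-name> matches the cluster name used above):

$ sudo systemctl status ceph-osd@0
$ sudo ceph --cluster <custom-name> osd tree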
