Bug 1372804
| Summary: | rhel-osp-director: The ceph OSD daemon is not activated with ext4 file system | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Alexander Chuzhoy <sasha> |
| Component: | puppet-ceph | Assignee: | Giulio Fidente <gfidente> |
| Status: | CLOSED ERRATA | QA Contact: | Yogev Rabl <yrabl> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 10.0 (Newton) | CC: | ahirshbe, aschultz, dbecker, dnavale, gfidente, jjoyce, johfulto, jschluet, mburns, morazi, rhel-osp-director-maint, sasha, slinaber, srevivo, tvignaud, yrabl |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | 10.0 (Newton) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | puppet-ceph-2.1.0-0.20160926220714.c764ef8.el7ost | Doc Type: | Known Issue |
| Doc Text: | (see below) | | |

Doc Text:
Previously, the Ceph Storage nodes used the local filesystem, formatted with `ext4`, as the back end for the `ceph-osd` service.
Note: Some `overcloud-full` images for Red Hat OpenStack Platform 9 (Mitaka) were created using `ext4` instead of `xfs`.
With the Jewel release, `ceph-osd` checks the maximum file name length allowed by the back end and refuses to start if the limit is lower than the one configured for Ceph itself. To verify which filesystem is in use for `ceph-osd`, log in to the Ceph Storage nodes and use the following command:
# df -l --output=fstype /var/lib/ceph/osd/ceph-$ID
Here, `$ID` is the OSD ID. For example:
# df -l --output=fstype /var/lib/ceph/osd/ceph-0
Note: A single Ceph Storage node might host multiple `ceph-osd` instances, in which case there will be one subdirectory under `/var/lib/ceph/osd/` for each instance.
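For example, a minimal sketch (assuming the default `/var/lib/ceph/osd/ceph-$ID` layout described above) that reports the backing filesystem of every OSD data directory on a node:

```bash
#!/bin/bash
# Report the filesystem type backing each ceph-osd data directory on this node.
# Assumes the default /var/lib/ceph/osd/ceph-$ID layout described above.
for osd_dir in /var/lib/ceph/osd/ceph-*; do
    [ -d "$osd_dir" ] || continue
    fstype=$(df -l --output=fstype "$osd_dir" | tail -n 1)
    echo "$osd_dir: $fstype"
done
```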
If *any* of the OSD instances is backed by an `ext4` filesystem, it is necessary to configure Ceph to use shorter file names. This can be done by deploying or upgrading with an additional environment file containing the following:
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64
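For instance, a sketch of writing such a file and passing it at deploy or upgrade time; the file name `ceph-ext4-workaround.yaml` is illustrative, and the other `-e` arguments depend on the environment files already used for the deployment:

```bash
# Write the extra environment file (the file name is illustrative).
cat > ~/ceph-ext4-workaround.yaml <<'EOF'
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64
EOF

# Include it alongside the environment files already passed to the deployment,
# for example the storage environment used in the original report.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e ~/ceph-ext4-workaround.yaml
```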
As a result, you can verify that each `ceph-osd` instance is up and running after an upgrade from Red Hat OpenStack Platform 9 to Red Hat OpenStack Platform 10.

| Story Points: | --- | | |
|---|---|---|---|
| Clone Of: | | Environment: | |
| Last Closed: | 2016-12-14 15:56:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

This is seen when deploying a Ceph OSD backed by a local filesystem formatted with ext4; it does not occur when the local filesystem is formatted with xfs, nor when using dedicated disks. The bug affects Ceph Jewel and is tracked via http://tracker.ceph.com/issues/16187

The default RADOS maximum object name length in Ceph is greater than the length allowed by the filesystem. As I see it, the options we have are:

1. Lower the default Ceph maximum object name length in TripleO so that it always fits. This would in effect change a default in Ceph, which we don't want, and it is also unsafe with radosgw, which will fail to serve requests for which the object name requested by the user is longer than the one we set. This would be achieved by https://review.openstack.org/#/c/365210/ and https://review.openstack.org/#/c/358029/

2. Change the default filesystem we use for the overcloud image to xfs, as recommended in http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/

3. Provide an option for, and document how to, set the maximum object name length at deployment time for those using ext4. This is currently possible with an environment file like the following:

parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      global/osd_max_object_name_len:
        value: 256
      global/osd max_object_namespace_len:
        value: 64

Tried the last option from comment #2. Here's the generated ceph.conf on the storage node:

[global]
osd_max_object_name_len = 256
osd max_object_namespace_len = 64
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = c849942e-7465-11e6-871b-525400230552
cluster_network = 192.168.120.19/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.110.21,192.168.110.18,192.168.110.10
auth_client_required = cephx
public_network = 192.168.110.20/24

I was able to successfully deploy the overcloud and even use the Ceph storage. However, once the Ceph node was rebooted, ceph-osd could not be started:

Sep 06 20:52:06 overcloud-cephstorage-0.localdomain systemd[1]: Starting Ceph object storage daemon...
Sep 06 20:52:07 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:07.079244 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa9880611e0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa98805f170).fault Sep 06 20:52:10 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:10.079497 7fa98c539700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c000c80 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c001f90).fault Sep 06 20:52:13 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:13.080485 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c005270 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c006530).fault Sep 06 20:52:16 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:16.080162 7fa98c539700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c000c80 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c002410).fault Sep 06 20:52:19 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:19.080424 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c002f20).fault Sep 06 20:52:22 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:22.081550 7fa98c539700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c000c80 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c0035e0).fault Sep 06 20:52:25 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:25.080984 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c004040).fault Sep 06 20:52:28 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:28.081322 7fa98c539700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c008e50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c000d70).fault Sep 06 20:52:31 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:31.081894 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c007950).fault Sep 06 20:52:34 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:34.084586 7fa98c539700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c0086e0).fault Sep 06 20:52:37 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:37.082377 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c005270 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00b660).fault Sep 06 20:52:40 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:40.082604 7fa98c539700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c001010).fault Sep 06 20:52:43 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:43.083734 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c001a20).fault Sep 06 20:52:46 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:46.083257 7fa98c539700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c008e50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00cfd0).fault Sep 06 20:52:49 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:49.084638 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00be40).fault Sep 06 20:52:52 
overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:52.083885 7fa98c539700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00a210).fault Sep 06 20:52:55 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:55.084976 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c005270 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00a7d0).fault Sep 06 20:52:58 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:52:58.084527 7fa98c539700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00e1c0).fault Sep 06 20:53:01 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:01.084814 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00e590).fault Sep 06 20:53:04 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:04.086163 7fa98c539700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c008e50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00f2e0).fault Sep 06 20:53:07 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:07.085352 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00fd50).fault Sep 06 20:53:10 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:10.085659 7fa98c539700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c0103e0).fault Sep 06 20:53:13 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:13.085919 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c006530).fault Sep 06 20:53:16 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:16.086241 7fa98c539700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c0070d0).fault Sep 06 20:53:19 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:19.086610 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c006530).fault Sep 06 20:53:22 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:22.086874 7fa98c539700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c008e50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00d260).fault Sep 06 20:53:25 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:25.088020 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c005270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c006530).fault Sep 06 20:53:28 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:28.087886 7fa98c539700 0 -- :/3878752885 >> 192.168.110.18:6789/0 pipe(0x7fa97c008e50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c00dc40).fault Sep 06 20:53:31 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:31.088054 7fa98c63a700 0 -- :/3878752885 >> 192.168.110.21:6789/0 pipe(0x7fa97c005270 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c006530).fault Sep 06 20:53:34 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[3078]: 2016-09-06 20:53:34.104016 7fa98c539700 0 -- :/3878752885 >> 192.168.110.10:6789/0 pipe(0x7fa97c008e50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa97c004c20).fault Sep 06 20:53:37 overcloud-cephstorage-0.localdomain 
systemd[1]: ceph-osd start-pre operation timed out. Terminating.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: Failed to start Ceph object storage daemon.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: Unit ceph-osd entered failed state.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: ceph-osd failed.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: ceph-osd holdoff time over, scheduling restart.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: start request repeated too quickly for ceph-osd
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: Failed to start Ceph object storage daemon.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: Unit ceph-osd entered failed state.
Sep 06 20:53:37 overcloud-cephstorage-0.localdomain systemd[1]: ceph-osd failed.

Just an FYI, there's a typo in the item from comment #2: it should be 'global/osd_max_object_namespace_len', not 'global/osd max_object_namespace_len'. That may be why it didn't work for you. Try again with:
parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      global/osd_max_object_name_len:
        value: 256
      global/osd_max_object_namespace_len:
        value: 64
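If in doubt about which spelling actually took effect, a quick check on a Ceph Storage node (a sketch; it assumes the standard `/etc/ceph/ceph.conf` path shown in the comments below) is:

```bash
# Show whichever osd*max_object* settings Puppet rendered into ceph.conf;
# the correctly spelled keys should appear with underscores.
grep 'max_object' /etc/ceph/ceph.conf
```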

Thanks for looking, Alex. The missing underscore was indeed the culprit; it works now. However, despite one successful start of Ceph after a reboot on one setup, I still see the same issue reproduce:
I have the following in the included network-environment.yaml:
parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      global/osd_max_object_name_len:
        value: 256
      global/osd_max_object_namespace_len:
        value: 64
Here's the generated ceph.conf on the ceph node:
[global]
osd_max_object_namespace_len = 64
osd_max_object_name_len = 256
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = cb02152a-75d6-11e6-b48d-5254001d15a4
cluster_network = 192.168.120.19/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.110.13,192.168.110.17,192.168.110.21
auth_client_required = cephx
public_network = 192.168.110.20/24
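To re-check the reproduction after rebooting the Ceph Storage node, a short sketch using the same commands as the original report (the OSD is expected to report `up` once the workaround takes effect):

```bash
# After a reboot of the Ceph Storage node, confirm whether the OSD came back up.
ceph osd tree                        # osd.0 should be "up", not "down"
journalctl -u ceph-osd | tail -n 20  # look for prestart timeouts or "File name too long"
```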

Hi Giulio, that's right. I actually filed a separate BZ for what's described in comment #6: https://bugzilla.redhat.com/show_bug.cgi?id=1374465. Thanks.

With the inclusion of https://review.openstack.org/358029 in the puddle, the workaround to deploy an OSD on ext4 (which we should document in the release notes) becomes:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64

*** Bug 1374985 has been marked as a duplicate of this bug. ***

Verified on puppet-ceph-2.2.1-3.el7ost.noarch.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html

rhel-osp-director: The ceph OSD daemon is not activated with ext4 file system

Environment:
instack-undercloud-5.0.0-0.20160818065636.41ef775.el7ost.noarch
openstack-puppet-modules-9.0.0-0.20160802183056.8c758d6.el7ost.noarch
openstack-tripleo-heat-templates-5.0.0-0.20160823140311.72404b.1.el7ost.noarch
puppet-ceph-2.0.0-0.20160823145734.4e36628.el7ost.noarch
libcephfs1-10.2.2-38.el7cp.x86_64
ceph-radosgw-10.2.2-38.el7cp.x86_64
ceph-mon-10.2.2-38.el7cp.x86_64
ceph-common-10.2.2-38.el7cp.x86_64
python-cephfs-10.2.2-38.el7cp.x86_64
ceph-selinux-10.2.2-38.el7cp.x86_64
ceph-base-10.2.2-38.el7cp.x86_64
ceph-osd-10.2.2-38.el7cp.x86_64

Steps to reproduce:

1. Deploy the overcloud with:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server clock.redhat.com --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e network-environment.yaml --ceph-storage-scale 1

2. Check the OSD status on the Ceph node:

[root@overcloud-cephstorage-0 ~]# ceph osd tree
ID WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03749 root default
-2 0.03749     host overcloud-cephstorage-0
 0 0.03749         osd.0                       down        0          1.00000

[root@overcloud-cephstorage-0 ~]# journalctl -u ceph-osd|tail
Sep 02 16:43:26 overcloud-cephstorage-0.localdomain systemd[1]: start request repeated too quickly for ceph-osd
Sep 02 16:43:26 overcloud-cephstorage-0.localdomain systemd[1]: Failed to start Ceph object storage daemon.
Sep 02 16:43:26 overcloud-cephstorage-0.localdomain systemd[1]: Unit ceph-osd entered failed state.
Sep 02 16:43:26 overcloud-cephstorage-0.localdomain systemd[1]: ceph-osd failed.
Sep 02 17:05:28 overcloud-cephstorage-0.localdomain systemd[1]: start request repeated too quickly for ceph-osd
Sep 02 17:05:28 overcloud-cephstorage-0.localdomain systemd[1]: Failed to start Ceph object storage daemon.
Sep 02 17:05:28 overcloud-cephstorage-0.localdomain systemd[1]: ceph-osd failed.
Sep 02 17:05:38 overcloud-cephstorage-0.localdomain systemd[1]: start request repeated too quickly for ceph-osd
Sep 02 17:05:38 overcloud-cephstorage-0.localdomain systemd[1]: Failed to start Ceph object storage daemon.
Sep 02 17:05:38 overcloud-cephstorage-0.localdomain systemd[1]: ceph-osd failed.

Sep 02 16:43:23 overcloud-cephstorage-0.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Sep 02 16:43:23 overcloud-cephstorage-0.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Sep 02 16:43:24 overcloud-cephstorage-0.localdomain systemd[1]: Starting Ceph object storage daemon...
Sep 02 16:43:24 overcloud-cephstorage-0.localdomain ceph-osd-prestart.sh[17528]: create-or-move updating item name 'osd.0' weight 0.0375 at location {host=overcloud-cephstorage-0,root=default} to crush map
Sep 02 16:43:24 overcloud-cephstorage-0.localdomain systemd[1]: Started Ceph object storage daemon.
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.056115 7fdb4b9af800 -1 filestore(/var/lib/ceph/osd/ceph-0) WARNING: max attr value size (1024) is smaller than osd_max_objec
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.140627 7fdb4b9af800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.141471 7fdb4b9af800 -1 osd.0 0 backend (filestore) is unable to support max object name[space] len
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.141477 7fdb4b9af800 -1 osd.0 0 osd max object name len = 2048
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.141478 7fdb4b9af800 -1 osd.0 0 osd max object namespace len = 256
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.141479 7fdb4b9af800 -1 osd.0 0 (36) File name too long
Sep 02 16:43:25 overcloud-cephstorage-0.localdomain ceph-osd[17576]: 2016-09-02 16:43:25.147516 7fdb4b9af800 -1 ** ERROR: osd init failed: (36) File name too long

There's a ceph bug: http://tracker.ceph.com/issues/16187

I was able to work around the issue after:
1. adding the following 2 lines to /etc/ceph/ceph.conf on the ceph node:
osd max object name len = 256
osd max object namespace len = 64
2. systemctl start ceph-osd

Expected result: shipping with a shorter max name set in ceph.conf
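A sketch of that manual workaround as a single script (it assumes the two settings are not already present and that [global] is the last section in /etc/ceph/ceph.conf, as in the configurations shown above; the deployment-time environment file remains the supported approach):

```bash
#!/bin/bash
# Manual workaround on an already-deployed Ceph Storage node, as described above.
# Appending at the end of the file keeps the settings in the [global] section,
# which is the last (and only) section in the ceph.conf files shown in this report.
cat >> /etc/ceph/ceph.conf <<'EOF'
osd max object name len = 256
osd max object namespace len = 64
EOF

# Restart the OSD service (unit name as used in the journal output above).
systemctl start ceph-osd
```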