
Who When What Removed Added
Alexander Chuzhoy 2016-09-02 18:21:44 UTC Priority unspecified high
Target Release --- 10.0 (Newton)
Target Milestone --- ga
Giulio Fidente 2016-09-03 06:55:22 UTC CC gfidente
Alex Schultz 2016-09-07 15:17:33 UTC CC aschultz
John Fulton 2016-09-12 20:54:41 UTC CC johfulto
Giulio Fidente 2016-09-13 14:45:45 UTC Status NEW POST
CC srevivo
Component rhel-osp-director openstack-puppet-modules
Link ID OpenStack gerrit 358029
QA Contact ohochman achernet
Giulio Fidente 2016-09-13 15:02:06 UTC Assignee athomas gfidente
Jon Schlueter 2016-09-14 13:57:06 UTC CC jschluet
Assignee gfidente emacchi
Jon Schlueter 2016-09-14 14:26:29 UTC Assignee emacchi gfidente
John Fulton 2016-09-15 13:38:57 UTC CC ahirshbe
Arik Chernetsky 2016-09-18 13:48:32 UTC QA Contact achernet yrabl
Jason Joyce 2016-09-22 16:30:04 UTC CC jjoyce
Giulio Fidente 2016-10-03 13:18:04 UTC CC slinaber, tvignaud
Component openstack-puppet-modules puppet-ceph
QA Contact yrabl nlevinki
Giulio Fidente 2016-10-03 13:57:46 UTC Status POST MODIFIED
Fixed In Version puppet-ceph-2.1.0-0.20160926220714.c764ef8.el7ost
Jon Schlueter 2016-10-03 14:14:00 UTC Target Milestone ga rc
errata-xmlrpc 2016-10-03 14:20:44 UTC Status MODIFIED ON_QA
Yogev Rabl 2016-10-05 10:29:24 UTC CC yrabl
QA Contact nlevinki yrabl
Jon Schlueter 2016-10-31 11:52:39 UTC Keywords Triaged
Yogev Rabl 2016-11-13 19:27:23 UTC Status ON_QA VERIFIED
Giulio Fidente 2016-12-13 12:31:29 UTC CC sasha
Doc Text Cause:
The cephstorage node uses a local filesystem formatted with ext4 as the backend for the ceph-osd service. Some overcloud-full images for OSP9 were incorrectly created using ext4 instead of xfs.

Consequence:
From the Jewel release, ceph-osd will check the max file name length allowed by the backend and refuse to start if the limit is lower than the one configured for Ceph itself.

Workaround (if any):
It should be possible to verify what filesystem is in use on the cephstorage nodes using a command like:

df -l --output=fstype /var/lib/ceph/osd/ceph-$ID

where $ID has to be replaced with the OSD id, for example:

df -l --output=fstype /var/lib/ceph/osd/ceph-0

NOTE: a single cephstorage node might host more than one ceph-osd instance, in which case there will be a separate subdirectory in /var/lib/ceph/osd/ for each instance.

If *any* of the OSD instances is backed by ext4, it is necessary to configure Ceph to use shorter file names, which is possible by deploying/upgrading with an additional environment file containing the following:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64

Result:
Verify that each and every ceph-osd instance is up and running after the upgrade from OSP9 to OSP10
Doc Type If docs needed, set a value Known Issue
Flags needinfo?(sasha)
Giulio Fidente 2016-12-13 12:35:54 UTC Doc Text
Cause:
The cephstorage nodes use a local filesystem formatted with ext4 as the backend for the ceph-osd service. NOTE: some overcloud-full images for OSP9 were created using ext4 instead of xfs.

Consequence:
In the Jewel release, 'ceph-osd' will check the max file name length allowed by the backend and refuse to start if the limit is lower than the one configured for Ceph itself.

Workaround (if any):
It should be possible to verify what filesystem is in use for 'ceph-osd' by logging on to the cephstorage nodes and using a command like:

df -l --output=fstype /var/lib/ceph/osd/ceph-$ID

where $ID is the OSD id, for example:

df -l --output=fstype /var/lib/ceph/osd/ceph-0

NOTE: a single cephstorage node might host multiple 'ceph-osd' instances, in which case there will be a separate subdirectory in /var/lib/ceph/osd/ for each instance.

If *any* of the OSD instances is backed by ext4, it is necessary to configure Ceph to use shorter file names, which is possible by deploying/upgrading with an additional environment file, containing the following:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64

Result:
Verify that each and every 'ceph-osd' instance is up and running after the upgrade from OSP9 to OSP10
Alexander Chuzhoy 2016-12-13 19:41:33 UTC Flags needinfo?(sasha) needinfo?(mburns)
Giulio Fidente 2016-12-13 19:59:04 UTC Doc Text
Cause:
The cephstorage nodes use a local filesystem formatted with ext4 as the backend for the ceph-osd service. NOTE: some overcloud-full images for OSP9 were created using ext4 instead of xfs.

Consequence:
In the Jewel release, 'ceph-osd' will check the max file name length allowed by the backend and refuse to start if the limit is lower than the one configured for Ceph itself.

Workaround (if any):
It should be possible to verify what filesystem is in use for 'ceph-osd' by logging on to the cephstorage nodes and using a command like:

# df -l --output=fstype /var/lib/ceph/osd/ceph-$ID

where $ID is the OSD id, for example:

# df -l --output=fstype /var/lib/ceph/osd/ceph-0

NOTE: a single cephstorage node might host multiple 'ceph-osd' instances, in which case there will be a separate subdirectory in /var/lib/ceph/osd/ for each instance.

If *any* of the OSD instances is backed by an ext4 filesystem, it is necessary to configure Ceph to use shorter file names, which is possible by deploying/upgrading with an additional environment file, containing the following:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64

Result:
Verify that each and every 'ceph-osd' instance is up and running after the upgrade from OSP9 to OSP10
Deepti Navale 2016-12-14 02:53:02 UTC CC dnavale
Doc Text
Previously, the Ceph Storage nodes used a local filesystem formatted with `ext4` as the back end for the `ceph-osd` service.

Note: Some `overcloud-full` images for Red Hat OpenStack Platform 9 (Mitaka) were created using `ext4` instead of `xfs`.

With the Jewel release, `ceph-osd` checks the maximum file name length allowed by the back end and refuses to start if the limit is lower than the one configured for Ceph itself. As a workaround, it is possible to verify the filesystem in use for `ceph-osd` by logging in to the Ceph Storage nodes and running the following command:

# df -l --output=fstype /var/lib/ceph/osd/ceph-$ID

Here, $ID is the OSD ID, for example:

# df -l --output=fstype /var/lib/ceph/osd/ceph-0

Note: A single Ceph Storage node might host multiple `ceph-osd` instances, in which case there will be a separate subdirectory in `/var/lib/ceph/osd/` for each instance.
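
To check all of the OSD instances on a node in one pass, a short shell loop over the OSD directories can be used (an illustrative sketch, assuming the standard `/var/lib/ceph/osd/ceph-<ID>` layout shown above):

# for d in /var/lib/ceph/osd/ceph-*; do echo -n "$d: "; df -l --output=fstype "$d" | tail -n 1; done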

If *any* of the OSD instances is backed by an `ext4` filesystem, it is necessary to configure Ceph to use shorter file names, which is possible by deploying/upgrading with an additional environment file, containing the following:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_max_object_name_len: 256
    ceph::profile::params::osd_max_object_namespace_len: 64
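
For example, assuming the parameters above are saved to an environment file (the path `~/templates/ceph-ext4.yaml` below is only illustrative), the file can be passed to the existing deploy or upgrade command with an extra `-e` option:

# openstack overcloud deploy --templates [existing options and -e environment files] -e ~/templates/ceph-ext4.yaml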

As a result, you can verify that each and every `ceph-osd` instance is up and running after an upgrade from Red Hat OpenStack Platform 9 to Red Hat OpenStack Platform 10.
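
One illustrative way to perform that check (not part of the original text) is to compare the OSD count reported by the cluster with the number of OSDs that are up, and to inspect the individual `ceph-osd` services on the Ceph Storage nodes, where $ID is again the OSD ID:

# ceph osd stat
# systemctl status ceph-osd@$ID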
errata-xmlrpc 2016-12-14 13:38:29 UTC Status VERIFIED RELEASE_PENDING
errata-xmlrpc 2016-12-14 15:56:24 UTC Status RELEASE_PENDING CLOSED
Resolution --- ERRATA
Last Closed 2016-12-14 10:56:24 UTC
Mike Burns 2017-04-17 13:50:17 UTC Flags needinfo?(mburns)
