Red Hat Bugzilla – Bug 1299308
disable filestore_xfs_extsize by default
Last modified: 2017-07-30 11:21:28 EDT
Description of problem:
filestore_xfs_extsize defaults to "true" in Hammer. This option is designed to reduce fragmentation on the OSDs.
Subsequent tests found that disabling filestore_xfs_extsize in upstream hammer improves large sequential write performance by roughly 20% (Ben Turner's cluster), and by smaller amounts in some other tests. This brings us closer to the large sequential write performance of Firefly.
Version-Release number of selected component (if applicable):
ceph-0.94.5-1 and earlier ceph-0.94.z versions
Steps to Reproduce:
1. Leave filestore_xfs_extsize unset (currently defaults to "true"),
2. Run large sequential write tests via CBT, note performance numbers,
3. Set filestore_xfs_extsize to "false",
4. Re-run the large sequential write tests in CBT, note performance numbers.
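For step 3, the setting can be overridden ahead of the default change with a ceph.conf entry on the OSD hosts (the section and option name are standard; the fragment below is illustrative, and the OSDs must be restarted for the filestore to pick it up):

```ini
# ceph.conf on the OSD hosts -- disable XFS extent-size hints
# (read by the filestore when the OSD starts)
[osd]
filestore xfs extsize = false
```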
Actual results:
The default filestore_xfs_extsize setting results in a large sequential write performance degradation.
Expected results:
The default filestore_xfs_extsize setting should not result in a large sequential write performance degradation.
The proposed fix is to set filestore_xfs_extsize back to "false" in src/common/config_opts.h.
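In hammer, option defaults in src/common/config_opts.h are declared with an OPTION(name, type, default) macro, so the proposed fix amounts to flipping one default (a sketch; the exact surrounding lines may differ from the merged PR):

```diff
// src/common/config_opts.h
-OPTION(filestore_xfs_extsize, OPT_BOOL, true)
+OPTION(filestore_xfs_extsize, OPT_BOOL, false)
```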
This change comes with a trade-off: it reintroduces fragmentation on the OSDs. To address this, we should add documentation that explains the fragmentation cost and suggests that customers with large sequential read use cases (object storage/CDN) toggle the value back to reduce the fragmentation impact over time.
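For customers weighing that trade-off, XFS fragmentation on an OSD data partition can be checked with xfs_db in read-only mode (the device path below is a placeholder for the actual OSD partition; results are most reliable with the filesystem unmounted or quiesced):

```shell
# Report the XFS fragmentation factor for an OSD data device
# /dev/sdb1 is an example path -- substitute the real OSD partition
xfs_db -r -c frag /dev/sdb1
```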
Upstream PR @ https://github.com/ceph/ceph/pull/7265 - Mark, could you please review it?
PR was merged upstream; need to cherry-pick to ceph-1.3.1-rhel-patches in Gerrit.
Ubuntu build with this patch is ceph_0.94.3.3-1redhat1trusty
(In reply to Ken Dreyer (Red Hat) from comment #5)
> Ubuntu build with this patch is ceph_0.94.3.3-1redhat1trusty
I had to bump the version number, so it's ceph_0.94.3.3-2redhat1trusty
Marking this bug as Verified, as this was tested as part of the 1.3.1 Async Release.
Confirmed this fix is also part of the 1.3.2 code base.
ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config get filestore_xfs_extsize
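To confirm the effective value on every OSD running on a host, the same admin-socket query can be looped over all sockets (the socket path layout below is the default for this release; adjust if it has been customized):

```shell
# Query the live configuration of each local OSD via its admin socket
for sock in /var/run/ceph/ceph-osd.*.asok; do
  echo "$sock:"
  ceph --admin-daemon "$sock" config get filestore_xfs_extsize
done
```

After the fix, each daemon should report the option as false.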
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.