Bug 1554963
| Summary: | Pool create cmd's expected_num_objects is not properly interpreted | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Mike Hackett <mhackett> |
| Component: | RADOS | Assignee: | Kefu Chai <kchai> |
| Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr> |
| Severity: | high | Docs Contact: | Aron Gunn <agunn> |
| Priority: | high | | |
| Version: | 2.5 | CC: | agunn, ceph-eng-bugs, ceph-qe-bugs, dzafman, hnallurv, jdurgin, kchai, tserlin, vashastr, vumrao |
| Target Milestone: | z1 | | |
| Target Release: | 2.5 | | |
| Hardware: | x86_64 | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-10.2.10-19.el7cp; Ubuntu: ceph_10.2.10-19redhat1 | Doc Type: | Bug Fix |
| Doc Text: | .The 'expected_num_objects' option was not working as expected. Previously, when the 'ceph osd pool create' command was used with the 'expected_num_objects' option, placement group (PG) directories were not pre-created at pool creation time as expected, resulting in performance drops when filestore splitting occurred. With this update, the 'expected_num_objects' parameter is passed through to filestore correctly, and PG directories for the expected number of objects are pre-created at pool creation time. | Story Points: | --- |
| Clone Of: | | | |
| : | 1554970 1579039 (view as bug list) | Environment: | |
| Last Closed: | 2018-07-26 18:06:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1536401, 1554970, 1579039 | | |
Description
Mike Hackett
2018-03-13 16:16:48 UTC
Reproducer:
- RHEL 7.4
- RHCS 2.5.1 (10.2.10-17.el7cp)
- 3 OSD nodes
- 9 OSDs
- 3 Monitors
- Merging was disabled on all 9 OSDs via the global config option:
filestore_merge_threshold = -10
[root@ceph-node2 ceph]# ceph --admin-daemon ceph-osd.0.asok config show |grep filestore_merge
"filestore_merge_threshold": "-10",
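For background (not stated in this report): in filestore, a negative filestore_merge_threshold disables subdirectory merging, but its magnitude still feeds the split calculation; a leaf PG directory is split once it holds more than roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 files. A minimal sketch of that arithmetic, assuming the upstream default filestore_split_multiple of 2:

```python
# Sketch of filestore's split-threshold arithmetic. The formula and the
# default split_multiple=2 are assumptions based on upstream filestore
# defaults, not values taken from this bug report.
SUBDIR_FANOUT = 16  # filestore fans objects out over 16 subdirs (one hex nibble)

def split_threshold(merge_threshold: int, split_multiple: int = 2) -> int:
    """Approximate max files a leaf PG directory may hold before a split.

    A negative merge_threshold disables merging, but its absolute value
    still participates in the split calculation.
    """
    return split_multiple * abs(merge_threshold) * SUBDIR_FANOUT

# With the -10 used in this reproducer, splitting starts around 320 files:
print(split_threshold(-10))  # 320
```

So the -10 setting here keeps directories from merging back while leaving splitting (and pre-splitting) in effect, which is what makes the missing pre-creation observable.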
- Pool 'test' created with 64 PGs and expected_num_objects 1000000. No other pools existed in the cluster before this pool was created.
[root@ceph-node1 ceph-ansible]# ceph osd pool create test 64 64 replicated replicated_ruleset 1000000
pool 'test' created
- Validated on osd.0, in the newly created PG, that the directory structure was not pre-created:
[root@ceph-node2 1.10_head]# pwd
/var/lib/ceph/osd/ceph-0/current/1.10_head
[root@ceph-node2 1.10_head]# tree /var/lib/ceph/osd/ceph-0/current/1.10_head/ | more
/var/lib/ceph/osd/ceph-0/current/1.10_head/
└── __head_00000010__1
0 directories, 1 file
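As context for why an empty PG directory indicates the bug: with expected_num_objects = 1000000 spread over 64 PGs, each PG expects about 15625 objects, far above the ~320-file split point implied by merge threshold -10, so filestore should pre-split each PG directory at creation time. A hedged sketch of that estimate (the formula and rounding are an approximation of filestore's behavior, not its exact code path):

```python
def expected_presplit_depth(expected_num_objects: int, pg_num: int,
                            split_threshold: int, fanout: int = 16) -> int:
    """Smallest directory depth d such that the expected objects per PG,
    spread over fanout**d leaf directories, stay under the split threshold.
    Illustrative approximation only.
    """
    objs_per_pg = expected_num_objects / pg_num
    depth = 0
    while objs_per_pg / (fanout ** depth) > split_threshold:
        depth += 1
    return depth

# Reproducer's numbers: 1,000,000 objects over 64 PGs, split at ~320 files.
print(expected_presplit_depth(1_000_000, 64, 320))  # 2
```

Under these assumptions a couple of levels of DIR_* subdirectories would be expected at creation time, yet the tree above shows a single bare head file, demonstrating that expected_num_objects was ignored.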
- Pool info:
[root@ceph-node2 1.10_head]# ceph osd dump |grep pool
pool 1 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 45 flags hashpspool stripe_width 0
Issue is reproduced -
Version - 10.2.10-22.el7cp
Container image - ceph-2-rhel-7-containers-candidate-79858-20180629034923
---------------------------------------------
$ sudo docker exec ceph-mon-<mon-node> ceph daemon mon.<mon-node> config show |grep filestore_merge_threshold
"filestore_merge_threshold": "-10"
$ sudo docker exec ceph-mon-<mon-node> ceph osd pool create test_1 64 64 replicated replicated_ruleset 100000
pool 'test_1' created
$ sudo docker exec ceph-mon-<mon-node> ceph df|grep test_1
test_1 11 0 0 2633G 0
$ sudo docker exec ceph-mon-<mon-node> ceph pg map 11.0
osdmap e43 pg 11.0 (11.0) -> up [1,3,5] acting [1,3,5]
$ sudo docker exec ceph-osd-<osd-node>-sdd ceph-disk list |grep osd.1
/dev/sdd1 ceph data (dmcrypt LUKS /dev/dm-0), cluster ceph, osd.1, journal /dev/sdd2
# tree /var/lib/ceph/osd/ceph-1/current/11.0_head/
/var/lib/ceph/osd/ceph-1/current/11.0_head/
`-- __head_00000000__b
0 directories, 1 file
-------------------------------------------------------
Based on comment 4, I think the issue was seen in ceph-10.2.10-19.el7cp as well.
I'm moving the bug to the ASSIGNED state. Please let me know if I have missed anything.
Regards,
Vasishta Shatsry
QE, Ceph
(In reply to Vasishta from comment #21)
> Issue is reproduced -
>
> Version - 10.2.10-22.el7cp
> Container image - ceph-2-rhel-7-containers-candidate-79858-20180629034923
>
> ---------------------------------------------
> $ sudo docker exec ceph-mon-<mon-node> ceph daemon mon.<mon-node> config show |grep filestore_merge_threshold
> "filestore_merge_threshold": "-10"

This is an OSD config setting; what is set on the monitor is irrelevant. Was it set on the OSDs? If it was not, the directories would not be split.

Hi Josh,
I'm really sorry, I was confused.
Checked both an EC pool and a replicated pool; working fine, moving to VERIFIED state.
# tree /var/lib/ceph/osd/ceph-1/current/15.0s2_head/
/var/lib/ceph/osd/ceph-1/current/15.0s2_head/
`-- DIR_0
|-- DIR_0
| `-- __head_00000000__f_ffffffffffffffff_2
|-- DIR_4
|-- DIR_8
`-- DIR_C
# tree /var/lib/ceph/osd/ceph-4/current/13.0_head/
/var/lib/ceph/osd/ceph-4/current/13.0_head/
`-- DIR_0
|-- DIR_0
| |-- DIR_0
| | `-- __head_00000000__d
| |-- DIR_1
| |-- DIR_2
| |-- DIR_3
| |-- DIR_4
| |-- DIR_5
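The DIR_* layout in the verified trees follows filestore's hash-based placement: each directory level corresponds to one hex nibble of the object's 32-bit hash. To my understanding of the upstream HashIndex code, the nibbles are consumed from the reversed hex string (treat this ordering as an assumption), which is why an object whose hash prints as 00000000 sits under nested DIR_0 directories. A small sketch:

```python
def filestore_dir_path(hash_hex: str, depth: int) -> str:
    """Directory path for an object whose 32-bit hash prints as hash_hex.

    Assumption: filestore's HashIndex consumes the hash's hex digits in
    reversed order, one nibble per directory level.
    """
    reversed_nibbles = hash_hex.upper()[::-1]
    return "/".join(f"DIR_{n}" for n in reversed_nibbles[:depth])

# The verified object __head_00000000__d at depth 3:
print(filestore_dir_path("00000000", 3))  # DIR_0/DIR_0/DIR_0
```

The empty sibling directories (DIR_1 through DIR_5, etc.) are consistent with pre-splitting: they are created up front for hashes that have not landed yet, which is exactly the behavior the original report found missing.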
Regards,
Vasishta
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2261