Description of problem:
When using the pool create command with expected_num_objects, PG directories do not get pre-split at pool creation as expected.

Pool created using:
# ceph osd pool create <name> <pg count> <pg count> <pool type> <ruleset> <expected_num_objects>

Documented here:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/storage_strategies_guide/#create_a_pool

Version-Release number of selected component (if applicable):
RHCS 2.5

How reproducible:
Constant

Steps to Reproduce:
1. Disable merging on the OSDs.
2. Restart the OSDs to apply the merge-disabling change.
3. Create a new pool on a Ceph cluster with no existing pools.
4. Use the expected_num_objects parameter when creating the pool and set a high object count (e.g. 1000000).
5. Run tree on the PG directory to confirm that no directories were pre-created.

Actual results:
The directories are not pre-created as expected.

Expected results:
The directories should be pre-created to prevent future directory splitting.

Additional info:
Upstream tracker: tracker.ceph.com/issues/22530
Upstream PR: https://github.com/ceph/ceph/pull/19651
Reproducer:
- RHEL 7.4
- RHCS 2.5.1 (10.2.10-17.el7cp)
- 3 OSD nodes, 9 OSDs, 3 monitors
- Merging was disabled on all 9 OSDs via the global config option filestore_merge_threshold = -10:

[root@ceph-node2 ceph]# ceph --admin-daemon ceph-osd.0.asok config show | grep filestore_merge
    "filestore_merge_threshold": "-10",

- Pool 'test' created with 64 PGs and expected_num_objects 1000000. No pools existed in the cluster prior to this pool creation.

[root@ceph-node1 ceph-ansible]# ceph osd pool create test 64 64 replicated replicated_ruleset 1000000
pool 'test' created

- Validated on osd.0, in a newly created PG, that the directory structure was not pre-created:

[root@ceph-node2 1.10_head]# pwd
/var/lib/ceph/osd/ceph-0/current/1.10_head
[root@ceph-node2 1.10_head]# tree /var/lib/ceph/osd/ceph-0/current/1.10_head/ | more
/var/lib/ceph/osd/ceph-0/current/1.10_head/
└── __head_00000010__1

0 directories, 1 file

- Pool info:

[root@ceph-node2 1.10_head]# ceph osd dump | grep pool
pool 1 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 45 flags hashpspool stripe_width 0
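For reference, the pre-split depth implied by the reproducer's numbers can be estimated with a rough back-of-envelope sketch. This assumes the commonly documented filestore split threshold of filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects per directory, and a fan-out of 16 hashed DIR_* subdirectories per level; the helper presplit_levels below is hypothetical and only illustrates the arithmetic, it is not Ceph's actual implementation.

```python
import math

def presplit_levels(expected_num_objects, pg_num,
                    split_multiple=2, merge_threshold=-10):
    """Estimate how many DIR_* hash levels a PG directory needs so that
    no leaf directory exceeds the split threshold (hypothetical helper)."""
    objs_per_pg = expected_num_objects / pg_num
    # Commonly documented filestore split threshold (assumption):
    #   filestore_split_multiple * abs(filestore_merge_threshold) * 16
    threshold = split_multiple * abs(merge_threshold) * 16
    if objs_per_pg <= threshold:
        return 0  # objects fit in the PG root, no pre-split needed
    # Each hash level fans out into 16 subdirectories (DIR_0 .. DIR_F)
    return math.ceil(math.log(objs_per_pg / threshold, 16))

# Values from this reproducer: 1000000 expected objects across 64 PGs,
# merge threshold -10, default split multiple 2
print(presplit_levels(1_000_000, 64))  # → 2
```

With these numbers each PG holds roughly 15625 objects against a 320-object threshold, so about two levels of pre-created subdirectories would be expected rather than the empty directory seen above.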
Issue is reproduced -

Version - 10.2.10-22.el7cp
Container image - ceph-2-rhel-7-containers-candidate-79858-20180629034923

---------------------------------------------
$ sudo docker exec ceph-mon-<mon-node> ceph daemon mon.<mon-node> config show | grep filestore_merge_threshold
    "filestore_merge_threshold": "-10"

$ sudo docker exec ceph-mon-<mon-node> ceph osd pool create test_1 64 64 replicated replicated_ruleset 100000
pool 'test_1' created

$ sudo docker exec ceph-mon-<mon-node> ceph df | grep test_1
    test_1    11    0    0    2633G    0

$ sudo docker exec ceph-mon-<mon-node> ceph pg map 11.0
osdmap e43 pg 11.0 (11.0) -> up [1,3,5] acting [1,3,5]

$ sudo docker exec ceph-osd-<osd-node>-sdd ceph-disk list | grep osd.1
 /dev/sdd1 ceph data (dmcrypt LUKS /dev/dm-0), cluster ceph, osd.1, journal /dev/sdd2

# tree /var/lib/ceph/osd/ceph-1/current/11.0_head/
/var/lib/ceph/osd/ceph-1/current/11.0_head/
`-- __head_00000000__b

0 directories, 1 file
-------------------------------------------------------

Based on Comment 4, I think the issue was also seen in ceph-10.2.10-19.el7cp.
I'm moving the bug to the ASSIGNED state. Please let me know if I have missed anything.

Regards,
Vasishta Shastry
QE, Ceph
(In reply to Vasishta from comment #21)
> Issue is reproduced -
>
> Version - 10.2.10-22.el7cp
> Container image - ceph-2-rhel-7-containers-candidate-79858-20180629034923
>
> ---------------------------------------------
> $ sudo docker exec ceph-mon-<mon-node> ceph daemon mon.<mon-node> config
> show |grep filestore_merge_threshold
>     "filestore_merge_threshold": "-10"

This is an OSD config setting - what's set on the monitor is irrelevant. Was it set on the OSDs? If it wasn't, the directories would not be pre-split.
Hi Josh,

I'm really sorry, I was confused. Checked both an EC pool and a replicated pool; working fine. Moving to VERIFIED state.

# tree /var/lib/ceph/osd/ceph-1/current/15.0s2_head/
/var/lib/ceph/osd/ceph-1/current/15.0s2_head/
`-- DIR_0
    |-- DIR_0
    |   `-- __head_00000000__f_ffffffffffffffff_2
    |-- DIR_4
    |-- DIR_8
    `-- DIR_C

# tree /var/lib/ceph/osd/ceph-4/current/13.0_head/
/var/lib/ceph/osd/ceph-4/current/13.0_head/
`-- DIR_0
    |-- DIR_0
    |   |-- DIR_0
    |   |   `-- __head_00000000__d
    |   |-- DIR_1
    |   |-- DIR_2
    |   |-- DIR_3
    |   |-- DIR_4
    |   |-- DIR_5

Regards,
Vasishta
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2261