Bug 1554963 - Pool create cmd's expected_num_objects is not properly interpreted
Summary: Pool create cmd's expected_num_objects is not properly interpreted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 2.5
Hardware: x86_64
OS: All
Priority: high
Severity: high
Target Milestone: z1
Target Release: 2.5
Assignee: Kefu Chai
QA Contact: Vasishta
Docs Contact: Aron Gunn
URL:
Whiteboard:
Depends On:
Blocks: 1536401 1554970 1579039
Reported: 2018-03-13 16:16 UTC by Mike Hackett
Modified: 2021-09-09 13:27 UTC (History)
10 users

Fixed In Version: RHEL: ceph-10.2.10-19.el7cp Ubuntu: ceph_10.2.10-19redhat1
Doc Type: Bug Fix
Doc Text:
.The 'expected_num_objects' option was not working as expected
Previously, when using the 'ceph osd pool create' command with the 'expected_num_objects' option, placement group (PG) directories were not pre-created at pool creation time as expected, resulting in performance drops when filestore splitting occurred. With this update, the 'expected_num_objects' parameter is now passed through to filestore correctly, and PG directories for the expected number of objects are pre-created at pool creation time.
Clone Of:
: 1554970 1579039 (view as bug list)
Environment:
Last Closed: 2018-07-26 18:06:41 UTC
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 22530 0 None None None 2018-03-13 16:16:47 UTC
Ceph Project Bug Tracker 23316 0 None None None 2018-05-16 21:07:09 UTC
Github /ceph ceph pull 22050 0 None None None 2020-05-01 12:29:43 UTC
Github ceph ceph pull 21042 0 'None' closed [DNM] jewel: mon/OSDMonitor.cc: fix expected_num_objects interpret error 2020-05-01 12:29:42 UTC
Red Hat Issue Tracker RHCEPH-1556 0 None None None 2021-09-09 13:27:58 UTC
Red Hat Product Errata RHSA-2018:2261 0 None None None 2018-07-26 18:07:49 UTC

Description Mike Hackett 2018-03-13 16:16:48 UTC
Description of problem:
When using the pool create command with expected_num_objects, PG directories are not pre-created (pre-split) at pool creation time as expected.

Pool created using:
# ceph osd pool create <name> <pg count> <pg count> <pool type> <ruleset> <expected_num_objects> 

Documented here:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/storage_strategies_guide/#create_a_pool
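
For illustration, a filled-in form of the syntax above (the pool name, PG counts, and object count here are hypothetical values, not taken from this report):

# ceph osd pool create testpool 128 128 replicated replicated_ruleset 1000000

With expected_num_objects supplied, the OSDs are expected to pre-create the PG directory tree at pool creation time instead of splitting directories later under write load.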

Version-Release number of selected component (if applicable):
RHCS 2.5

How reproducible:
Constant

Steps to Reproduce:
1. Disable merge on the OSDs (see the example config snippet after these steps).
2. Restart the OSDs to apply the merge-disabling change.
3. Create a new pool on a Ceph cluster with no existing pools.
4. Use the expected_num_objects parameter when creating the pool, and set a high number of objects (for example, 1000000).
5. Run tree on the PG directory and confirm that no directories were pre-created.
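
A minimal sketch of steps 1 and 2, assuming a non-containerized RHEL 7 deployment with systemd-managed OSDs (unit names and the way the option is distributed may differ, for example with ceph-ansible or containerized daemons):

# /etc/ceph/ceph.conf on each OSD node
[osd]
filestore_merge_threshold = -10

# systemctl restart ceph-osd.target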

Actual results:
The PG directories are not pre-created as expected.

Expected results:
The directories should be pre-created at pool creation time to prevent future directory splitting.

Additional info:
Upstream tracker: tracker.ceph.com/issues/22530

Upstream PR: https://github.com/ceph/ceph/pull/19651

Comment 4 Mike Hackett 2018-03-13 16:29:57 UTC
Reproducer:

- RHEL 7.4 
- RHCS 2.5.1 (10.2.10-17.el7cp)

3 OSD nodes
9 OSDs
3 Monitors

- Merge was disabled on all 9 OSDs via the global config option:

filestore_merge_threshold = -10

[root@ceph-node2 ceph]# ceph --admin-daemon ceph-osd.0.asok config show |grep filestore_merge
    "filestore_merge_threshold": "-10",


- Pool 'test' created with 64 PGs and expected_num_objects 1000000. No pools existed in the cluster prior to this pool creation.

[root@ceph-node1 ceph-ansible]# ceph osd pool create test 64 64 replicated replicated_ruleset 1000000
pool 'test' created
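
For context, a rough worked example of why pre-creation is expected here (the split-threshold formula below reflects the commonly documented filestore behavior, and assumes the default filestore_split_multiple of 2):

    split threshold per directory = filestore_split_multiple * abs(filestore_merge_threshold) * 16
                                  = 2 * 10 * 16 = 320 objects
    expected objects per PG       = 1000000 / 64 = 15625 objects

Since 15625 is far above 320, filestore should pre-create several levels of PG subdirectories up front instead of splitting them later during writes.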

- Validated on osd.0, in a newly created PG, that the directory structure was not pre-created:

[root@ceph-node2 1.10_head]# pwd
/var/lib/ceph/osd/ceph-0/current/1.10_head

[root@ceph-node2 1.10_head]# tree /var/lib/ceph/osd/ceph-0/current/1.10_head/ | more
/var/lib/ceph/osd/ceph-0/current/1.10_head/
└── __head_00000010__1

0 directories, 1 file

- Pool info:

[root@ceph-node2 1.10_head]# ceph osd dump |grep pool
pool 1 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 45 flags hashpspool stripe_width 0

Comment 21 Vasishta 2018-07-10 17:58:58 UTC
The issue was reproduced -

Version - 10.2.10-22.el7cp
Container image - ceph-2-rhel-7-containers-candidate-79858-20180629034923

---------------------------------------------
$ sudo docker exec ceph-mon-<mon-node> ceph daemon mon.<mon-node> config show |grep filestore_merge_threshold
    "filestore_merge_threshold": "-10"

$ sudo docker exec ceph-mon-<mon-node> ceph osd pool create test_1 64 64 replicated replicated_ruleset 100000
pool 'test_1' created

$ sudo docker exec ceph-mon-<mon-node> ceph df|grep test_1
    test_1                  11         0         0         2633G           0 


$ sudo docker exec ceph-mon-<mon-node> ceph pg map 11.0
osdmap e43 pg 11.0 (11.0) -> up [1,3,5] acting [1,3,5]


$ sudo docker exec ceph-osd-<osd-node>-sdd ceph-disk list |grep osd.1
 /dev/sdd1 ceph data (dmcrypt LUKS /dev/dm-0), cluster ceph, osd.1, journal /dev/sdd2

# tree /var/lib/ceph/osd/ceph-1/current/11.0_head/
/var/lib/ceph/osd/ceph-1/current/11.0_head/
`-- __head_00000000__b

0 directories, 1 file

-------------------------------------------------------

Based on Comment 4, I think the issue was seen in ceph-10.2.10-19.el7cp as well.

I'm moving the bug to the ASSIGNED state. Please let me know if I have missed anything.


Regards,
Vasishta Shatsry
QE, Ceph

Comment 22 Josh Durgin 2018-07-12 21:50:13 UTC
(In reply to Vasishta from comment #21)
> Issue is reproduced -
> 
> Version - 10.2.10-22.el7cp
> Container image - ceph-2-rhel-7-containers-candidate-79858-20180629034923
> 
> ---------------------------------------------
> $ sudo docker exec ceph-mon-<mon-node> ceph daemon mon.<mon-node> config
> show |grep filestore_merge_threshold
>     "filestore_merge_threshold": "-10"

This is an OSD config setting - what is set on the monitor is irrelevant - was it set on the OSDs? If it wasn't, the directories would not be pre-split.
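
For reference, one way to check the value on an OSD itself in a containerized setup like the one in comment 21 (the container and daemon names follow the placeholder pattern used above, and the output is illustrative):

$ sudo docker exec ceph-osd-<osd-node>-sdd ceph daemon osd.1 config get filestore_merge_threshold
{
    "filestore_merge_threshold": "-10"
}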

Comment 23 Vasishta 2018-07-13 18:57:40 UTC
Hi Josh,

I'm really sorry, I was confused.
I checked with both an EC pool and a replicated pool; both are working fine, so I am moving the bug to the VERIFIED state.

# tree /var/lib/ceph/osd/ceph-1/current/15.0s2_head/
/var/lib/ceph/osd/ceph-1/current/15.0s2_head/
`-- DIR_0
    |-- DIR_0
    |   `-- __head_00000000__f_ffffffffffffffff_2
    |-- DIR_4
    |-- DIR_8
    `-- DIR_C

# tree /var/lib/ceph/osd/ceph-4/current/13.0_head/
/var/lib/ceph/osd/ceph-4/current/13.0_head/
`-- DIR_0
    |-- DIR_0
    |   |-- DIR_0
    |   |   `-- __head_00000000__d
    |   |-- DIR_1
    |   |-- DIR_2
    |   |-- DIR_3
    |   |-- DIR_4
    |   |-- DIR_5
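
For reference (this is my reading of filestore's HashIndex layout, so treat it as a hedged sketch rather than authoritative): each DIR_<hex> level corresponds to one hexadecimal digit of the object's placement hash, so an object whose hash is 0x00000000 sits under nested DIR_0 directories, e.g.:

    13.0_head/DIR_0/DIR_0/DIR_0/__head_00000000__d

which matches the tree above. Pre-creating these DIR_* levels at pool creation time is exactly what expected_num_objects is meant to trigger, so seeing multiple nested levels immediately after pool creation is the expected (fixed) behaviour.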

Regards,
Vasishta

Comment 25 errata-xmlrpc 2018-07-26 18:06:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2261

