Bug 1595896

Summary: osd pool create should warn when 'expected_num_objects' is being ignored
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: John Harrigan <jharriga>
Component: RADOS
Assignee: Douglas Fuller <dfuller>
Status: CLOSED ERRATA
QA Contact: Tiffany Nguyen <tunguyen>
Severity: medium
Priority: medium
Version: 3.1
CC: ceph-eng-bugs, dfuller, dzafman, kchai, kdreyer, vakulkar
Target Milestone: rc
Target Release: 3.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-12.2.5-38.el7cp; Ubuntu: 12.2.5-23redhat1xenial
Type: Bug
Last Closed: 2018-09-26 18:22:08 UTC
Bug Depends On: 1592497
Bug Blocks: 1581350

Description John Harrigan 2018-06-27 17:28:42 UTC
Description of problem:
The 'expected_num_objects' parameter only takes effect at pool creation if 'filestore_merge_threshold' is set to a negative value, which is not the default. If a user creates a pool specifying 'expected_num_objects' with the default settings, it is silently ignored.
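
To check whether a given cluster is affected before relying on 'expected_num_objects', the OSD's runtime value can be queried; osd.0 below is just an example daemon, and any reported value >= 0 (the default is 10) means the pre-split request is ignored on that OSD:

   # ceph daemon osd.0 config get filestore_merge_threshold

To make pre-splitting take effect, the threshold can be set to a negative value before the pool is created, for example in ceph.conf (-10 here is an arbitrary negative value), or at runtime with injectargs as in step 2 of the reproducer below:

   [osd]
   filestore_merge_threshold = -10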

Version-Release number of selected component (if applicable):
RHCEPH-3.1-RHEL-7-20180530.ci.0

Steps to Reproduce:
1. Pool creation with default value of "filestore_merge_threshold"
   # ceph daemon osd.105 config get filestore_merge_threshold  <<< 10
   # ceph osd crush rule create-erasure default.rgw.buckets.data myprofile
   # ceph osd pool create default.rgw.buckets.data 4096 erasure myprofile \
     default.rgw.buckets.data 50000000
   The command issues no warning or error, yet no pre-splitting is performed.

2. Pool creation with negative value of "filestore_merge_threshold"
   # ceph tell osd.* injectargs --filestore_merge_threshold -1
   # ceph daemon osd.105 config get filestore_merge_threshold  <<< -1
   # ceph osd crush rule create-erasure default.rgw.buckets.data myprofile
   # ceph osd pool create default.rgw.buckets.data 4096 erasure myprofile \
     default.rgw.buckets.data 50000000
   Pre-splitting is performed, as can be seen by examining the filestore
   backend on an OSD with "ls /var/lib/ceph/osd/ceph-<osdnum>/current":
   each PG head directory has been expanded into a tree of collection
   subdirectories (see the sketch after these steps).
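
One way to quantify the pre-splitting described in step 2 is to count the collection subdirectories under a PG head directory on an OSD; the OSD id and PG id below are placeholders, and the exact counts depend on pg_num, expected_num_objects, and the split/merge thresholds:

   # ls -d /var/lib/ceph/osd/ceph-<osdnum>/current/*_head | wc -l
   # find /var/lib/ceph/osd/ceph-<osdnum>/current/<pgid>_head -type d | wc -l

With pre-splitting in effect, the second command reports a large number of nested DIR_* subdirectories; without it, each *_head directory is essentially flat.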

Actual results:
The pool create command does not warn the user that 'expected_num_objects' was ignored.

Expected results:
The pool create command should warn the user that 'expected_num_objects' is being ignored
because 'filestore_merge_threshold' is not set to a negative value.

Comment 3 John Harrigan 2018-07-03 13:58:12 UTC
Question:
We know that 'expected_num_objects' will be ignored if 'filestore_merge_threshold'
is not a negative value. What other conditions cause it to be ignored?

For instance, is it ignored if the directories already exist on the OSDs?
I know 'expected_num_objects' is specified at pool creation time, but is it
really a per-pool operation?
In other words, if a user specifies 'expected_num_objects' on more than one
pool creation, what happens? Will the pre-create happen for both pool
creation commands or only the first one?

Comment 4 Douglas Fuller 2018-07-11 18:18:32 UTC
Code is ready pending 1592497

Comment 5 Douglas Fuller 2018-07-16 14:30:17 UTC
https://github.com/ceph/ceph/pull/23072

Comment 8 Tiffany Nguyen 2018-08-24 05:36:25 UTC
Verified with build 12.2.5-39.el7cp. When 'filestore_merge_threshold' is set to 10 and a pool is then created with 'expected_num_objects', the command is rejected with an error message:

[root@b08-h03-r620 ceph]# ceph daemon mon.b08-h03-r620 config set filestore_merge_threshold 10
{
    "success": "filestore_merge_threshold = '10' (not observed, change may require restart) "
}
[root@b08-h03-r620 ceph]# ceph osd pool create testpool 100 100 replicated replicated_rule 102400

Error EINVAL: 'expected_num_objects' requires 'filestore_merge_threshold < 0'
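
For comparison, a sketch of the accepted path on the fixed build; the mon name is taken from the output above, -10 is an arbitrary negative value, and the same setting should also be applied to the OSDs (e.g. via ceph.conf) so the pre-split is actually carried out:

[root@b08-h03-r620 ceph]# ceph daemon mon.b08-h03-r620 config set filestore_merge_threshold -10
[root@b08-h03-r620 ceph]# ceph osd pool create testpool 100 100 replicated replicated_rule 102400

With the monitor's filestore_merge_threshold negative, the pool create is expected to be accepted instead of returning EINVAL.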

Comment 10 errata-xmlrpc 2018-09-26 18:22:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819