Bug 1305283

Summary: [CACHE-TIER]: cache tier doesn't obey exact target_max_object number of objects to be retained in cache pool
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: shylesh <shmohan>
Component: RADOS    Assignee: Samuel Just <sjust>
Status: CLOSED NOTABUG QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 1.3.2    CC: ceph-eng-bugs, dzafman, kchai
Target Milestone: rc   
Target Release: 1.3.3   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-02-23 22:18:39 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description shylesh 2016-02-06 17:25:25 UTC
Description of problem:
The cache pool always keeps the number of objects below target_max_objects, never exactly at it. For example, if I set target_max_objects to 50, then no matter how many objects are pushed to the fast pool, only around 48 objects remain in the pool.

Version-Release number of selected component (if applicable):

ceph-0.94.5-4.el7cp.x86_64
How reproducible:

always
Steps to Reproduce:
1. Create a slow pool and a fast pool.
2. Make the fast pool a cache tier for the slow pool.
Here is the cache pool configuration:

[ubuntu@magna105 ~]$ sudo ceph osd pool get fast hit_set_period
hit_set_period: 120  
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast hit_set_count
hit_set_count: 0
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast hit_set_fpp
hit_set_fpp: 0.05
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast target_max_objects
target_max_objects: 50
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast target_max_bytes
target_max_bytes: 0
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast cache_target_dirty_ratio
cache_target_dirty_ratio: 0.4
[ubuntu@magna105 ~]$ sudo ceph osd pool set fast cache_target_dirty_ratio 1
set pool 415 cache_target_dirty_ratio to 1
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast cache_target_full_ratio
cache_target_full_ratio: 1
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast cache_min_flush_age
cache_min_flush_age: 0
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast cache_min_flush_age
cache_min_flush_age: 0
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast min_read_recency_for_promote
min_read_recency_for_promote: 0
[ubuntu@magna105 ~]$ sudo ceph osd pool get fast write_fadvise_dontneed
write_fadvise_dontneed: false

3. Create 100 objects by writing them to the slow pool and observe that approximately target_max_objects objects stay in the fast pool.

4. Now read the 100 objects from the slow pool and observe the number of objects in the fast pool:

[ubuntu@magna105 ~]$ sudo rados -p fast ls   | wc -l
47


Only 47 objects are preserved in the fast pool. Why is the number of objects in the fast pool always less than target_max_objects?
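Steps 1 and 2 above can be sketched with the standard ceph tiering commands (the pool names "slow" and "fast" and the target of 50 are from this report; the PG counts and the writeback cache mode are assumptions, since the report does not state them):

```shell
# Create the backing (slow) and cache (fast) pools; PG counts are assumed.
sudo ceph osd pool create slow 64 64
sudo ceph osd pool create fast 8 8

# Attach the fast pool as a cache tier in front of the slow pool.
sudo ceph osd tier add slow fast
sudo ceph osd tier cache-mode fast writeback
sudo ceph osd tier set-overlay slow fast

# Limit the cache pool to (approximately) 50 objects.
sudo ceph osd pool set fast target_max_objects 50
```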


Actual results:
The cache tier never holds exactly target_max_objects objects.

Expected results:
The fast pool should hold exactly target_max_objects objects.

Additional info:

Comment 2 Samuel Just 2016-02-23 22:18:39 UTC
This really isn't a bug. The PGs round down to avoid over-promoting. Those settings are a guide for the pool, not a guarantee.
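The rounding behaviour described here can be illustrated with a small sketch. This is a simplified model, not Ceph's actual tiering-agent code: it assumes the pool-wide target is split evenly across PGs with integer division, which is enough to show why the aggregate cap lands below target_max_objects (a pg_num of 8 is an assumption; the report does not state the fast pool's PG count):

```python
def effective_cap(target_max_objects: int, pg_num: int) -> int:
    """Model of a pool-wide object target enforced per PG.

    Each PG enforces only its own share of the pool-wide target.
    Integer division rounds each per-PG quota down, so the sum of
    the quotas can fall short of the pool-wide target.
    """
    per_pg_quota = target_max_objects // pg_num
    return per_pg_quota * pg_num

# With the numbers from this report: a target of 50 objects spread
# across a hypothetical 8 PGs gives each PG a quota of 6, so at most
# 48 objects stay cached -- close to, but below, the target of 50.
print(effective_cap(50, 8))   # -> 48
print(effective_cap(50, 1))   # -> 50 (a single PG loses nothing to rounding)
```

This matches the reported observation that roughly 48 objects remain when target_max_objects is 50, and why the shortfall disappears only when the target divides evenly across PGs.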