Bug 1242504

Summary: [Data Tiering]: Frequency counters of un-selected files in the DB won't get cleared after a promotion/demotion cycle
Product: [Community] GlusterFS
Component: tiering
Reporter: Joseph Elwin Fernandes <josferna>
Assignee: Joseph Elwin Fernandes <josferna>
Status: CLOSED CURRENTRELEASE
QA Contact: bugs <bugs>
Severity: high
Priority: unspecified
Version: mainline
CC: bugs, dlambrig, sankarshan
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Last Closed: 2016-06-16 13:23:25 UTC
Type: Bug
Bug Blocks: 1260923

Description Joseph Elwin Fernandes 2015-07-13 12:56:03 UTC
Description of problem:

Change Time Recorder increments the write/read frequency counters on each read or write of a file when "features.record-counters" is "on". It is the responsibility of the tiering migrator to reset these counters to zero for un-selected files after each promotion/demotion cycle, since the frequency counters are a function of those cycles. If the counters are not reset to zero, then:

1) the counters may overflow in the DB
2) The file may be wrongly promoted or demoted.

To check whether the counters are cleared for un-selected files after a promotion/demotion cycle, execute the following sqlite3 query:

$>  echo "select GF_ID, WRITE_FREQ_CNTR,READ_FREQ_CNTR from GF_FILE_TB;" | sqlite3 <brick_path>/.glusterfs/<brick_name>.db;
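The expected reset behaviour can be sketched against a standalone SQLite database. The table and column names below mirror the query above, but the schema is a simplified assumption for illustration, not the actual libgfdb layout, and `reset_unselected` is a hypothetical helper, not a gluster function:

```python
import sqlite3

# In-memory stand-in for <brick_name>.db; the real libgfdb schema has
# more columns, this minimal table is an assumption for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE GF_FILE_TB ("
    "GF_ID TEXT PRIMARY KEY, "
    "WRITE_FREQ_CNTR INTEGER, "
    "READ_FREQ_CNTR INTEGER)"
)
conn.executemany(
    "INSERT INTO GF_FILE_TB VALUES (?, ?, ?)",
    [("gfid-hot", 12, 30),   # heated file, selected for promotion
     ("gfid-cold", 1, 2)],   # un-selected file
)

def reset_unselected(db, selected_gfids):
    """Zero the frequency counters of every file NOT selected for migration."""
    placeholders = ",".join("?" * len(selected_gfids))
    db.execute(
        f"UPDATE GF_FILE_TB SET WRITE_FREQ_CNTR = 0, READ_FREQ_CNTR = 0 "
        f"WHERE GF_ID NOT IN ({placeholders})",
        selected_gfids,
    )
    db.commit()

# Simulate the end of a promotion/demotion cycle: gfid-hot was selected,
# so only gfid-cold's counters should drop back to zero.
reset_unselected(conn, ["gfid-hot"])
rows = {
    gfid: (w, r)
    for gfid, w, r in conn.execute(
        "SELECT GF_ID, WRITE_FREQ_CNTR, READ_FREQ_CNTR FROM GF_FILE_TB"
    )
}
```

The bug reported here is that the equivalent of this reset was not happening on the hot-brick DBs, so the un-selected files' counters kept their stale values.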


Version-Release number of selected component (if applicable):


How reproducible:

1) Create a dist-rep volume 
2) Start volume
3) attach a dist-rep hot tier
4) gluster volume set test features.ctr-enabled on ; gluster volume set test features.record-counters on
5) gluster volume set test cluster.read-freq-threshold <same_value> 
6) gluster volume set test cluster.write-freq-threshold <same_value>
7) Create some files; heat up some of them and leave the others idle so that the cold files get demoted.
8) After a demotion cycle, execute the above-mentioned sqlite query on the hot-brick DBs; you will observe that the counters are not cleared.


Actual results:

The frequency counters of un-selected files are NOT set to zero after a promotion/demotion cycle.


Expected results:

The frequency counters of un-selected files should be set to zero after a promotion/demotion cycle.

Additional info:

Comment 1 Anand Avati 2015-07-13 13:22:44 UTC
REVIEW: http://review.gluster.org/11648 (tier/libgfdb : Setting Freq counters of un-selected files to zero) posted (#1) for review on master by Joseph Fernandes

Comment 2 Anand Avati 2015-08-12 13:47:58 UTC
COMMIT: http://review.gluster.org/11648 committed in master by Dan Lambright (dlambrig) 
------
commit b5a98df6343da6229b1b102883d8e992cd4a55a5
Author: Joseph Fernandes <josferna>
Date:   Mon Jul 13 18:45:11 2015 +0530

    tier/libgfdb : Setting Freq counters of un-selected files to zero
    
    Change Time Recorder increments the write/read frequency counters
    on a read or write of a file, if the "features.record-counters" is
    "on". It is the responsibility of the tiering migrator to reset
    these counters to zero for un-selected files to reset them to zero
    as frequency counters are function of promotion/Demotion cycles.
    If the counters are not set to zero then,
    
    1) the counters may overflow in the DB
    2) The file may be wrongly promoted or demoted.
    
    This fix will reset the freq counters of un-selected files to zero
    after promotion/demotion frequency.
    
    Change-Id: Ideea2c76a52d421a7e67c37fb0c823f552b3da7a
    BUG: 1242504
    Signed-off-by: Joseph Fernandes <josferna>
    Reviewed-on: http://review.gluster.org/11648
    Tested-by: Joseph Fernandes
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Dan Lambright <dlambrig>

Comment 3 Niels de Vos 2016-06-16 13:23:25 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user