Description of problem:
When the quota limit is exceeded, demotion of files does not happen.

Version-Release number of selected component (if applicable):
[root@localhost ec01]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.5-5.el7rhgs.x86_64
glusterfs-api-3.7.5-5.el7rhgs.x86_64
glusterfs-cli-3.7.5-5.el7rhgs.x86_64
glusterfs-libs-3.7.5-5.el7rhgs.x86_64
glusterfs-3.7.5-5.el7rhgs.x86_64
glusterfs-fuse-3.7.5-5.el7rhgs.x86_64
glusterfs-server-3.7.5-5.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-5.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. FUSE-mount the volume.
3. Set a quota on the volume.
4. Create files on the mount so that the disk quota is exceeded.
5. Attach a 2x2 distributed-replicate hot tier.
6. Modify the quota limit.
7. Enable CTR and set the tier-promote and tier-demote frequencies.
8. Create some files from the mount point.
9. Wait for demotion to happen.
10. Read one of the files from the mount point.
11. Wait for demotion to happen after the specified time.
(A rough CLI sketch of these steps is included after the volume info below.)

=========================== log for tier file =============================
r-dht: ERROR in current lookup
[2015-11-03 22:15:45.119783] E [MSGID: 109037] [tier.c:1488:tier_start] 0-testvol-tier-dht: Demotion failed
The message "E [MSGID: 109037] [tier.c:463:tier_migrate_using_query_file] 0-testvol-tier-dht: ERROR in current lookup " repeated 2 times between [2015-11-03 22:15:45.119374] and [2015-11-03 22:17:15.268892]
The message "E [MSGID: 109037] [tier.c:1488:tier_start] 0-testvol-tier-dht: Demotion failed" repeated 2 times between [2015-11-03 22:15:45.119783] and [2015-11-03 22:17:15.269204]

Actual results:
Demotion does not happen.

Expected results:
Demotion of the files should happen. (After restarting the volume, demotion started happening.)

Additional info:
[root@localhost ec01]# gluster v info

Volume Name: testvol
Type: Tier
Volume ID: fbee6a2e-39ef-4388-8239-8a148dafdba9
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick2/ec04
Brick2: 10.70.47.2:/rhs/brick2/ec03
Brick3: 10.70.47.145:/rhs/brick2/ec02
Brick4: 10.70.47.143:/rhs/brick2/ec01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick1/b01
Brick6: 10.70.47.145:/rhs/brick1/b02
Brick7: 10.70.47.2:/rhs/brick1/b03
Brick8: 10.70.47.3:/rhs/brick1/b04
Options Reconfigured:
features.barrier: disable
cluster.tier-promote-frequency: 45
cluster.tier-demote-frequency: 45
cluster.write-freq-threshold: 0
cluster.read-freq-threshold: 0
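Rough CLI sketch of the reproduce steps above. This is a minimal sketch assuming the glusterfs-3.7 command set; the exact attach-tier and tier status syntax may differ between builds, and the host names, brick paths, file names and quota limits below are placeholders, not the values from this setup.

# Steps 1-2: create and FUSE-mount a 2x2 distributed-replicate volume
gluster volume create testvol replica 2 \
    host1:/rhs/brick1/b01 host2:/rhs/brick1/b02 \
    host3:/rhs/brick1/b03 host4:/rhs/brick1/b04
gluster volume start testvol
mount -t glusterfs host1:/testvol /mnt/testvol

# Steps 3-4: enable quota, set a limit, then write past it from the mount
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 100MB
dd if=/dev/urandom of=/mnt/testvol/file1 bs=1M count=200

# Step 5: attach a 2x2 distributed-replicate hot tier
gluster volume attach-tier testvol replica 2 \
    host1:/rhs/brick2/ec01 host2:/rhs/brick2/ec02 \
    host3:/rhs/brick2/ec03 host4:/rhs/brick2/ec04

# Step 6: modify (raise) the quota limit
gluster volume quota testvol limit-usage / 1GB

# Step 7: enable CTR and set promote/demote frequencies
# (matching the values shown under "Options Reconfigured")
gluster volume set testvol features.ctr-enabled on
gluster volume set testvol cluster.tier-promote-frequency 45
gluster volume set testvol cluster.tier-demote-frequency 45

# Steps 8-11: create some files on the mount, read one back, then wait
# past the demote frequency and check whether demotions are counted,
# e.g. with (syntax varies across 3.7 builds):
gluster volume tier testvol status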
Hi, we are not able to reproduce the problem on the current build. Could you try to reproduce it on the current build? -- Regards, Manikandan Selvaganesh.
Hi, if the bug is not reproducible, we should move it to "works for me". Please contact the reporter and find out the exact scenario and setup used when raising this bug. If you are still not able to reproduce, attach the pertinent logs and then move it to "works for me". Only bugs which have a code fix through a patch for the same problem should be moved to ON_QA.
Not able to reproduce this bug on the latest build; demotions are happening on files. Will open a new bug if the issue is encountered again. Hence closing this bug for now.
This has been closed by QE as not reproducible, but this can be a "potential risk". Also, if it was moved to POST, why was the fix not taken in?