Description of problem:
While promotion and demotion of files are in progress and I/O is happening on the files, two files with the same name are created for a few files on the mount point.

Version-Release number of selected component (if applicable):
[root@rhs001 b1]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.5-10.el7rhgs.x86_64
glusterfs-fuse-3.7.5-10.el7rhgs.x86_64
glusterfs-cli-3.7.5-10.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-10.el7rhgs.x86_64
glusterfs-libs-3.7.5-10.el7rhgs.x86_64
glusterfs-3.7.5-10.el7rhgs.x86_64
glusterfs-api-3.7.5-10.el7rhgs.x86_64
glusterfs-server-3.7.5-10.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. Fuse-mount the volume.
3. Enable quota on the volume and set a limit-usage.
4. Create files so that the disk quota is exceeded.
5. Attach a 2x2 distributed-replicate hot tier.
6. Set cluster.tier-mode to "test" and set the promote and demote frequencies.
7. Append to all the files from the mount point in a loop 3-4 times and wait for promotions and demotions to happen.

Actual results:
After the write operations, two files with the same name are created.
========================================================
Both the actual file and the T (linkto) file are in the cold-tier replica set.

Expected results:
There should not be two files with the same name.
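The steps above can be sketched as a shell session. This is a hedged illustration, not taken verbatim from the report: the server hostnames, mount point, quota limit, file sizes, and loop counts are assumptions, and the exact attach-tier CLI syntax varies across 3.7.x releases.

```shell
# Illustrative reproduction sketch; hostnames, paths, and sizes are assumed.
gluster volume create tiervol replica 2 \
    server1:/rhs/brick4/b1 server2:/rhs/brick4/b2 \
    server3:/rhs/brick4/b3 server4:/rhs/brick4/b4
gluster volume start tiervol
mount -t glusterfs server1:/tiervol /mnt/tiervol

# Enable quota and set a limit small enough to be exceeded.
gluster volume quota tiervol enable
gluster volume quota tiervol limit-usage / 1GB

# Create files until the quota limit is exceeded.
for i in $(seq 1 20); do
    dd if=/dev/urandom of=/mnt/tiervol/file$i bs=1M count=100
done

# Attach a 2x2 distributed-replicate hot tier (3.7-era syntax).
gluster volume attach-tier tiervol replica 2 \
    server1:/rhs/brick5/b01 server2:/rhs/brick5/b02 \
    server3:/rhs/brick5/b03 server4:/rhs/brick5/b04

# Use test mode with short promote/demote frequencies so
# migrations overlap with the appends below.
gluster volume set tiervol cluster.tier-mode test
gluster volume set tiervol cluster.tier-promote-frequency 45
gluster volume set tiervol cluster.tier-demote-frequency 45

# Append to every file in a loop while migrations happen.
for pass in 1 2 3 4; do
    for f in /mnt/tiervol/file*; do
        echo "append pass $pass" >> "$f"
    done
    sleep 60
done
```

After the loop, listing the mount point shows duplicate names for some files.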
Additional info:
[root@rhs001 b1]# gluster v info
Volume Name: tiervol
Type: Tier
Volume ID: c874e469-0e57-4962-bb62-2a323e8b3308
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick5/b04
Brick2: 10.70.47.2:/rhs/brick5/b03
Brick3: 10.70.47.145:/rhs/brick5/b02
Brick4: 10.70.47.143:/rhs/brick5/b01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick4/b1
Brick6: 10.70.47.145:/rhs/brick4/b2
Brick7: 10.70.47.2:/rhs/brick4/b3
Brick8: 10.70.47.3:/rhs/brick4/b4
Options Reconfigured:
cluster.tier-promote-frequency: 45
cluster.tier-demote-frequency: 45
cluster.tier-mode: test
features.ctr-enabled: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
This can happen when any of the cold-tier subvolumes reaches the min-free-disk limit. Without exceeding the quota or reaching the min-free-disk limit, I could not reproduce the bug.
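On the bricks themselves, the duplicate can be confirmed as a DHT linkto ("T") file: these are zero-length files whose mode has only the sticky bit set and that carry a trusted.glusterfs.dht.linkto xattr naming the subvolume holding the real data. A hedged sketch, assuming a cold-tier brick path like the one in this report:

```shell
# List candidate linkto (T) files on a cold-tier brick:
# mode exactly 1000 (---------T) and zero size.
find /rhs/brick4/b1 -type f -perm 1000 -size 0

# Inspect the linkto xattr on one of them (file name assumed).
getfattr -n trusted.glusterfs.dht.linkto -e text /rhs/brick4/b1/file1
```

If both the data file and such a T file end up visible under the same name on the mount point, that matches the symptom reported here.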
This issue is already fixed by patch http://review.gluster.org/#/c/12948/ . Karthik, can you please verify this?
Targeting this BZ for 3.2.0.
Removing this from the 3.2 tracker as it will be verified against RHGS 3.1.3. If it fails QA, we can reconsider it for 3.2.
Moving this to ON_QA
As tier is not being actively developed, I'm closing this bug. Feel free to reopen it if necessary.
Clearing stale needinfos.