Description of problem:
Have a tiered volume with both cold and hot tiers as dist-rep, with bitrot enabled. In the cold tier, when one subvolume has a good copy of a file and the other has a bad one, doing I/O on the mount point does not promote the file to the hot tier. Hot-tier space is also wasted when files that are marked bad stay in the hot tier until they are recovered manually.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-9.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume with both hot and cold tiers as dist-rep (2*2) volumes.
2. Mount the volume using FUSE.
3. Create some files on the volume.
4. Corrupt the file on one of the subvolumes and do I/O on it from the mount point, since there is a good copy of the file on the other subvolume.

Actual results:
Doing I/O on the good copy of the file does not promote the file.

Expected results:
Since there is a good copy of the file on another subvolume, I/O on the file should promote it.

Additional info:
sos reports can be found in the link below. http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1288490/
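For reference, a minimal command-level sketch of the reproduction steps (hostnames n1/n2, brick paths, and the file name are hypothetical):

# Cold tier: dist-rep 2x2; hostnames and brick paths are hypothetical.
gluster volume create testvol replica 2 n1:/bricks/b1 n2:/bricks/b2 n1:/bricks/b3 n2:/bricks/b4
gluster volume start testvol
gluster volume bitrot testvol enable
# Hot tier: dist-rep 2x2 attached on top of the cold tier.
gluster volume attach-tier testvol replica 2 n1:/bricks/h1 n2:/bricks/h2 n1:/bricks/h3 n2:/bricks/h4
mount -t glusterfs n1:/testvol /mnt/testvol
dd if=/dev/urandom of=/mnt/testvol/file1 bs=1M count=10
# Corrupt one copy directly on the brick backend, wait for the scrubber to
# mark it bad, then do I/O on /mnt/testvol/file1 from the mount point.
echo corrupt >> /bricks/b1/file1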
Once the bad copy is recovered by deleting the file and its gfid hard link from the backend and triggering a self-heal on the volume, doing I/O from the mount point promotes the file.
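For reference, a rough sketch of that recovery procedure (the brick path, file name, and gfid are hypothetical):

# On the brick that holds the bad copy, find the file's gfid:
getfattr -n trusted.gfid -e hex /bricks/b1/file1
# Delete the file and its gfid hard link from the backend:
rm /bricks/b1/file1
rm /bricks/b1/.glusterfs/aa/bb/aabbccdd-eeff-0011-2233-445566778899
# Trigger a self-heal so the good copy is rebuilt on this brick:
gluster volume heal testvol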
Should be fixed with this downstream patch: https://code.engineering.redhat.com/gerrit/#/c/64239/
I have tested this and the bug still exists, so moving it back to ASSIGNED. Below are the steps I followed for verification:

1) Created a replicate volume.
2) Enabled bitrot on the volume.
3) Mounted it using FUSE and created some files.
4) Edited a file directly on one of the subvolumes of the replica pair so that it gets marked as corrupted.
5) Ran the attach-tier command.
6) Started doing I/O from the mount point on the file which was corrupted. Since there is a good copy of the file on the other subvolume of the replica pair, the I/O should have promoted the file, but it did not.

gluster volume info:
=====================
[root@rhs-client33 h2]# gluster vol info vol_rep

Volume Name: vol_rep
Type: Tier
Volume ID: 938581fa-3480-4fd2-9073-57eaecd78e50
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: rhs-client33.lab.eng.blr.redhat.com:/bricks/brick4/h2
Brick2: rhs-client24.lab.eng.blr.redhat.com:/bricks/brick4/h1
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: rhs-client24.lab.eng.blr.redhat.com:/bricks/brick3/b1
Brick4: rhs-client33.lab.eng.blr.redhat.com:/bricks/brick3/b2
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on
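For reference, one way to confirm that the copy was actually marked bad before step 6 (the brick path is from the volume info above; the file name is hypothetical and the xattr check is a sketch):

# On the backend of the brick holding the corrupted copy:
getfattr -d -m . -e hex /bricks/brick3/b1/file1
# A trusted.bit-rot.bad-file xattr on this copy indicates the scrubber has
# marked it bad (the scrubber runs hourly per features.scrub-freq above).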
sos reports can be found in the link below: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1288490/
Sure, will look into it.
RCA:

A tier process runs on each node of a replica set, so when a file is ready to migrate, we use a virtual getxattr called "node-uuid" to avoid migrating both copies of the replica set at the same time. The tier process requests the node uuid from its child subvolume. If the getxattr returns the node uuid of the running process, it migrates the file; otherwise it simply skips it, assuming the file will be migrated from the other node of the replica set.

Now, when a file is marked bad on one node, say N1, and the good copy is on N2, all reads/writes go to N2. Once the file becomes hot, N2 picks it for promotion. Before migrating, the tier process requests the node-uuid; AFR always winds that getxattr to its first up child and unwinds with that child's answer if it succeeds. Say the first up child here is N1's brick: the request returns N1's uuid, so the tier process running on N2 skips the file, assuming N1 will migrate it.

But on N1, since no ops are wound to that brick (it holds the bad copy), the file is never listed for promotion/demotion there, so N1 does not migrate it either. The file therefore never gets migrated.
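The ownership check described above can be mimicked from a client with the node-uuid virtual xattr (a conceptual sketch; the mount path and file name are hypothetical):

# UUID of the local node (as each tier daemon would see itself):
LOCAL_UUID=$(gluster system:: uuid get | awk '{print $2}')
# node-uuid virtual getxattr for the file; AFR winds this to its first up child:
FILE_UUID=$(getfattr -n trusted.glusterfs.node-uuid --only-values /mnt/vol_rep/file1)
if [ "$FILE_UUID" = "$LOCAL_UUID" ]; then
    echo "migrate: this node owns the file"
else
    echo "skip: assume the node $FILE_UUID migrates it"
fi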
Moving this to bitrot.
(In reply to Mohammed Rafi KC from comment #12)
> But on N1, since no ops are wound to that brick (it holds the bad copy),
> the file is never listed for promotion/demotion there, so N1 does not
> migrate it either. The file therefore never gets migrated.

Returning EIO for the node-uuid getxattr() should cause AFR to query the other replicas. If it does not, then that looks like the fix for this. But OTOH, geo-replication might currently have a similar side effect: the changelog on one of the replicas would not record the data operation, and if that node happens to be the "active" node (the one that synchronizes data), then such objects would be skipped for replication to the slave.
Upstream Patch: http://review.gluster.org/13116
Downstream Patch: https://code.engineering.redhat.com/gerrit/#/c/64733/
Verified, and it works fine with build glusterfs-3.7.5-15.el7rhgs.x86_64. When a file is marked bad on one of the subvolumes of a replica pair, doing I/O from the mount point promotes the good copy to the hot tier when bitrot is enabled on the volume.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html