+++ This bug was initially created as a clone of Bug #1340032 +++

Description of problem:
Created a script that writes 50 files of 5MB each and, during the creation, rebooted 2 nodes: the arbiter brick and one data brick, while one brick stayed alive.

Version-Release number of selected component (if applicable):

How reproducible:
1 time

Steps to Reproduce:
1. Create a 1x3 volume named "core".
2. Mount it on the client using FUSE at /mnt/core.
3. Run this script for 1 min:

for (( i=1; i<=50; i++ ))
do
    dd if=/dev/urandom of=corefile$i bs=5M count=5 status=progress
done

4. Reboot the arbiter and one of the data bricks.
5. Files 15 to 26 were only touched; no data was written (0-byte files).
6. Files from the arbiter and data bricks weren't able to heal:

[root@dhcp43-192 core]# gluster volume heal core info
Brick dhcp43-157.lab.eng.blr.redhat.com:/rhs/brick1/core
Status: Connected
Number of entries: 0

Brick dhcp43-192.lab.eng.blr.redhat.com:/rhs/brick1/core
/corefile16
/corefile17
/corefile18
/corefile19
/corefile20
/corefile21
/corefile22
/corefile23
/corefile24
/corefile25
/corefile26
Status: Connected
Number of entries: 11

Brick dhcp43-153.lab.eng.blr.redhat.com:/rhs/brick1/core
/corefile16
/corefile17
/corefile18
/corefile19
/corefile20
/corefile21
/corefile22
/corefile23
/corefile24
/corefile25
/corefile26
Status: Connected
Number of entries: 11

Actual results:
The files weren't healed.

Expected results:
The files should have been healed.
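The "Number of entries" counts in the heal info output above can be cross-checked by counting the listed paths. A minimal sketch (the input here is an abridged sample pasted from the output above, not taken from a live cluster; on a real setup you would pipe `gluster volume heal core info` into the function instead):

```shell
# Count entries pending heal in 'gluster volume heal <vol> info' output:
# each pending file is printed as a path starting with '/'.
count_pending() {
    grep -c '^/'
}

# Abridged sample of the heal info output from the report above.
pending=$(count_pending <<'EOF'
Brick dhcp43-192.lab.eng.blr.redhat.com:/rhs/brick1/core
/corefile16
/corefile17
/corefile18
Status: Connected
Number of entries: 3
EOF
)
echo "entries pending heal: $pending"
```

With the full output from the report, each of the two affected bricks would report 11 entries.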
Additional info:
Logs kept at rhsqe-repo.lab.eng.blr.redhat.com:/var/www/html/sosreports/<bug>

--- Additional comment from Karan Sandha on 2016-05-31 02:18 EDT ---

--- Additional comment from Karan Sandha on 2016-05-31 02:19 EDT ---

--- Additional comment from Karan Sandha on 2016-05-31 02:20 EDT ---

--- Additional comment from Karan Sandha on 2016-05-31 02:22 EDT ---

--- Additional comment from Karan Sandha on 2016-06-07 08:11 EDT ---

--- Additional comment from Karan Sandha on 2016-06-07 08:38:33 EDT ---

Steps To Reproduce:
1) Create a 1x3 arbiter volume.
2) Bricks: B1, B2, B3 (arbiter).
3) Bring down B1.
4) Create 50 files of 500MB each on the FUSE mount from the client.
5) After 30 files are created, bring up B1 and bring down B3.
6) Check `gluster volume heal info` and `ls` the files on the bricks: there will be multiple 0-byte files, and gluster heal info shows multiple files to be healed.

--- Additional comment from Vijay Bellur on 2016-06-20 04:18:01 EDT ---

REVIEW: http://review.gluster.org/14769 (afr: Do not mark arbiter as data source during newentry_mark) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-06-24 07:42:49 EDT ---

REVIEW: http://review.gluster.org/14769 (afr: Do not mark arbiter as data source during newentry_mark) posted (#2) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Ravishankar N on 2016-06-24 07:44:56 EDT ---

Moved BZ state by mistake.
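The 0-byte-file symptom from the reproduction steps above can be checked from the shell. A minimal sketch against a scratch directory (in the reported setup you would point DIR at the FUSE mount, /mnt/core, or a brick path instead):

```shell
# Symptom check: files that were touched but hold no data.
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/corefile1" bs=1024 count=1 2>/dev/null  # file with data
: > "$DIR/corefile2"                                            # 0-byte file, as the bug produces
zero_byte=$(cd "$DIR" && find . -maxdepth 1 -type f -size 0 -name 'corefile*')
echo "0-byte files: $zero_byte"
rm -rf "$DIR"
```

On an affected volume, this kind of listing on the bricks shows the touched-but-empty files that heal info reports as pending.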
http://review.gluster.org/14769 posted upstream for review.
Increasing the priority of this bug as I am hitting this issue in pretty much every brick-down scenario.
Bipin, the idea is not that we always WONTFIX; that decision was made after looking at the activity on the bugzilla, and we had not picked this particular bug up for the previous 2 releases. We will keep it open as an upstream bug and fix it there, and it will reach downstream as a backport once the fix lands in a release.