Description of problem:
Writes succeed when the only good brick is down in a 1x3 volume.

Version-Release number of selected component (if applicable):
rhgs-3.4.0

How reproducible:
Always

Steps to Reproduce:
1. Create a file in a 1x3 replicate volume mounted using FUSE.
2. Disable shd and client-side data-self-heal.
3. Open an fd on the file for writing.
4. Kill B1 (brick1) and write to the file.
5. Bring B1 back up using 'volume start force', then bring B2 down.
6. Write to the file.
7. Bring B2 back up, and kill B3.
8. The next write should fail (since the only good brick, B3, is down), but it succeeds.

Actual results:
Writes succeed.

Expected results:
Writes must fail with EIO.

Additional info:
https://review.gluster.org/#/c/20036 is the fix for the issue.
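The expected behavior can be sketched as a toy model of AFR's staleness bookkeeping. This is not the actual GlusterFS implementation; the class and method names are illustrative. The invariant it demonstrates is the one the fix enforces: with self-heal disabled, a brick that was down during a write stays stale until healed, and a write must fail with EIO when no up brick holds the latest good copy.

```python
import errno

class ToyReplica3:
    """Illustrative model of a 1x3 replicate volume (not real AFR code)."""

    def __init__(self):
        self.up = {"B1": True, "B2": True, "B3": True}
        # With shd and client-side heal disabled, staleness is never cleared.
        self.stale = {"B1": False, "B2": False, "B3": False}

    def kill(self, brick):
        self.up[brick] = False

    def start_force(self, brick):
        # Brick returns, but without a heal it remains stale.
        self.up[brick] = True

    def write(self):
        good_and_up = [b for b in self.up if self.up[b] and not self.stale[b]]
        if not good_and_up:
            return -errno.EIO  # no readable good copy: the write must fail
        # Bricks that miss this write become stale relative to the good copies.
        for b in self.up:
            if not self.up[b]:
                self.stale[b] = True
        return 0

vol = ToyReplica3()
vol.kill("B1")
assert vol.write() == 0            # B2/B3 still good; B1 now stale
vol.start_force("B1"); vol.kill("B2")
assert vol.write() == 0            # B3 still good; B2 now stale
vol.start_force("B2"); vol.kill("B3")
assert vol.write() == -errno.EIO   # only good brick down: write fails
print("final write failed with EIO")
```

Before the fix, the last write in this sequence returned success instead of -EIO.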
The upstream patch link is in the bug description.
Update:
==========
Verified with build: glusterfs-3.12.2-13.el7rhgs.x86_64

Scenario:
1) Create a 1x3 replicate volume and start it.
2) Disable shd and client-side healing.
3) Create a file from the mount point.
4) Kill B1 (brick1) and append data to the same file.
5) Bring B1 back up using 'volume start force', then bring B2 down.
6) Append data to the same file.
7) Bring B2 back up, and kill B3.
8) Try to append data to the same file; it should fail (since the only good brick, B3, is down).

Output of step 8:

[root@dhcp35-125 13]# echo "test_after_b3_down" >>f
-bash: echo: write error: Input/output error
[root@dhcp35-125 13]#

Changing status to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607