Bug 1581057 - writes succeed when only good brick is down in 1x3 volume
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate (Show other bugs)
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Ravishankar N
QA Contact: Vijay Avuthu
Depends On:
Blocks: 1503137
Reported: 2018-05-22 01:38 EDT by Ravishankar N
Modified: 2018-09-16 08:05 EDT
CC List: 4 users

See Also:
Fixed In Version: glusterfs-3.12.2-12
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1581548
Environment:
Last Closed: 2018-09-04 02:48:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 None None None 2018-09-04 02:49 EDT

Description Ravishankar N 2018-05-22 01:38:44 EDT
Description of problem:
writes succeed when only good brick is down in 1x3 volume

Version-Release number of selected component (if applicable):
rhgs-3.4.0

How reproducible:
Always

Steps to Reproduce:
* Create a file in a 1x3 replicate volume mounted using FUSE.
* Disable shd and client-side data-self-heal.
* Open an fd on the file for writing.
* Kill B1 (brick1) and write to the file.
* Bring it back up using volume start force, then bring B2 down.
* Write to the file.
* Bring B2 back up, and kill B3.
* The next write should fail (since the only good brick, B3, is down), but it succeeds.
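The steps above can be sketched as a gluster CLI session. All concrete names below are invented for illustration (volume `repl3`, hosts `server1..server3`, brick path `/bricks/repl3`, mount point `/mnt/repl3`); the brick PIDs would come from `gluster volume status`:

```shell
# create and start a 1x3 replicate volume (names are hypothetical)
gluster volume create repl3 replica 3 \
    server1:/bricks/repl3 server2:/bricks/repl3 server3:/bricks/repl3
gluster volume start repl3

# disable the self-heal daemon and client-side data self-heal
gluster volume set repl3 cluster.self-heal-daemon off
gluster volume set repl3 cluster.data-self-heal off

# fuse-mount the volume and create a file
mount -t glusterfs server1:/repl3 /mnt/repl3
echo data > /mnt/repl3/f

# kill B1 (its PID is shown by 'gluster volume status'), then write
kill "$B1_PID"
echo after_b1_down >> /mnt/repl3/f

# bring B1 back (still unhealed, since heals are off), take B2 down, write
gluster volume start repl3 force
kill "$B2_PID"
echo after_b2_down >> /mnt/repl3/f

# bring B2 back, kill B3: B3 was the only remaining good copy,
# so this write must fail with EIO on a fixed build
gluster volume start repl3 force
kill "$B3_PID"
echo after_b3_down >> /mnt/repl3/f
```

On the buggy build the last append succeeds silently; after the fix it fails with `Input/output error`, as shown in the verification output below in comment 9.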

Actual results:
Writes succeed.

Expected results:
Writes must fail with EIO.


Additional info:
https://review.gluster.org/#/c/20036 is the fix for the issue.
Comment 2 Ravishankar N 2018-05-22 01:46:59 EDT
The upstream patch link is in the bug description.
Comment 9 Vijay Avuthu 2018-07-12 06:06:10 EDT
Update:
==========

verified with build: glusterfs-3.12.2-13.el7rhgs.x86_64


Scenario:

1) Create a 1x3 replicate volume and start it.
2) Disable shd and client-side healing.
3) Create a file from the mount point.
4) Kill B1 (brick1) and append data to the same file.
5) Bring it back up using volume start force, then bring B2 down.
6) Append data to the same file.
7) Bring B2 back up, and kill B3.
8) Try to append data to the same file; it should fail (since the only good brick, B3, is down).

Step 8 output:

[root@dhcp35-125 13]# echo "test_after_b3_down" >>f
-bash: echo: write error: Input/output error
[root@dhcp35-125 13]# 


Changing status to Verified.
Comment 10 errata-xmlrpc 2018-09-04 02:48:11 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
