Bug 1277924 - Though files are in split-brain able to perform writes to the file [NEEDINFO]
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Pranith Kumar K
QA Contact: Vijay Avuthu
Keywords: ZStream
Depends On:
Blocks: 1503134 1294051 1315140
Reported: 2015-11-04 06:18 EST by RajeshReddy
Modified: 2018-04-16 14:05 EDT
CC: 9 users

See Also:
Fixed In Version: glusterfs-3.12.2-2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1294051
Last Closed: 2018-04-16 14:05:32 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ravishankar: needinfo? (nbalacha)

Attachments: None
Description RajeshReddy 2015-11-04 06:18:45 EST
Description of problem:
Even though files are in split-brain, writes to them still succeed.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a 1x2 replicate volume, mount it on a client using FUSE, and disable the self-heal daemon
2. From the mount, create a few files and run continuous I/O to them; while the I/O is in progress, run pkill gluster on node1
3. After some time, bring node1 back and run pkill gluster on node2 (while I/O continues to the same files)
4. The files are now in split-brain, and gluster vol heal info shows this
5. From the mount, append to the file using echo "data" >> file (do not give the full file path)
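The steps above can be sketched with the standard gluster CLI; a minimal outline, assuming two nodes named node1 and node2 with brick paths matching the volume info below (adjust hostnames, mount point, and file names to your setup):

```shell
# On one server: create and start a 1x2 replicate volume, disable self-heal
gluster volume create afr1x2 replica 2 \
    node1:/rhs/brick1/afr1x2 node2:/rhs/brick1/afr1x2
gluster volume start afr1x2
gluster volume set afr1x2 cluster.self-heal-daemon off

# On the client: FUSE-mount the volume and start continuous appends
mount -t glusterfs node1:/afr1x2 /mnt/afr1x2
while true; do echo data >> /mnt/afr1x2/file1; done &

# On node1 (and later node2): kill all gluster processes while I/O runs
pkill gluster

# After both nodes have been killed and restarted in turn,
# confirm the split-brain from any server:
gluster volume heal afr1x2 info

# The append in step 5, which should fail but does not
# while performance.write-behind is on:
echo "data" >> /mnt/afr1x2/file1
```

This requires a live two-node cluster, so it is an outline of the procedure rather than a runnable script.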

Actual results:

Expected results:
The I/O should fail. If performance.write-behind is disabled, the write does return an I/O error.

Additional info:
[root@rhs-client18 data]# gluster vol info afr1x2 
Volume Name: afr1x2
Type: Replicate
Volume ID: 8bdcc83a-f7a5-4440-a0be-13f26ab72ae8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick1/afr1x2
Brick2: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick1/afr1x2
Options Reconfigured:
performance.write-behind: on
cluster.data-self-heal: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
features.scrub: Active
features.bitrot: on
cluster.self-heal-daemon: off
performance.readdir-ahead: on
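The split-brain state reported in step 4 can also be listed per file with the heal info subcommand; a minimal sketch, using the volume name from this report:

```shell
# List only the entries that are in split-brain on this volume
gluster volume heal afr1x2 info split-brain
```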
Comment 4 Mike McCune 2016-03-28 19:18:47 EDT
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune@redhat.com with any questions.
Comment 12 Vijay Avuthu 2018-04-04 04:55:00 EDT

Verified with Build: glusterfs-3.12.2-6.el7rhgs.x86_64

1) Create a 1x2 volume and start it
2) Set cluster.self-heal-daemon to off
3) Create files from the mount point
4) Continuously append to the files from different sessions
5) After a few minutes, kill gluster on Node 1
6) After a few minutes, start glusterd on Node 1 and immediately kill gluster on Node 2
7) After a few minutes, start glusterd on Node 2
8) I/O fails and the files are in split-brain
9) Try to append to a file that is in split-brain; it should fail

# echo "LAST APPENDING" >>f1
-bash: echo: write error: No such file or directory

Changing status to Verified
