Bug 1224709 - Read operation on a file which is in split-brain condition is successful
Summary: Read operation on a file which is in split-brain condition is successful
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1220347 1229226
Blocks: 1223758
 
Reported: 2015-05-25 12:14 UTC by Ravishankar N
Modified: 2018-11-20 05:41 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1220347
Environment:
Last Closed: 2018-11-20 05:41:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Ravishankar N 2015-05-25 12:14:17 UTC
+++ This bug was initially created as a clone of Bug #1220347 +++

Description of problem:
------------------------

`cat' on a file that was in split-brain condition was successful. This should ideally fail with `Input/output error'.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
glusterfs-3.7.0beta1-0.69.git1a32479.el6.x86_64

How reproducible:
------------------
Always

Steps to Reproduce:
--------------------

1. Create a distributed-replicate volume and mount it via fuse.
2. Create a file `1' on the mount point -
# touch 1
3. Bring down one brick in the replica pair where `1' resides.
# kill -9 <pid-of-brick-process>
4. Write to the file -
# echo "Hello" > 1
5. Start volume with force option.
6. Bring down the other brick in the replica pair and write to the file again -
# echo "World" > 1
7. `cat' the file -
# cat 1
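
The steps above can be sketched as one shell session. The volume name, mount point, and log-in host are illustrative, and `<pid-of-brick-1>`/`<pid-of-brick-2>` are placeholders that must be taken from `gluster volume status`:

```shell
# Assumptions: a started distributed-replicate volume named 2-test exists
# and this host can run the gluster CLI. All names below are illustrative.
VOL=2-test
MNT=/mnt/glusterfs

mount -t glusterfs localhost:/$VOL $MNT
touch $MNT/1

gluster volume status $VOL        # note the PID of one brick of the replica pair holding `1'
kill -9 <pid-of-brick-1>          # placeholder: substitute the real brick PID

echo "Hello" > $MNT/1             # write lands only on the surviving brick

gluster volume start $VOL force   # bring the killed brick back up

kill -9 <pid-of-brick-2>          # kill the *other* brick of the same pair
echo "World" > $MNT/1             # second write lands only on the first brick

cat $MNT/1                        # file is now in split-brain:
                                  # expected EIO, but "World" is returned
```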

Actual results:
----------------

# cat 1
World

Expected results:
------------------

`cat' should fail with `Input/output error'.

Additional info:
-----------------

The volume configuration -

# gluster volume info 2-test
 
Volume Name: 2-test
Type: Distributed-Replicate
Volume ID: 0e312bd3-0473-4fdc-ba2f-7df53b9e9683
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: dhcp37-126.lab.eng.blr.redhat.com:/rhs/brick4/b1
Brick2: dhcp37-123.lab.eng.blr.redhat.com:/rhs/brick4/b1
Brick3: dhcp37-98.lab.eng.blr.redhat.com:/rhs/brick4/b1
Brick4: dhcp37-54.lab.eng.blr.redhat.com:/rhs/brick4/b1
Brick5: dhcp37-210.lab.eng.blr.redhat.com:/rhs/brick4/b1
Brick6: dhcp37-59.lab.eng.blr.redhat.com:/rhs/brick4/b1
Brick7: dhcp37-126.lab.eng.blr.redhat.com:/rhs/brick5/b1
Brick8: dhcp37-123.lab.eng.blr.redhat.com:/rhs/brick5/b1
Brick9: dhcp37-98.lab.eng.blr.redhat.com:/rhs/brick5/b1
Brick10: dhcp37-54.lab.eng.blr.redhat.com:/rhs/brick5/b1
Brick11: dhcp37-210.lab.eng.blr.redhat.com:/rhs/brick5/b1
Brick12: dhcp37-59.lab.eng.blr.redhat.com:/rhs/brick5/b1
Options Reconfigured:
performance.readdir-ahead: on
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
features.uss: enable
features.quota: on
performance.write-behind: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.quick-read: off
performance.open-behind: off
features.bitrot: on
features.scrub: Active
diagnostics.client-log-level: DEBUG

--- Additional comment from Ravishankar N on 2015-05-11 08:44:51 EDT ---

Observations from debugging the setup.

When debugging the mount process with gdb, it was observed that in afr_lookup_done we call afr_inode_read_subvol_reset(); consequently, when afr_read_txn() invokes afr_read_txn_refresh_done(), we bail out because there are no readable subvols, and the client gets EIO.

When no gdb was attached, the client again began reading stale data. On further examination, it was observed that fuse sends the following FOPS when 'cat' was performed on the mount:

1)fuse_fop_resume-->fuse_lookup_resume
2)fuse_fop_resume-->fuse_open_resume
3)fuse_fop_resume-->fuse_getattr_resume--->afr_fstat-->afr_read_txn-->bail out with EIO.
4)fuse_fop_resume-->fuse_flush_resume


However, when 'cat' was done in rapid succession, (3) was not called, i.e. only fuse_lookup_resume, fuse_open_resume and fuse_flush_resume were invoked. Since fuse did not send the getattr, the client did not get the EIO and served data from the kernel cache. The data returned was always the one written to the latest brick, "World" in this case.
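
Since diagnostics.client-log-level is already DEBUG on this volume, one way to confirm which FOPs actually reach the fuse bridge is to grep the client log while running `cat' repeatedly. The log path is illustrative and the exact message format depends on the glusterfs build:

```shell
LOG=/var/log/glusterfs/mnt-glusterfs.log    # illustrative client log path

cat /mnt/glusterfs/1; cat /mnt/glusterfs/1  # 'cat' in rapid succession

grep -E 'fuse_(lookup|open|getattr|flush)' "$LOG" | tail -20
# If no getattr entry appears between the open and flush entries, the
# kernel served the read from its page cache and AFR never got a chance
# to return EIO for the split-brained file.
```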

I don't think we should hit the issue if we 1) perform a drop_caches on the existing mount, 2) do a remount, or 3) mount with the options attribute-timeout and entry-timeout set to zero to begin with.
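
The three workarounds can be sketched as follows (mount point and volume name are illustrative; drop_caches needs root):

```shell
# 1) Drop the kernel page/dentry/inode caches on the existing mount
echo 3 > /proc/sys/vm/drop_caches

# 2) Remount the volume
umount /mnt/glusterfs
mount -t glusterfs localhost:/2-test /mnt/glusterfs

# 3) Mount with attribute/entry caching disabled to begin with,
#    so fuse always sends getattr and AFR can return EIO
mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 \
      localhost:/2-test /mnt/glusterfs
```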

--- Additional comment from Shruti Sampat on 2015-05-11 11:06:39 EDT ---


> I don't think we should hit the issue if we perform a 1) drop_caches on the
> existing mount, or 2) do a remount or 3)mount with the options 
> attribute-timeout and entry-timeout set to zero to begin with.

Tried each of the above 3 and did not hit the issue.

--- Additional comment from Raghavendra Talur on 2015-05-19 09:45:32 EDT ---

Can this be closed now that it has been proven to be the kernel cache in action? Or should this be taken up as a feature?

Ravi, I guess you can decide.

--- Additional comment from Ravishankar N on 2015-05-19 10:09:32 EDT ---

Raghavendra G has suggested a fix where we set the attribute-timeout to zero for files that are in split-brain, forcing fuse to send a fuse_getattr_resume(). I'll send a patch for it; let us see if it is acceptable. Keeping the bug open until then.

Comment 1 Ravishankar N 2015-05-25 12:18:58 UTC
http://review.gluster.org/#/c/10905/

Comment 2 Anand Avati 2015-05-29 11:27:50 UTC
REVIEW: http://review.gluster.org/10905 (afr/fuse: set attribute-timeout to 0 for files in split-brain) posted (#2) for review on master by Ravishankar N (ravishankar@redhat.com)

Comment 3 Anand Avati 2015-05-30 10:42:47 UTC
REVIEW: http://review.gluster.org/10905 (afr/fuse: set attribute-timeout to 0 for files in split-brain) posted (#3) for review on master by Niels de Vos (ndevos@redhat.com)

Comment 4 Mike McCune 2016-03-28 23:23:30 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune@redhat.com with any questions

Comment 5 Ravishankar N 2018-11-20 05:41:37 UTC
I'm not able to recreate this on the latest master running the Fedora kernel 4.16.11-100.fc26.x86_64. Closing it for now.

