Bug 1194546 - Write behind returns success for a write irrespective of a conflicting lock held by another application
Summary: Write behind returns success for a write irrespective of a conflicting lock held by another application
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: write-behind
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-20 06:48 UTC by Anoop C S
Modified: 2020-03-17 03:30 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-17 03:30:50 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Contains the required patch and test program (2.61 KB, application/zip)
2015-02-20 06:48 UTC, Anoop C S


Links
Gluster.org Gerrit 23224: performance/write-behind: lk and write calls should be ordered (Status: Abandoned, Last Updated: 2019-11-12 19:30:48 UTC)

Description Anoop C S 2015-02-20 06:48:42 UTC
Created attachment 993827 [details]
Contains the required patch and test program

Description of problem:
When mandatory locking is enabled for a volume and a write is performed on a file in that volume, write-behind immediately returns success for the write, irrespective of a conflicting lock held by another application.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
Apply the attached patch, install, and then perform the following steps.
1. Create and start a basic distributed volume with mandatory-locks enabled.
   Note:- volume set option for enabling mandatory-locks is as follows
          gluster volume set <VOLNAME> mandatory-locks on 
2. Fuse mount the volume at two different mount-points.
3. Create an empty file from one of the mount-points.
4. Run app1.c [see attachment] from one mount-point, passing the path to the empty file as the command-line argument, and wait until it acquires a shared lock on the file.
5. Run app2.c [see attachment] from the other mount-point, passing the path to the empty file as the command-line argument.

Actual results:
Write returned success.

Expected results:
The write should block [blocking mode] or fail [non-blocking mode].

Additional info:
Compile the attached programs and run them as per the steps mentioned above.
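
The attached programs themselves are in attachment 993827 and are not reproduced here. The following is a hypothetical sketch of what app1.c and app2.c presumably do, folded into a single program with "lock" and "write" sub-commands for brevity: the lock mode plays app1's role (take and hold a shared fcntl lock on the file), and the write mode plays app2's role (attempt a write that, with mandatory-locks enabled on the volume, should block or fail instead of returning success). The real attached programs may differ.

/*
 * Hypothetical reconstruction of the attached test programs (the real
 * app1.c and app2.c are in attachment 993827 and may differ). Both roles
 * are folded into one binary here for brevity.
 *
 *   lock  mode (app1's role): acquire and hold a shared lock on the file.
 *   write mode (app2's role): attempt a write on the same file; with
 *       mandatory-locks enabled, it should block (blocking mode) or fail
 *       (non-blocking mode) instead of returning success.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* app1's role: take a shared (read) lock over the whole file and hold it. */
static int hold_shared_lock(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_RDLCK;   /* shared lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;          /* 0 = lock the whole file */

    if (fcntl(fd, F_SETLKW, &fl) < 0) {
        perror("fcntl(F_SETLKW)");
        close(fd);
        return -1;
    }

    printf("shared lock acquired; holding it while the writer runs\n");
    sleep(60);             /* keep the lock held */
    close(fd);
    return 0;
}

/* app2's role: try to write while the conflicting shared lock is held. */
static int try_write(const char *path)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    ssize_t n = write(fd, "data", 4);
    if (n < 0)
        perror("write");   /* expected: block or fail, not succeed */
    else
        printf("write returned success (%zd bytes) -- the reported bug\n", n);

    close(fd);
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s lock|write <file>\n", argv[0]);
        return EXIT_FAILURE;
    }
    if (strcmp(argv[1], "lock") == 0)
        return hold_shared_lock(argv[2]) ? EXIT_FAILURE : EXIT_SUCCESS;
    return try_write(argv[2]) ? EXIT_FAILURE : EXIT_SUCCESS;
}

Run the lock mode against the empty file from one mount-point and, while it is sleeping, run the write mode from the other mount-point; with the bug present, the write mode reports success immediately.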

Comment 1 Anand Avati 2015-04-23 11:59:03 UTC
REVIEW: http://review.gluster.org/10350 (performance/write-behind: lk and write calls should be ordered) posted (#1) for review on master by Raghavendra Talur (rtalur)

Comment 2 Anand Avati 2015-04-23 12:27:12 UTC
REVIEW: http://review.gluster.org/10350 (performance/write-behind: lk and write calls should be ordered) posted (#2) for review on master by Raghavendra Talur (rtalur)

Comment 3 Anand Avati 2015-04-28 12:39:27 UTC
REVIEW: http://review.gluster.org/10350 (performance/write-behind: lk and write calls should be ordered) posted (#3) for review on master by Raghavendra Talur (rtalur)

Comment 4 Raghavendra G 2015-11-30 04:17:13 UTC
Just an observation: with [1], the write can still be successful; only fsync or close makes sure that writes are synced to disk. So, if necessary, please make the relevant changes to your tests.

[1] review.gluster.org/12594
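
A minimal sketch of the suggested test adjustment, assuming a writer along the lines of the one sketched in the bug description (the write_and_flush helper below is hypothetical, not part of the attached program): since write-behind may acknowledge the write from its client-side cache, the test should also check fsync() and close(), which is where the conflict with the mandatory lock would be reported.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: write and then flush, checking every step, since
 * with write-behind the error may only be reported at fsync() or close()
 * time rather than at write() time. */
static int write_and_flush(const char *path)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    if (write(fd, "data", 4) < 0) {
        perror("write");      /* may still succeed with write-behind */
        close(fd);
        return -1;
    }

    if (fsync(fd) < 0) {      /* the lock conflict is expected here ... */
        perror("fsync");
        close(fd);
        return -1;
    }

    if (close(fd) < 0) {      /* ... or, failing that, at close() */
        perror("close");
        return -1;
    }

    printf("write, fsync and close all succeeded\n");
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    return write_and_flush(argv[1]) ? 1 : 0;
}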

Comment 5 Vijay Bellur 2016-07-05 09:51:14 UTC
REVIEW: http://review.gluster.org/10350 (performance/write-behind: lk and write calls should be ordered) posted (#4) for review on master by Raghavendra Talur (rtalur)

Comment 6 Yaniv Kaul 2019-04-22 14:06:15 UTC
None of the above patches were merged. What's the status?

Comment 7 Raghavendra Talur 2019-04-22 21:51:49 UTC
The patch posted for review (http://review.gluster.org/10350) handles the case where:
If both processes A and B are on the same Gluster client machine, then it ensures that write-behind orders the write and lock requests from both processes correctly.


On review, Raghavendra G commented with the following example:
A write w1 is done and is cached in write-behind.
A mandatory lock that conflicts with w1 is held by the same thread (is that even a valid case? If not, we probably don't need this patch at all). This lock request goes through write-behind, and the locks xlator grants it.
Now write-behind flushes w1, and posix-locks fails w1 because a conflicting mandatory lock is held.
But now that I think of it, it seems like an invalid (exotic at best) use-case.


Anoop/Raghavendra G,

From mandatory locking and write-behind perspective, is it still an exotic case? If so, we can close this bug.

Comment 8 Raghavendra G 2019-04-23 01:14:59 UTC
(In reply to Raghavendra Talur from comment #7)
> The patch posted for review (http://review.gluster.org/10350) handles the
> case where:
> If both processes A and B are on the same Gluster client machine, then it
> ensures that write-behind orders the write and lock requests from both
> processes correctly.
> 
> 
> On review, Raghavendra G commented with the following example:
> A write w1 is done and is cached in write-behind.
> A mandatory lock that conflicts with w1 is held by the same thread (is that
> even a valid case? If not, we probably don't need this patch at all). This
> lock request goes through write-behind, and the locks xlator grants it.
> Now write-behind flushes w1, and posix-locks fails w1 because a conflicting
> mandatory lock is held.
> But now that I think of it, it seems like an invalid (exotic at best)
> use-case.

What I missed above is the case where the write and lock requests come from two different processes on the same mount point (which is what the commit message describes). For that case, this patch is still required.

> 
> 
> Anoop/Raghavendra G,
> 
> From mandatory locking and write-behind perspective, is it still an exotic
> case? If so, we can close this bug.

No. I was wrong. This patch is required for the multiple-process scenario.

Comment 9 Raghavendra G 2019-04-23 03:31:12 UTC
I've restored the patch, but it ran into a merge conflict. Can you refresh it?

Comment 10 Worker Ant 2019-08-13 19:14:55 UTC
REVIEW: https://review.gluster.org/23224 (performance/write-behind: lk and write calls should be ordered) posted (#1) for review on master by Rishubh Jain

Comment 12 Xavi Hernandez 2019-11-19 14:22:07 UTC
Rishubh, any progress on this?

Comment 13 Xavi Hernandez 2019-11-19 14:23:23 UTC
Moving the bug back to assigned because the patch is abandoned.

Comment 14 Worker Ant 2020-03-17 03:30:50 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/1113 and will be tracked there from now on. Visit the GitHub issue URL for further details.

