Bug 1643195 - Glusterfs: Add fencing support to facilitate ALUA feature in Gluster Block for HA support
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: ocs-3.11
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Susant Kumar Palai
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On:
Blocks: 1643231
 
Reported: 2018-10-25 17:40 UTC by Prasanna Kumar Kalever
Modified: 2020-02-28 10:22 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-03 12:02:44 UTC
Embargoed:



Description Prasanna Kumar Kalever 2018-10-25 17:40:40 UTC
Description of problem:

* Right now, one client has no way to revoke a lock on a file that was acquired by another client.
* We need a new API, something like glfs_lock_revoke() or glfs_lock_break(), to revoke/break a lock acquired by a different client (a rough sketch follows after this description).


See https://github.com/gluster/gluster-block/issues/53 for discussion around it.
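
For illustration, a rough sketch of how such a call might be used from a gluster-block node to fence a failed peer and then take the lock itself. glfs_lock_break() below is the API proposed in this bug and does not exist in gfapi today; the sketch only models it on the existing glfs_posix_lock().

/* Sketch only: glfs_lock_break() is the proposed, not-yet-existing call.
 * Everything else (glfs_open, glfs_posix_lock, glfs_close) is the existing
 * libgfapi interface. */

#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <string.h>

/* Proposed API: break a lock held by another client on the byte range
 * described by 'flock'. Returns 0 on success, -1 on error. */
int glfs_lock_break(glfs_fd_t *fd, struct flock *flock);

static int
fence_and_take_lock(glfs_t *fs, const char *path)
{
        glfs_fd_t   *fd = glfs_open(fs, path, O_RDWR);
        struct flock lk;

        if (!fd)
                return -1;

        memset(&lk, 0, sizeof(lk));
        lk.l_type   = F_WRLCK;
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;        /* whole file */

        /* Step 1 (proposed API): break the lock held by the failed client. */
        if (glfs_lock_break(fd, &lk) < 0) {
                glfs_close(fd);
                return -1;
        }

        /* Step 2 (existing API): acquire the lock for this client. */
        if (glfs_posix_lock(fd, F_SETLK, &lk) < 0) {
                glfs_close(fd);
                return -1;
        }

        glfs_close(fd);
        return 0;
}

Note that this needs two round-trips (break, then lock); the reclaim approach discussed in comment 5 below collapses the two into a single request.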

Comment 5 Soumya Koduri 2018-10-26 10:51:06 UTC
(In reply to Prasanna Kumar Kalever from comment #0)

My two cents -

In general, the word "revocation" implies something the server does (lock/state revocation, etc.) when a client misbehaves or does not respond, not something done by another client.
Regarding the functionality described here as missing: we have had discussions about this in the past for multi-protocol and HA support, and we used to refer to it as reclaim_lock(..).

Design discussions can be found below -
https://review.gluster.org/#/c/glusterfs-specs/+/15053/

What we were trying to achieve is that when nodeA (gfapi clientA) goes down and fails over to nodeB (gfapi clientB), nodeB sends a reclaim_lock request that is the same as the original posix/mandatory lock request, but with a flag marking it as a reclaim request.

On the server side, the glusterfs locks xlator does conflict checks based on client-id + lk-owner. On receiving a reclaim_lk request, it relaxes that check: it matches on the lk-owner only and updates the client-id to nodeB's client-id.

This way, nodeB can reclaim the original lock with a single request and avoid multiple round-trips (first revoke, then re-send the lock request). Thoughts?
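
A rough client-side sketch of that flow, from nodeB's point of view. The GLFS_LK_RECLAIM flag and the glfs_posix_lock_reclaim() wrapper are hypothetical (no such public API exists); glfs_fd_set_lkowner() is assumed to be available in the installed gfapi, and the sketch assumes nodeB knows the lk-owner nodeA used.

/* Hypothetical pieces: GLFS_LK_RECLAIM and glfs_posix_lock_reclaim() only
 * illustrate "same lock request, plus a flag telling the locks xlator to
 * relax the client-id check". glfs_fd_set_lkowner() is assumed available. */

#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <string.h>

#define GLFS_LK_RECLAIM 0x1     /* hypothetical flag */

/* Hypothetical: identical to glfs_posix_lock(), plus reclaim flags. */
int glfs_posix_lock_reclaim(glfs_fd_t *fd, int cmd, struct flock *flock,
                            int flags);

static int
reclaim_lock_on_failover(glfs_t *fs, const char *path,
                         void *lkowner, int lkowner_len)
{
        glfs_fd_t   *fd = glfs_open(fs, path, O_RDWR);
        struct flock lk;

        if (!fd)
                return -1;

        /* Use the same lk-owner nodeA used, so the server only has to
         * re-bind the existing lock to this client. */
        if (glfs_fd_set_lkowner(fd, lkowner, lkowner_len) < 0) {
                glfs_close(fd);
                return -1;
        }

        memset(&lk, 0, sizeof(lk));
        lk.l_type   = F_WRLCK;
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;

        /* One round-trip: the server treats this as a reclaim, matches on
         * lk-owner only, and updates the client-id to nodeB's client-id. */
        if (glfs_posix_lock_reclaim(fd, F_SETLK, &lk, GLFS_LK_RECLAIM) < 0) {
                glfs_close(fd);
                return -1;
        }

        glfs_close(fd);
        return 0;
}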

Comment 6 Susant Kumar Palai 2018-10-26 11:09:34 UTC
(In reply to Soumya Koduri from comment #5)

Soumya,
 I have posted a patch here: https://review.gluster.org/#/c/glusterfs/+/21457/

The logic is similar, i.e. get the lk-owner and client info and do the unlock.
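
A schematic of that idea, not the actual code from the patch above; the types and helper below are illustrative only. It just shows matching granted locks by lk-owner (and optionally by client) and releasing them.

/* Schematic only -- the real locks xlator uses its own structures. The
 * idea: walk the granted locks, match on lk-owner (and optionally the
 * recorded client), and release every match. */

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct demo_lkowner {
        char data[1024];
        int  len;
};

struct demo_lock {
        struct demo_lkowner owner;      /* lk-owner that took the lock */
        char               *client_uid; /* client that took the lock   */
        struct demo_lock   *next;
};

static bool
lkowner_equal(const struct demo_lkowner *a, const struct demo_lkowner *b)
{
        return a->len == b->len && memcmp(a->data, b->data, a->len) == 0;
}

/* Release every granted lock whose lk-owner matches 'owner'; if
 * 'client_uid' is non-NULL, also require the recorded client to match.
 * Returns the number of locks released. */
static int
release_matching_locks(struct demo_lock **head,
                       const struct demo_lkowner *owner,
                       const char *client_uid)
{
        struct demo_lock **pp = head;
        int released = 0;

        while (*pp) {
                struct demo_lock *l = *pp;
                bool match = lkowner_equal(&l->owner, owner) &&
                             (!client_uid ||
                              strcmp(l->client_uid, client_uid) == 0);
                if (match) {
                        *pp = l->next;  /* unlink == "do the unlock" here */
                        free(l);        /* assumes heap-allocated entries */
                        released++;
                } else {
                        pp = &l->next;
                }
        }
        return released;
}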

Comment 8 Sahina Bose 2020-01-03 12:02:44 UTC
Since this is no longer a priority, closing this bug.

