Bug 1214654 - Self-heal: Migrate lease_locks as part of self-heal process
Summary: Self-heal: Migrate lease_locks as part of self-heal process
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: upcall
Version: mainline
Hardware: All
OS: All
Importance: unspecified medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-23 10:04 UTC by Soumya Koduri
Modified: 2019-05-06 07:01 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2019-05-06 07:01:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Soumya Koduri 2015-04-23 10:04:15 UTC
Description of problem:
At present, as with posix locks, there is no support for migrating lease_locks as part of the self-heal process, which may lead to data corruption.

Unlike in the posix locks case, application (NFS-Ganesha/SMB) clients are not aware of the lease_locks taken by the application server, which makes this issue more serious: data corruption could occur without the clients knowing about it.

Maybe we could use the same approach taken to migrate state during rebalance/tiering (BZ 1214644). But that would resolve the issue only partially: when a brick goes down and comes back up again, we can migrate the lock state from the active brick process. However, if the source brick also goes down before or during the migration, the lock state is lost again.

This bug is to analyze, design and track the changes required to have this support.
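
For illustration only, a rough sketch in C of the partial approach above: replaying the lease state held on the surviving brick onto the brick that has come back up. All type and function names here are hypothetical and do not correspond to actual GlusterFS code; the loop also shows why coverage is only partial, since losing the source brick mid-migration loses the remaining entries.

#include <stddef.h>

/* Hypothetical record of one lease held on the surviving (source) brick. */
struct lease_entry {
        char                client_uid[64];  /* lessee identity */
        char                gfid[37];        /* inode covered by the lease */
        int                 lease_type;      /* e.g. read-only vs. read-write */
        struct lease_entry *next;
};

/*
 * Replay every lease recorded on the source brick onto the brick that has
 * just come back up; grant_on_dst stands in for whatever mechanism actually
 * re-establishes a lease on the destination.  Returns 0 on success.
 */
int
migrate_lease_state(struct lease_entry *src_leases,
                    int (*grant_on_dst)(const struct lease_entry *lease))
{
        struct lease_entry *l;

        for (l = src_leases; l != NULL; l = l->next) {
                /* If the source brick dies before this loop completes, the
                 * remaining entries are gone -- the partial-coverage gap
                 * called out in the description. */
                if (grant_on_dst(l) != 0)
                        return -1;
        }

        return 0;
}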

Comment 1 Yaniv Kaul 2019-04-28 07:39:52 UTC
Status?

Comment 2 Soumya Koduri 2019-05-06 07:01:16 UTC
This is a Day-1 issue (i.e., rebalance and self-heal do not migrate most of the state maintained on the server side) and there are no plans to address it in the near future. Hence closing the bug.

