Bug 1214644
| Summary: | Upcall: Migrate state during rebalance/tiering | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Soumya Koduri <skoduri> |
| Component: | upcall | Assignee: | bugs <bugs> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | ndevos, pgurusid, rtalur, srangana |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-05-06 07:00:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Soumya Koduri
2015-04-23 09:49:31 UTC
This bug is opened to design and track the changes we need to address the above-mentioned issue.

Another approach suggested by Shyam: the solution outline is for the rebalance process to migrate the locks, with some additional coordination with the locks/lease/upcall xlators. The problem, however, is _mapping_ all of the lock information across the two different storage-node brick processes (i.e., the client_t information).

More details in the threads below:
http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/9079
http://www.gluster.org/pipermail/gluster-devel/2014-December/043284.html

We primarily need to check whether we can make the client_t mapping uniform across the bricks, and what issues that would raise.

Status? This is a Day-1 issue (i.e., rebalance and self-heal do not migrate most of the state maintained on the server side) and there are no plans to address it in the near future. Hence closing the bug.
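The mapping problem described above can be illustrated with a small sketch. This is not GlusterFS code: the `Brick`, `Lock`, and `migrate_file` names are hypothetical, and the model is simplified to one idea — a lock's owner can only be re-associated on the destination brick if both bricks key clients by the same uniform identity (the "uniform client_t mapping" the bug asks about); locks owned by a client known only to the source brick cannot be migrated.

```python
# Hypothetical sketch (not GlusterFS code) of migrating per-file lock state
# between two brick processes during rebalance. Each brick tracks connected
# clients by a cluster-wide client UID; migration works only for locks whose
# owner is known under the same UID on the destination brick.
from dataclasses import dataclass, field

@dataclass
class Lock:
    client_uid: str          # cluster-wide owner identity (the "uniform" key)
    byte_range: tuple        # (start, length) of the byte-range lock

@dataclass
class Brick:
    name: str
    clients: dict = field(default_factory=dict)  # client_uid -> conn state
    locks: dict = field(default_factory=dict)    # path -> [Lock]

    def connect(self, client_uid):
        self.clients[client_uid] = {"transport": f"{client_uid}@{self.name}"}

    def take_lock(self, path, client_uid, rng):
        self.locks.setdefault(path, []).append(Lock(client_uid, rng))

def migrate_file(path, src, dst):
    """Move a file's lock state from src to dst during rebalance.

    Locks whose owner is not registered on dst under the same UID cannot
    be mapped and are returned as dropped."""
    migrated, dropped = [], []
    for lock in src.locks.pop(path, []):
        if lock.client_uid in dst.clients:
            dst.locks.setdefault(path, []).append(lock)
            migrated.append(lock)
        else:
            dropped.append(lock)  # no way to map this owner on dst
    return migrated, dropped

src, dst = Brick("brick-A"), Brick("brick-B")
src.connect("client-1"); dst.connect("client-1")  # same UID on both bricks
src.connect("client-2")                            # known only to the source
src.take_lock("/f", "client-1", (0, 100))
src.take_lock("/f", "client-2", (100, 100))
moved, lost = migrate_file("/f", src, dst)
```

Running the sketch, `client-1`'s lock migrates while `client-2`'s is lost, which is the coordination gap the threads above discuss: without a uniform client identity on every brick, server-side state cannot follow the data.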