Bug 1397406 - glfs_fini does not send parent down on inactive graphs.
Summary: glfs_fini does not send parent down on inactive graphs.
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-22 13:16 UTC by rjoseph
Modified: 2020-03-12 12:38 UTC
CC List: 7 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-03-12 12:38:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description rjoseph 2016-11-22 13:16:30 UTC
Description of problem:
At any given time we maintain three graph pointers: active_subvol, next_subvol
and mip_subvol (migration-in-progress). A new graph moves from next_subvol ->
mip_subvol -> active_subvol when the glfs_active_subvol function is called.
Before every fop we call glfs_active_subvol to make sure the fop goes to the
correct graph.

When glfs_init is called, the new graph is assigned to next_subvol, and from
glfs_init we call glfs_active_subvol, which migrates the graph. But if for
some reason the migration fails, e.g. the first lookup on the "/" directory
fails, then we are left in a state where mip_subvol points to the current
graph and both active_subvol and next_subvol are NULL.

In glfs_fini we only send GF_EVENT_PARENT_DOWN to active_subvol, and if it is
NULL we go ahead and delete fs and the corresponding ctx. This can cause
problems, since the xlators in the remaining graph never receive a parent-down
notification, and we may fail to free the resources they hold.
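
A minimal sketch of the kind of change suggested above, i.e. sending
GF_EVENT_PARENT_DOWN to every graph on ctx->graphs instead of only to
active_subvol (list and field names follow glusterfs_ctx_t and
glusterfs_graph_t in libglusterfs; this is illustrative, not the posted
patch):

#include "glusterfs.h"    /* glusterfs_ctx_t, glusterfs_graph_t, GF_EVENT_PARENT_DOWN */
#include "xlator.h"       /* xlator_t, xlator_notify() */

static void
notify_parent_down_all_graphs (glusterfs_ctx_t *ctx)
{
        glusterfs_graph_t *graph = NULL;
        xlator_t          *top   = NULL;

        list_for_each_entry (graph, &ctx->graphs, list) {
                top = graph->first;    /* top-most xlator of this graph */
                if (!top)
                        continue;
                /* notify active and inactive graphs alike so their xlators
                   can flush and release resources before fs and ctx go away */
                xlator_notify (top, GF_EVENT_PARENT_DOWN, top, NULL);
        }
}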



Comment 1 Worker Ant 2016-11-22 13:22:04 UTC
REVIEW: http://review.gluster.org/15902 (gfapi: send parent down on non-active graphs as well) posted (#1) for review on master by Rajesh Joseph (rjoseph)

Comment 2 Worker Ant 2016-12-05 12:38:31 UTC
REVIEW: http://review.gluster.org/15902 (gfapi: send parent down on non-active graphs as well) posted (#2) for review on master by Rajesh Joseph (rjoseph)

Comment 3 Worker Ant 2016-12-05 12:47:07 UTC
REVIEW: http://review.gluster.org/15902 (gfapi: send parent down on non-active graphs as well) posted (#3) for review on master by Rajesh Joseph (rjoseph)

Comment 4 Worker Ant 2016-12-05 14:42:12 UTC
REVIEW: http://review.gluster.org/15902 (gfapi: send parent down on non-active graphs as well) posted (#4) for review on master by Rajesh Joseph (rjoseph)

Comment 5 Worker Ant 2017-02-17 09:05:24 UTC
REVIEW: https://review.gluster.org/15902 (gfapi: Cleanup non-active graphs on graph switch) posted (#5) for review on master by Rajesh Joseph (rjoseph)

Comment 6 Worker Ant 2017-02-17 09:05:27 UTC
REVIEW: https://review.gluster.org/16656 (dht: Deallocate memory allocated by DHT in fini) posted (#1) for review on master by Rajesh Joseph (rjoseph)

Comment 7 Worker Ant 2017-03-10 09:47:11 UTC
REVIEW: https://review.gluster.org/16656 (dht: Deallocate memory allocated by DHT in fini) posted (#2) for review on master by Rajesh Joseph (rjoseph)

Comment 8 Worker Ant 2017-03-10 09:47:21 UTC
REVIEW: https://review.gluster.org/15902 (gfapi: Cleanup non-active graphs on graph switch) posted (#6) for review on master by Rajesh Joseph (rjoseph)

Comment 10 Amar Tumballi 2019-07-15 05:53:04 UTC
Rafi, it looks like with the many cleanups you did on the client-side fini() for SHD, I believe this is done. Feel free to close it NEXTRELEASE if so.

Comment 11 Mohammed Rafi KC 2019-07-16 09:44:30 UTC
(In reply to Amar Tumballi from comment #10)
> Rafi, it looks like with the many cleanups you did on the client-side fini()
> for SHD, I believe this is done. Feel free to close it NEXTRELEASE if so.

Amar, the graph cleanup changes were done only for SHD, but the framework is available for glfs clients to use as well. We can iterate through the inactive graphs and perform the graph cleanup.
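
For illustration, such an iteration could look roughly like the sketch below.
glusterfs_graph_destroy() is the generic libglusterfs helper (it runs fini on
the graph's xlators and then frees the graph); the SHD cleanup framework
referred to above may use different entry points, so treat this as a sketch,
not the actual implementation:

#include "glfs-internal.h"    /* struct glfs */
#include "glusterfs.h"        /* glusterfs_graph_t, glusterfs_graph_destroy() */

static void
cleanup_inactive_graphs (struct glfs *fs)
{
        glusterfs_ctx_t   *ctx   = fs->ctx;
        glusterfs_graph_t *graph = NULL;
        glusterfs_graph_t *tmp   = NULL;

        list_for_each_entry_safe (graph, tmp, &ctx->graphs, list) {
                /* skip the graph currently serving fops */
                if (fs->active_subvol && graph == fs->active_subvol->graph)
                        continue;

                list_del_init (&graph->list);
                glusterfs_graph_destroy (graph);
        }
}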

Comment 12 Worker Ant 2020-03-12 12:38:32 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/917 and will be tracked there from now on. Visit the GitHub issue URL for further details.

