Bug 1303177 - Enhancement: Allow self-heal to continue from another daemon
Summary: Enhancement: Allow self-heal to continue from another daemon
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-29 18:57 UTC by Joe Julian
Modified: 2018-11-20 09:41 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-11-20 09:10:02 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Joe Julian 2016-01-29 18:57:11 UTC
When a self-heal is started, that client continues to perform the self-heal until it is complete. If, however, that client is stopped (the client is unmounted, the shd is restarted, etc.), the heal starts over from the beginning. For files that take many days to heal, this behavior is undesirable.

Since the bricks already hold the lock data that shows how far along the heal is, there should be a way to record that progress (in metadata? in brick memory?) and allow another client to continue from there.
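
A rough sketch of the checkpoint idea, in C (illustrative only, not GlusterFS code: the xattr name and helper functions are hypothetical, and a real implementation would presumably store the checkpoint in a trusted.* xattr on the brick):

/* Illustrative sketch: persist heal progress as an extended attribute on the
 * file being healed, so a different client/shd can read it back and resume
 * from the recorded offset instead of restarting the heal from scratch.
 * "user.heal-checkpoint" is a made-up name; a real implementation would
 * likely use a trusted.* xattr written on the brick. */
#include <stdint.h>
#include <stdio.h>
#include <sys/xattr.h>

#define HEAL_CKPT_XATTR "user.heal-checkpoint"

/* Record how far the heal of this file has progressed. */
static int heal_checkpoint_save(const char *path, uint64_t offset)
{
    return setxattr(path, HEAL_CKPT_XATTR, &offset, sizeof(offset), 0);
}

/* Return the saved offset, or 0 if no checkpoint exists yet. */
static uint64_t heal_checkpoint_load(const char *path)
{
    uint64_t offset = 0;

    if (getxattr(path, HEAL_CKPT_XATTR, &offset, sizeof(offset)) < 0)
        return 0;
    return offset;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    heal_checkpoint_save(argv[1], 1048576);  /* pretend the first 1 MiB healed */
    printf("resume heal at offset %llu\n",
           (unsigned long long)heal_checkpoint_load(argv[1]));
    return 0;
}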

If this were integrated with throttling, the pending heals could even become a pooled work queue picked up by whichever shd has free tokens, allowing the entire cluster to make progress on the heal and, potentially, to take over a background heal from a fuse client.
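
To make the pooled, token-throttled idea concrete, here is a minimal sketch in C of a token-gated heal queue (names such as heal_pool and heal_pool_claim are invented for illustration and are not GlusterFS internals):

/* Sketch only: a token-throttled heal queue. A daemon takes a token before
 * claiming a pending heal, so whichever shd has free tokens picks up the
 * next entry and the work can migrate between daemons. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct heal_pool {
    pthread_mutex_t lock;
    int free_tokens;   /* heals this daemon may run concurrently    */
    int pending;       /* count of files still awaiting heal        */
};

/* Claim one pending heal if a token is available; return false otherwise. */
static bool heal_pool_claim(struct heal_pool *p)
{
    bool claimed = false;

    pthread_mutex_lock(&p->lock);
    if (p->free_tokens > 0 && p->pending > 0) {
        p->free_tokens--;
        p->pending--;
        claimed = true;
    }
    pthread_mutex_unlock(&p->lock);
    return claimed;
}

/* Give the token back when the heal finishes or is handed off. */
static void heal_pool_release(struct heal_pool *p)
{
    pthread_mutex_lock(&p->lock);
    p->free_tokens++;
    pthread_mutex_unlock(&p->lock);
}

int main(void)
{
    struct heal_pool pool = { PTHREAD_MUTEX_INITIALIZER, 2, 5 };

    while (heal_pool_claim(&pool)) {
        printf("healing one file (pending left: %d)\n", pool.pending);
        heal_pool_release(&pool);
    }
    return 0;
}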

This should be controllable from the CLI: the admin could stop the self-heal from continuing on a specific client and have it continue from another shd; optionally, the specific shd could be named as part of the instruction. This would be useful when a fuse client begins a background self-heal across the slower client network, but the admin wants the self-heal to run from an shd across a faster backend connection.

Comment 1 Pranith Kumar K 2016-03-31 12:44:03 UTC
I think this will be possible with the granular entry/data self-heal feature that is coming up. Will keep you updated :-).

Comment 2 Kaushal 2017-03-08 10:50:07 UTC
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

Comment 3 Vijay Bellur 2018-11-20 09:41:41 UTC
Migrated to github:

https://github.com/gluster/glusterfs/issues/599

Please follow the github issue for further updates on this bug.

