Bug 1432004 - device remove should check there are no pending heals before proceeding with the brick replacement
Summary: device remove should check there are no pending heals before proceeding with the brick replacement
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: cns-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: CNS 3.6
Assignee: Raghavendra Talur
QA Contact: Apeksha
URL:
Whiteboard:
Depends On: 1432969 1467822
Blocks: 1432435 1445447
 
Reported: 2017-03-14 09:55 UTC by krishnaram Karthick
Modified: 2019-01-12 13:15 UTC
CC List: 12 users

Fixed In Version: heketi-5.0.0-8 rhgs-volmanager-docker-5.0.0-10
Doc Type: Bug Fix
Doc Text:
Previously, before executing a device remove command, administrators had to check manually for ongoing self-heal operations and wait for them to complete in order to ensure data consistency. With this update, the device remove operation itself checks for ongoing self-heal operations and aborts with an error if any are found.
Clone Of:
Clones: 1432435
Environment:
Last Closed: 2017-10-11 07:07:22 UTC




Links
System: Red Hat Product Errata
ID: RHEA-2017:2879
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: heketi bug fix and enhancement update
Last Updated: 2017-10-11 11:07:06 UTC

Description krishnaram Karthick 2017-03-14 09:55:50 UTC
Description of problem:
On a replica 3 volume, which is the default volume type supported for CNS, running device remove on all three devices backing the volume before self-heal completes can result in data loss.

Because heketi performs a replace-brick force internally, there is no check for an ongoing heal, so when the last brick holding the latest copy of the data is also replaced forcefully, the data is lost.

Heketi therefore has to validate that no active healing is in progress before proceeding with the replace-brick.
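
For illustration only (this is a sketch, not heketi's actual code), the kind of guard requested here could query gluster volume heal info and abort the brick replacement while any heal entries are still pending:

#!/bin/bash
# Sketch only: refuse to replace a brick while the volume still reports
# pending self-heal entries.
VOLNAME="$1"

# Sum the "Number of entries:" counts that heal info reports per brick.
pending=$(gluster volume heal "$VOLNAME" info \
          | awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum + 0}')

if [ "$pending" -gt 0 ]; then
    echo "ERROR: $pending heal entries pending on $VOLNAME; aborting replace-brick" >&2
    exit 1
fi

echo "No pending heals on $VOLNAME; safe to proceed with replace-brick"

A check of this kind would have to be run against each volume whose bricks are being migrated off the device; the heal info output above is the same information such a check would rely on.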


Version-Release number of selected component (if applicable):
heketi-client-4.0.0-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a 100 GB PVC and write 50 GB of data.
2. Find the three devices on which the volume is built.
3. Run heketi device remove <node1device1> && heketi device remove <node1device2> && heketi device remove <node1device3>. The commands can also be run serially, waiting for each one to complete, as long as each is issued before self-heal finishes (a heal-status check is sketched below).
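
As a reference for step 3 (not part of the original report), the heal state can be confirmed with the standard gluster CLI before issuing the next device remove; the volume name is illustrative, since heketi-created volumes are typically named vol_<id>:

# Illustrative check: confirm self-heal is still in progress before the next device remove
gluster volume heal vol_<volume-id> info | grep 'Number of entries:'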

Actual results:
The brick replacement proceeds without checking for ongoing self-heal.

Expected results:
The brick replacement should not proceed while a self-heal is in progress on the volume.

Additional info:

Comment 5 Michael Adam 2017-03-15 23:51:30 UTC
Note that it would *actually* be Gluster's job to do this protection!
We should file an RFE for Gluster itself (and then possibly still add the protection to heketi until that is available).

Comment 6 Raghavendra Talur 2017-03-16 13:25:59 UTC
Filed a RHGS bug https://bugzilla.redhat.com/show_bug.cgi?id=1432969

Comment 8 Raghavendra Talur 2017-07-25 12:39:30 UTC
https://github.com/heketi/heketi/pull/718

Comment 10 Mohamed Ashiq 2017-08-16 07:00:58 UTC
(In reply to Raghavendra Talur from comment #8)
> https://github.com/heketi/heketi/pull/718

Merged upstream.

Comment 11 Apeksha 2017-09-14 11:19:14 UTC
Verified on builds heketi-client-5.0.0-11.el7rhgs.x86_64 and cns-deploy-5.0.0-37.el7rhgs.x86_64.

Comment 13 Raghavendra Talur 2017-09-26 07:38:42 UTC
I have provided the doc text; please review.

Comment 15 Raghavendra Talur 2017-10-04 08:51:23 UTC
Doc text looks good to me.

Comment 17 errata-xmlrpc 2017-10-11 07:07:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2879

