Bug 1432004

Summary: device remove should check there are no pending heals before proceeding with the brick replacement
Product: [Red Hat Storage] Red Hat Gluster Storage    Reporter: krishnaram Karthick <kramdoss>
Component: heketi    Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA    QA Contact: Apeksha <akhakhar>
Severity: high    Docs Contact:
Priority: unspecified
Version: cns-3.5    CC: asriram, hchiramm, madam, mliyazud, pprakash, rcyriac, rhs-bugs, rreddy, rtalur, srmukher, sselvan, storage-qa-internal
Target Milestone: ---   
Target Release: CNS 3.6   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: heketi-5.0.0-8, rhgs-volmanager-docker-5.0.0-10    Doc Type: Bug Fix
Doc Text:
Previously, before executing a remove device command, users had to check for ongoing self-heal operations and wait for them to complete in order to ensure data consistency. With this update, the remove device operation itself checks for ongoing self-heals and aborts with an error if any are found.
Story Points: ---
Clone Of:
Clones: 1432435 (view as bug list)    Environment:
Last Closed: 2017-10-11 07:07:22 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1432969, 1467822    
Bug Blocks: 1432435, 1445447    

Description krishnaram Karthick 2017-03-14 09:55:50 UTC
Description of problem:
On a replica 3 volume, which is the default volume type supported for CNS, running device remove on all three devices under which the volume is carved (before self-heal completes) can end in data loss.

Because we internally do a "replace brick ... force", there is no check on whether a heal is still in progress; when the last brick holding the latest data is also replaced forcefully, the data is lost.

So, heketi has to validate that no active heals are in progress before proceeding with the replace brick.
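
As an illustration only (this is not the actual heketi fix, and <volname> is a placeholder for the gluster volume backing the bricks), a guard of this kind run on a gluster node would catch the situation:

    # Illustrative sketch, not heketi's code: sum the pending heal entries
    # reported for every brick of <volname> and refuse to replace a brick
    # while any remain.
    pending=$(gluster volume heal <volname> info | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')
    if [ "$pending" -gt 0 ]; then
        echo "ERROR: $pending entries still healing on <volname>; not safe to replace a brick" >&2
        exit 1
    fi

The real check has to live in heketi's device remove path itself, so that it runs before every internal replace brick.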


Version-Release number of selected component (if applicable):
heketi-client-4.0.0-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a 100 GB PVC and write 50 GB of data to it.
2. Find out the three devices on which the volume is built.
3. Run heketi device remove <node1device1> && heketi device remove <node1device2> && heketi device remove <node1device3>. The commands can also be run serially, waiting for each one to complete, as long as they are all issued before self-heal finishes (see the heal-monitoring snippet below).
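
One way to confirm the race window while reproducing (not part of the original steps; <volname> is the gluster volume found in step 2) is to keep an eye on the pending heal counts as the removes run:

    # Hypothetical monitoring helper for the reproduction, not from the report:
    # print the per-brick pending heal entry counts for the volume every 5 seconds.
    watch -n 5 'gluster volume heal <volname> info | grep "Number of entries:"'

As long as any brick reports a non-zero count, a forced replace of the remaining bricks risks losing the only up-to-date copy.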

Actual results:
The brick replacement proceeds without checking for an on-going self-heal.

Expected results:
The brick replacement should not proceed while a self-heal is in progress on the volume.

Additional info:

Comment 5 Michael Adam 2017-03-15 23:51:30 UTC
Note that it would *actually* be Gluster's job to do this protection!
We should file an RFE for gluster itself. (And then possibly still add the protection to heketi until that's available.)

Comment 6 Raghavendra Talur 2017-03-16 13:25:59 UTC
Filed a RHGS bug https://bugzilla.redhat.com/show_bug.cgi?id=1432969

Comment 8 Raghavendra Talur 2017-07-25 12:39:30 UTC
https://github.com/heketi/heketi/pull/718

Comment 10 Mohamed Ashiq 2017-08-16 07:00:58 UTC
(In reply to Raghavendra Talur from comment #8)
> https://github.com/heketi/heketi/pull/718

Merged upstream.

Comment 11 Apeksha 2017-09-14 11:19:14 UTC
Verified on builds heketi-client-5.0.0-11.el7rhgs.x86_64 and cns-deploy-5.0.0-37.el7rhgs.x86_64.

Comment 13 Raghavendra Talur 2017-09-26 07:38:42 UTC
I have provided the doc text; please review.

Comment 15 Raghavendra Talur 2017-10-04 08:51:23 UTC
doc text looks good to me

Comment 17 errata-xmlrpc 2017-10-11 07:07:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2879