Bug 1718789

Summary: After Upgrade to 3.11.3, all bricks in BHV went offline
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Sri Vignesh Selvan <sselvan>
Component: heketi
Assignee: John Mulligan <jmulligan>
Status: CLOSED DUPLICATE
QA Contact: Prasanth <pprakash>
Severity: urgent
Priority: unspecified
Version: ocs-3.11
CC: atumball, hchiramm, knarra, kramdoss, madam, pkarampu, pprakash, prasanna.kalever, rhs-bugs, rtalur, sabose, sankarshan, storage-qa-internal, vbellur, xiubli
Keywords: ZStream
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2019-06-12 05:05:58 UTC

Comment 2 Prasanna Kumar Kalever 2019-06-10 08:34:27 UTC
What makes you feel that this is a bug in the gluster-block component?

Comment 5 Raghavendra Talur 2019-06-12 05:05:58 UTC

*** This bug has been marked as a duplicate of bug 1700662 ***

Comment 6 Raghavendra Talur 2019-06-12 05:32:27 UTC
Root cause:
When the OCS uninstall playbook is run, it fails to clean the /var/lib/glusterd directory on the nodes where glusterfs pods were deployed. If a new OCS deployment then places *at least* one glusterfs pod on those old nodes, the volumes created in the previous deployment show up in gluster command output. These volumes are visible neither in OpenShift nor in Heketi, which causes confusion.
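
One way to see the mismatch (a sketch only; the pod name is a placeholder, the commands themselves are the standard oc/gluster/heketi CLIs):

  # Volumes as seen by glusterd inside one of the new glusterfs pods:
  oc rsh <glusterfs-pod> gluster volume list

  # Volumes the new deployment actually knows about:
  heketi-cli volume list
  oc get pv

Any volume reported by gluster but absent from both Heketi and OpenShift is a leftover from the previous deployment.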

Impact:
This has no impact on dynamic provisioning of PVCs or on the use of PVs created in the new deployment. However, Heketi device and node replace operations might fail.

Workaround:
Identify the volumes that were created in the previous deployment, delete their directories under /var/lib/glusterd/, and remove their brick entries from /var/lib/heketi/fstab, as sketched below.
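
A sketch of that cleanup, assuming the stale volume name has been identified (VOLNAME and the pod name are placeholders; verify every path and take backups before deleting anything):

  # Back up the state that will be modified:
  cp -a /var/lib/glusterd /var/lib/glusterd.bak
  cp /var/lib/heketi/fstab /var/lib/heketi/fstab.bak

  # Note the brick paths of the stale volume, then remove its definition:
  grep -i brick /var/lib/glusterd/vols/VOLNAME/info
  rm -rf /var/lib/glusterd/vols/VOLNAME

  # Remove the fstab lines whose mount paths match the bricks noted
  # above (editing by hand is safest):
  vi /var/lib/heketi/fstab

  # Restart glusterd so it reloads its volume list; in a converged
  # deployment that means recreating the glusterfs pod:
  oc delete pod <glusterfs-pod>

Repeat on every node that hosted a glusterfs pod in the previous deployment.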