Bug 988769

Summary: Stale volume information is shown on an RHS node that was down while the volume was stopped and deleted from another peer
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Status: CLOSED EOL
Severity: medium
Priority: medium
Reporter: SATHEESARAN <sasundar>
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
QA Contact: SATHEESARAN <sasundar>
CC: nlevinki, rhinduja, rhs-bugs, spandura, vbellur
Doc Type: Bug Fix
Type: Bug
Target Milestone: ---
Target Release: ---
Last Closed: 2015-12-03 17:15:43 UTC

Description SATHEESARAN 2013-07-26 10:28:31 UTC
Description of problem:
When one of the RHS nodes is shut down and the volume is stopped and deleted from another node, the node that was powered down still shows information for the deleted volume after it is powered back up.

Tried,"gluster volume sync <node-which-haven't-went-for-power-cycle>", and that doesn't help

Version-Release number of selected component (if applicable):
RHS2.1-glusterfs-3.4.0.12beta6-1

How reproducible:
Always

Steps to Reproduce:
1. Create a trusted storage pool of 4 Nodes, say NODE1, NODE2, NODE3, NODE4

2. Create a 1x2 replicate volume:
   gluster volume create repvol replica 2 NODE1:brick1 NODE2:brick1

3. Start the volume:
   gluster volume start repvol

4. Check "gluster volume info" and "gluster volume status" from all nodes in the cluster

5. Power down (force off) one of the nodes hosting a replica brick, e.g. NODE2

6. While NODE2 is down, stop and delete the volume from NODE1 (the full command sequence is sketched after step 8)

7. Power NODE2 back on

8. Once NODE2 comes back up, check "gluster volume info" and "gluster volume status" on it
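For clarity, a rough sketch of the full command sequence follows. The node names are taken from the steps above; the brick path /rhs/brick1 is an assumption (the report abbreviates it as "brick1"), since gluster expects an absolute brick path.

# On NODE1 (steps 2-3):
gluster volume create repvol replica 2 NODE1:/rhs/brick1 NODE2:/rhs/brick1
gluster volume start repvol

# On every node (step 4), both commands should list repvol:
gluster volume info repvol
gluster volume status repvol

# Force off NODE2 (step 5), then on NODE1 (step 6):
gluster volume stop repvol
gluster volume delete repvol

# Power NODE2 back on (step 7), then on NODE2 (step 8):
gluster volume info
gluster volume status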

Actual results:
Even though the volume was stopped and deleted while NODE2 was down, NODE2 still shows information for that volume in "gluster volume info" and "gluster volume status" after it comes back up.

Expected results:
Since the volume is no longer present, NODE2 should sync its configuration with the other nodes in the cluster as soon as it comes up, and it should not show any information related to that volume.
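In other words, once glusterd on NODE2 rejoins the trusted storage pool, one would expect output along these lines on NODE2 (illustrative only; the exact wording of the CLI response may differ):

[root@NODE2 ~]# gluster volume info
No volumes present
[root@NODE2 ~]# gluster volume status
No volumes present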

Additional info:

Comment 2 Vivek Agarwal 2015-12-03 17:15:43 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which it was reported has now reached End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.