Bug 1344625

Summary: Fail the volume delete operation if one of the glusterd instances in the cluster is down

| Field | Value |
|---|---|
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | glusterd |
| Version | rhgs-3.1 |
| Status | CLOSED ERRATA |
| Severity | unspecified |
| Priority | unspecified |
| Reporter | Atin Mukherjee <amukherj> |
| Assignee | Atin Mukherjee <amukherj> |
| QA Contact | Byreddy <bsrirama> |
| Docs Contact | |
| CC | amukherj, bugs, kaushal, lbailey, rcyriac, rhinduja, rhs-bugs, storage-qa-internal, vbellur |
| Keywords | ZStream |
| Target Milestone | --- |
| Target Release | RHGS 3.1.3 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | glusterfs-3.7.9-10 |
| Doc Type | Bug Fix |
| Story Points | --- |
| Clone Of | 1344407 |
| Environment | |
| Last Closed | 2016-06-23 05:26:33 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 1344407, 1344631, 1344634 |
| Bug Blocks | 1311817, 1344239 |

Doc Text:
The 'volume delete' operation succeeded even when an instance of glusterd was down. This meant that when the glusterd instance recovered, it re-synced the deleted volume to the cluster. This update ensures that 'volume delete' operations fail when an instance of glusterd is not available.
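The Doc Text above summarizes the behaviour change. Below is a minimal CLI sketch of that scenario, included for illustration only and not a transcript from this bug: it assumes a two-node cluster with hypothetical hostnames node1 and node2, a volume named testvol, systemd-managed hosts, and the fixed build (glusterfs-3.7.9-10 or later); exact error text may differ by release.

```sh
# On node1: stop the volume first (a volume must be stopped before it can be deleted).
gluster --mode=script volume stop testvol

# On node2: take this node's glusterd instance down.
systemctl stop glusterd

# On node1: with the fix, the delete is rejected while a peer's glusterd is
# down (the pre-fix behaviour was for the delete to succeed and for the volume
# to be re-synced into the cluster when the peer recovered).
gluster --mode=script volume delete testvol   # expected to fail

# On node2: bring glusterd back up.
systemctl start glusterd

# On node1: once every peer is reported as Connected again, the delete succeeds.
gluster peer status
gluster --mode=script volume delete testvol
```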
Description (Atin Mukherjee, 2016-06-10 08:24:32 UTC)

Downstream patch https://code.engineering.redhat.com/gerrit/76322 posted for review.

Laura, this needs your attention, hence I am raising a needinfo now :)
~Atin

Laura, mentioning that the node is unavailable will not be technically correct, since that could mean the node is down or under maintenance. The issue is specifically about one or more glusterd instances being down. Can you please reword?
~Atin

LGTM :)

Verified this bug using the build "glusterfs-3.7.9-10". The fix works as expected: the volume cannot be deleted while peer nodes are down, it can be deleted once the offline nodes come back up, and new volumes can still be created.

Test cases verified for this fix:

1. Stop and delete a volume when one of the nodes is down - Pass
2. Delete the volume after starting the shut-down node - Pass
3. Stop and delete the volume when nodes are down - Pass
4. Bring up one of two offline nodes and delete the volume - Pass
5. Bring up all of the offline nodes and delete the volume - Pass
6. Delete the volume when a peer node that hosts none of the volume's bricks is offline - Pass
7. Stop the volume while all nodes are online, take one node offline, and delete the volume - Pass
8. Stop the volume while one peer node is down, probe a new node, and delete the volume - Pass
9. Create a volume (without starting it), take one node down, and delete the volume - Pass
10. With multiple volumes present, take one peer node down and delete the volumes - Pass
11. Delete the volume(s) once the offline node(s) come back up - Pass
12. Delete the volume after powering off one of the peer nodes - Pass
13. With one of the nodes down, create a volume and try to delete it - Pass
14. Create a volume, take one node down, and create a new volume using bricks on the online nodes - Pass

With all of the above details, moving this bug to the VERIFIED state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
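As a follow-up to the verification steps listed above, the snippet below is a hypothetical pre-check (not part of the QA test cases) that an administrator could run before attempting a delete, since the fixed behaviour is to reject 'volume delete' while any glusterd instance in the cluster is down. The volume name testvol is an assumption.

```sh
# List the pool and each peer's state (Connected / Disconnected).
gluster pool list

# Only attempt the delete when no peer reports itself as Disconnected;
# otherwise the operation is expected to be rejected by the fixed glusterd.
if gluster peer status | grep -q Disconnected; then
    echo "At least one peer is disconnected; deferring 'volume delete'." >&2
else
    gluster --mode=script volume delete testvol
fi
```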