Bug 1275633
| Summary: | Clone creation should not be successful when the node participating in volume goes down. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Shashank Raj <sraj> |
| Component: | snapshot | Assignee: | Avra Sengupta <asengupt> |
| Status: | CLOSED ERRATA | QA Contact: | Shashank Raj <sraj> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.1 | CC: | asengupt, asrivast, rhinduja, rhs-bugs, rjoseph, sankarshan, sashinde, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | SNAPSHOT | | |
| Fixed In Version: | glusterfs-3.7.5-10 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1276023 (view as bug list) | Environment: | |
| Last Closed: | 2016-03-01 05:45:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1260783, 1276023, 1288030 | | |
Description
Shashank Raj 2015-10-27 11:47:04 UTC
Patch sent for master (upstream): http://review.gluster.org/12490

Master URL: http://review.gluster.org/#/c/12490/
Release 3.7 URL: http://review.gluster.org/#/c/12869/
RHGS 3.1.2 URL: https://code.engineering.redhat.com/gerrit/63012

Verified this bug with the latest glusterfs-3.7.5-10 build, and it is working as expected. The steps followed are below:

1) Create a 4-node cluster, create a tiered volume using all the nodes, and start it.

2) Create a snapshot of this volume.

3) Create clones of this snapshot using the commands below:

```
[root@dhcp35-141 ~]# gluster snapshot clone clone1 snap1
snapshot clone: success: Clone clone1 created successfully
[root@dhcp35-141 ~]# gluster snapshot clone clone2 snap1
snapshot clone: success: Clone clone2 created successfully
[root@dhcp35-141 ~]# gluster snapshot clone clone3 snap1
snapshot clone: success: Clone clone3 created successfully
```

4) Shut down one of the nodes.

5) Check the volume status; the bricks on the down node are no longer listed:

```
Status of volume: tiervolume
Gluster process                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.35.142:/bricks/brick3/b3     49155     0          Y       15541
Brick 10.70.35.141:/bricks/brick3/b3     49155     0          Y       15564
Brick 10.70.35.228:/bricks/brick3/b3     49155     0          Y       15676
Cold Bricks:
Brick 10.70.35.228:/bricks/brick0/b0     49152     0          Y       15474
Brick 10.70.35.141:/bricks/brick0/b0     49152     0          Y       15400
Brick 10.70.35.142:/bricks/brick0/b0     49152     0          Y       15376
Brick 10.70.35.228:/bricks/brick1/b1     49153     0          Y       15493
Brick 10.70.35.141:/bricks/brick1/b1     49153     0          Y       15419
Brick 10.70.35.142:/bricks/brick1/b1     49153     0          Y       15395
Brick 10.70.35.228:/bricks/brick2/b2     49154     0          Y       15512
Brick 10.70.35.141:/bricks/brick2/b2     49154     0          Y       15438
Brick 10.70.35.142:/bricks/brick2/b2     49154     0          Y       15414
NFS Server on localhost                  2049      0          Y       15696
Self-heal Daemon on localhost            N/A       N/A        Y       15704
Quota Daemon on localhost                N/A       N/A        Y       15712
NFS Server on 10.70.35.142               2049      0          Y       15561
Self-heal Daemon on 10.70.35.142         N/A       N/A        Y       15569
Quota Daemon on 10.70.35.142             N/A       N/A        Y       15577
NFS Server on 10.70.35.141               2049      0          Y       15584
Self-heal Daemon on 10.70.35.141         N/A       N/A        Y       15592
Quota Daemon on 10.70.35.141             N/A       N/A        Y       15600

Task Status of Volume tiervolume
```

6) Try to create a clone from the snapshot from different nodes and observe that it fails with the message "quorum is not met":

```
[root@dhcp35-141 ~]# gluster snapshot clone clone4 snap1
snapshot clone: failed: quorum is not met
Snapshot command failed

[root@dhcp35-228 ~]# gluster snapshot clone clone5 snap1
snapshot clone: failed: quorum is not met
Snapshot command failed
```

Based on the above observations, marking this bug as Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
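For quick reference, the verification flow above can be scripted along the following lines. This is a minimal sketch, not part of the original report: the hostnames (node1..node4), brick paths, volume and clone names, and the use of a plain distributed volume instead of the tiered volume from the actual test are assumptions, and the node outage is simulated by stopping glusterd on one peer rather than powering the node off. GlusterFS snapshots also require bricks on thin-provisioned LVM, which this sketch does not set up.

```bash
#!/bin/bash
# Minimal sketch of the verification flow above. Assumptions: hostnames
# node1..node4, brick paths, a plain distributed volume instead of the
# tiered volume used in the real test, and simulating the outage by
# stopping glusterd instead of powering a node off. Snapshots require
# bricks on thin-provisioned LVM, which is not set up here.
set -e

VOL=testvol
SNAP=snap1

# 1) Assumes a 4-node trusted pool already exists; create and start a
#    volume with one brick per node.
gluster volume create "$VOL" \
    node1:/bricks/brick0/b0 node2:/bricks/brick0/b0 \
    node3:/bricks/brick0/b0 node4:/bricks/brick0/b0
gluster volume start "$VOL"

# 2) Take a snapshot of the volume.
gluster snapshot create "$SNAP" "$VOL"

# 3) Clones succeed while all nodes are up.
gluster snapshot clone clone1 "$SNAP"
gluster snapshot clone clone2 "$SNAP"

# 4) Take one node down (the original test powered the node off).
ssh node4 'systemctl stop glusterd'

# 5) Bricks on the down node should no longer be listed as online.
gluster volume status "$VOL"

# 6) With the fix, clone creation must now fail with
#    "snapshot clone: failed: quorum is not met".
if gluster snapshot clone clone3 "$SNAP"; then
    echo "BUG: clone succeeded while a node was down"
else
    echo "Clone correctly rejected: quorum is not met"
fi
```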