Bug 1101514 - [SNAPSHOT]: Able to delete/restore even when glusterd quorum doesn't meet
Summary: [SNAPSHOT]: Able to delete/restore even when glusterd quorum doesn't meet
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Raghavendra Bhat
QA Contact: Rahul Hinduja
URL:
Whiteboard: SNAPSHOT
Depends On: 1101561
Blocks:
 
Reported: 2014-05-27 11:36 UTC by Rahul Hinduja
Modified: 2016-09-17 12:53 UTC (History)
CC List: 6 users

Fixed In Version: glusterfs-3.6.0.16-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1101561
Environment:
Last Closed: 2014-09-22 19:39:24 UTC
Embargoed:




Links:
System ID: Red Hat Product Errata RHEA-2014:1278
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage Server 3.0 bug fix and enhancement update
Last Updated: 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2014-05-27 11:36:21 UTC
Description of problem:
=======================

By design, snapshot delete and restore should fail when glusterd quorum is not met, but currently both operations succeed.

[root@inception ~]# gluster peer status
Number of Peers: 3

Hostname: rhs-arch-srv2.lab.eng.blr.redhat.com
Uuid: e6092fa0-0891-4199-90db-ea1e7469bccf
State: Peer in Cluster (Disconnected)

Hostname: rhs-arch-srv3.lab.eng.blr.redhat.com
Uuid: 75a28a27-c052-45cb-81dd-41bfb8857d4a
State: Peer in Cluster (Connected)

Hostname: rhs-arch-srv4.lab.eng.blr.redhat.com
Uuid: d3761d79-90bc-47ae-8faf-f2d39f9f5677
State: Peer in Cluster (Disconnected)
[root@inception ~]# gluster snapshot delete vol0
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: vol0: snap removed successfully
[root@inception ~]#
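
In the peer status above only one of the three peers is connected, so counting the local node just 2 of the 4 glusterds are up, which should not satisfy quorum. Below is a minimal illustrative sketch of the intended check, assuming glusterd quorum follows the usual majority rule (strictly more than half of the nodes in the trusted pool must have glusterd running); it is not the actual glusterd implementation:

#!/bin/bash
# Illustrative quorum check (assumed majority rule, not glusterd's own code).
# Counts the local node as up, plus every peer reported as Connected.
peers_total=$(gluster peer status | awk '/^Number of Peers:/ {print $4}')
peers_up=$(gluster peer status | grep -c 'State: Peer in Cluster (Connected)')

nodes_total=$((peers_total + 1))   # include this node
nodes_up=$((peers_up + 1))

if [ $((nodes_up * 2)) -gt "$nodes_total" ]; then
    echo "glusterd quorum met ($nodes_up/$nodes_total up): snapshot delete/restore allowed"
else
    echo "glusterd quorum NOT met ($nodes_up/$nodes_total up): snapshot delete/restore should fail"
fi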


Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.6.0.7-1.el6rhs.x86_64


How reproducible:
=================
5/5


Steps to Reproduce:
===================
1. Create a 4-node cluster.
2. Create and start a volume.
3. Create a snapshot of the volume.
4. Kill glusterd on 2 of the 4 nodes.
5. Delete the snapshot (a scripted sketch of these steps follows this list).
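
A scripted sketch of the reproduction, assuming four already-probed nodes named node1..node4, placeholder brick paths, and a test volume/snapshot name; taking a snapshot additionally requires thin-provisioned LVM bricks, which the sketch takes for granted. The --mode=script option is used only to skip the interactive confirmation prompt.

#!/bin/bash
# Hypothetical reproduction script; host names, brick paths and names are placeholders.
gluster volume create testvol replica 2 \
    node1:/rhs/brick1/b1 node2:/rhs/brick1/b1 \
    node3:/rhs/brick1/b1 node4:/rhs/brick1/b1
gluster volume start testvol
gluster snapshot create snap1 testvol

# Step 4: take glusterd down on two of the four nodes, leaving only 2/4 up.
ssh node3 'service glusterd stop'
ssh node4 'service glusterd stop'

# Step 5: with the fix this must be rejected with "glusterds are not in quorum";
# on glusterfs-3.6.0.7-1 it wrongly succeeds.
gluster --mode=script snapshot delete snap1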

Actual results:
===============

Deleting the snapshot succeeds even though glusterd quorum is not met.


Expected results:
=================

Deleting the snapshot should fail because glusterd quorum is not met.

Comment 3 Raghavendra Bhat 2014-06-02 06:02:06 UTC
https://code.engineering.redhat.com/gerrit/#/c/26062/

Comment 6 Rahul Hinduja 2014-06-12 12:11:23 UTC
Verified with build: glusterfs-3.6.0.16-1.el6rhs.x86_64

[root@snapshot13 ~]# gluster peer status
Number of Peers: 3

Hostname: snapshot14.lab.eng.blr.redhat.com
Uuid: 7d912161-adc6-477f-8e0e-213c0f6b69bd
State: Peer in Cluster (Disconnected)

Hostname: snapshot15.lab.eng.blr.redhat.com
Uuid: 7cf50756-94dc-4f6c-81ab-547c082cb522
State: Peer in Cluster (Connected)

Hostname: snapshot16.lab.eng.blr.redhat.com
Uuid: 04ce5d3b-de42-49f9-9b96-87575cb472e5
State: Peer in Cluster (Disconnected)
[root@snapshot13 ~]#

[root@snapshot13 ~]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: glusterds are not in quorum
Snapshot command failed
[root@snapshot13 ~]#
[root@snapshot13 ~]# gluster snapshot restore snap1
snapshot restore: failed: glusterds are not in quorum
Snapshot command failed
[root@snapshot13 ~]#
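
A small illustrative regression check for the same scenario (hypothetical script; the snapshot name is a placeholder, and it assumes only 2 of the 4 glusterds are up):

#!/bin/bash
# Both delete and restore must be rejected while glusterd quorum is not met.
for op in delete restore; do
    if gluster --mode=script snapshot "$op" snap1 2>&1 | grep -q 'not in quorum'; then
        echo "PASS: snapshot $op rejected without glusterd quorum"
    else
        echo "FAIL: snapshot $op was not rejected"
    fi
done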

Marking the bug as verified

Comment 8 errata-xmlrpc 2014-09-22 19:39:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

