Description of problem:
=======================
To resolve the problem described in bz 1122064, it would be good to have a force option for snapshot deactivate. When a snapshot is in an inconsistent state (deactivated on some nodes and activated on others), deactivate force would deactivate the snapshot on all nodes. Otherwise, to deactivate it on all nodes, we first have to activate it with force and then deactivate it.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.25-1.el6rhs.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create a volume on a multi-node cluster
2. Create a snapshot
3. Bring down one of the nodes in the cluster
4. Deactivate the snapshot, which should succeed
5. Bring the node back up
6. Check the status of the snapshot

Actual results:
===============
The snapshot status is inconsistent across the cluster (activated on some nodes and deactivated on others). No deactivate force option is available to forcefully deactivate the snapshot on all nodes.

Expected results:
=================
A deactivate force option should be available to forcefully deactivate the snapshot on all nodes when it is in an inconsistent state (activated on some, deactivated on others).
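The steps above and the requested behavior could be sketched with the gluster CLI roughly as follows. Snapshot and volume names (`snap1`, `vol1`) are illustrative, and the final `deactivate ... force` command is the proposed RFE syntax, not an existing option:

```
# Reproduction sketch (assumes a multi-node cluster and an existing volume vol1)
gluster snapshot create snap1 vol1
# ... bring down one node in the cluster ...
gluster snapshot deactivate snap1      # succeeds on the nodes that are up
# ... bring the node back up ...
gluster snapshot status                # shows snap1 activated on some nodes, deactivated on others

# Current workaround: force-activate everywhere, then deactivate
gluster snapshot activate snap1 force
gluster snapshot deactivate snap1

# Proposed RFE syntax (hypothetical, not implemented):
gluster snapshot deactivate snap1 force
```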
After the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1122064, is this issue still reproducible in any scenario? If so, we can implement a force option for deactivate; if not, we should close this bug.
As described in comment #3, this RFE is no longer needed: snapshots now perform a handshake when a node that was down rejoins the cluster. Closing the bug.