Bug 1302284

Summary: Offline bricks are starting after probing new node
Product: [Community] GlusterFS
Component: glusterd2
Reporter: Byreddy <bsrirama>
Assignee: bugs <bugs>
Status: CLOSED DEFERRED
Severity: high
Priority: unspecified
Version: mainline
CC: amukherj, atumball, bugs, sasundar
Keywords: Triaged
Hardware: x86_64
OS: Linux
Type: Bug
Doc Type: Bug Fix
Last Closed: 2019-05-09 10:06:17 UTC

Description Byreddy 2016-01-27 11:33:15 UTC
Description of problem:
======================
Set up a two-node cluster with a Distribute-Replicate volume (bricks spread across both nodes). glusterd was down on one of the nodes, and after restarting glusterd on the other node, the volume status on that node showed all bricks offline. In this state, if a new node is peer probed from the running node, the offline bricks start running.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-17


How reproducible:
=================
Every time


Steps to Reproduce:
===================
1. Have a two-node cluster (node-1 and node-2) with a Distribute-Replicate volume whose bricks are spread across both nodes
2. Stop glusterd on node-2
3. Restart glusterd on node-1
4. Check volume status on node-1 //bricks will be in offline state
5. Peer probe node-3 from node-1
6. Check volume status on node-1 //bricks will be running (see the shell transcript below)
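
For reference, here is a minimal shell transcript of the steps above. The host names node-1/node-2/node-3, the volume name "testvol", and the brick paths are placeholders, and a systemd-managed glusterd is assumed:

# on node-1: create and start a 2x2 Distribute-Replicate volume across both nodes
gluster volume create testvol replica 2 \
    node-1:/bricks/b1 node-2:/bricks/b1 \
    node-1:/bricks/b2 node-2:/bricks/b2
gluster volume start testvol

# on node-2: stop glusterd
systemctl stop glusterd

# on node-1: restart glusterd, then check status; all bricks show offline
systemctl restart glusterd
gluster volume status testvol

# on node-1: probe a new peer
gluster peer probe node-3

# on node-1: check status again; the previously offline bricks are now running
gluster volume status testvol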

Actual results:
===============
Offline bricks are starting after probing a new node.


Expected results:
=================
Offline bricks should not start after probing a new node.


Additional info:

Comment 1 Atin Mukherjee 2016-01-28 06:39:40 UTC
IMO, this is difficult to fix as long as the store is replicated across all the nodes. However, with GlusterD 2.0 this should be doable. I'd like to keep this bug open and track it for 4.0.

Comment 2 Amar Tumballi 2019-05-09 10:06:17 UTC
We will mark this as DEFERRED as we are not working on this. We will revisit it based on time and resources after a couple of releases.