Bug 1302284 - Offline Bricks are starting after probing new node
Summary: Offline Bricks are starting after probing new node
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd2
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-27 11:33 UTC by Byreddy
Modified: 2019-05-09 10:06 UTC (History)
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-09 10:06:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Byreddy 2016-01-27 11:33:15 UTC
Description of problem:
======================
Have a two-node cluster with a Distribute-Replicate volume (bricks spread across both nodes). glusterd was down on one of the nodes, and after restarting glusterd on the other (running) node, volume status on that node showed all bricks in offline state. In this state, if I peer probe a new node from the active running node, the offline bricks start running.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-17


How reproducible:
=================
Every time


Steps to Reproduce:
===================
1. Have a two-node cluster (node-1 and node-2) with a Distribute-Replicate volume, bricks spread across both nodes
2. Stop glusterd on node-2
3. Restart glusterd on node-1
4. Check volume status on node-1 // bricks will be in offline state
5. Peer probe node-3 from node-1
6. Check volume status on node-1 // bricks will be running (see the command sketch below)
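The following is a rough command-line sketch of the steps above, not taken from the report; it assumes a systemd-managed glusterd, a hypothetical volume name "distrep", and placeholder hostnames/brick paths.

# Step 1: create a 2x2 Distribute-Replicate volume across node-1 and node-2 (run on node-1)
gluster peer probe node-2
gluster volume create distrep replica 2 \
        node-1:/bricks/b1 node-2:/bricks/b1 \
        node-1:/bricks/b2 node-2:/bricks/b2
gluster volume start distrep

# Step 2: stop glusterd on node-2
systemctl stop glusterd            # run on node-2

# Steps 3-4: restart glusterd on node-1 and check status
systemctl restart glusterd         # run on node-1
gluster volume status distrep      # bricks show "N" in the Online column

# Steps 5-6: probe a new node from node-1 and re-check status
gluster peer probe node-3
gluster volume status distrep      # offline bricks unexpectedly show as running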

Actual results:
===============
Offline bricks are starting after probing new node.


Expected results:
=================
Offline bricks should not start after probing new node.


Additional info:

Comment 1 Atin Mukherjee 2016-01-28 06:39:40 UTC
IMO, this is difficult to fix while the store is replicated across all the nodes. However, with GlusterD 2.0 this should be doable. I'd like to keep this bug open and track it for 4.0.

Comment 2 Amar Tumballi 2019-05-09 10:06:17 UTC
We will mark this as DEFERRED as we are not working on this. We will revisit it, based on time and resources, after a couple of releases.

