Bug 1302284 - Offline Bricks are starting after probing new node
Status: NEW
Product: GlusterFS
Classification: Community
Component: glusterd2
Version: mainline
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: bugs@gluster.org
Keywords: Triaged
Depends On:
Blocks:
Reported: 2016-01-27 06:33 EST by Byreddy
Modified: 2016-06-22 01:12 EDT
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Byreddy 2016-01-27 06:33:15 EST
Description of problem:
======================
Have a two-node cluster with a Distribute-Replicate volume (bricks spread across both nodes). glusterd was down on one of the nodes, and after a glusterd restart on the other (running) node, volume status on that node showed all bricks in the offline state. In this state, if I peer probe a new node from the active node, the offline bricks start running.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-17


How reproducible:
=================
Every time


Steps to Reproduce:
===================
1. Have a two-node cluster (node-1 and node-2) with a Distribute-Replicate volume whose bricks are spread across both nodes
2. Stop glusterd on node-2
3. Restart glusterd on node-1
4. Check volume status on node-1  // bricks will be in the offline state
5. Peer probe node-3 from node-1
6. Check volume status on node-1  // bricks will be running (see the command sketch below)
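A rough CLI sequence for these steps, as a sketch: it assumes a systemd-based setup (use 'service glusterd stop/restart' otherwise) and a volume named 'distrep'; node names follow the steps above.

    # On node-2: stop glusterd
    systemctl stop glusterd

    # On node-1: restart glusterd
    systemctl restart glusterd

    # On node-1: all bricks of the volume are reported offline at this point
    gluster volume status distrep

    # On node-1: probe a new peer
    gluster peer probe node-3

    # On node-1: the previously offline bricks now show up as running
    gluster volume status distrep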

Actual results:
===============
Offline bricks start running after a new node is probed.


Expected results:
=================
Offline bricks should not be started when a new node is probed.
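For reference, bringing offline bricks back up should require an explicit admin action, for example (the volume name 'distrep' is assumed here):

    # Explicitly start any brick processes of the volume that are not running
    gluster volume start distrep force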


Additional info:
Comment 1 Atin Mukherjee 2016-01-28 01:39:40 EST
IMO, this is difficult to fix while the store is replicated across all the nodes. However, with GlusterD 2.0 this should be doable. I'd like to keep this bug open and track it for 4.0.
