Bug 1319084 - glusterd does not consistently start bricks after reboot.
Summary: glusterd does not consistently start bricks after reboot.
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Atin Mukherjee
QA Contact: Byreddy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-18 15:48 UTC by Tupper Cole
Modified: 2023-09-14 03:19 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-31 05:05:05 UTC
Embargoed:


Attachments: none

Description Tupper Cole 2016-03-18 15:48:01 UTC
Description of problem:
When a gluster node is rebooted, its bricks do not consistently come online. Across several rounds of testing on 3.7.5, the bricks came online after a reboot in fewer than 50% of attempts. Nothing is logged, and restarting glusterd fixes the issue. It looks like it may be a timing issue.
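
For what it is worth, restarting glusterd by hand is all it takes to bring the bricks up. A minimal sketch of the workaround, assuming a systemd-managed host (on RHEL 6 the equivalent would be "service glusterd restart"):

# systemctl restart glusterd
# gluster volume status test-volume

After the restart, all bricks report Y in the Online column.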

Unfortunately, this is a secure site, and no logs can be provided (not that they had anything to show).

Version-Release number of selected component (if applicable): RHGS 3.1.2 / glusterfs 3.7.5


How reproducible: 50% of reboots


Steps to Reproduce:
1. Reboot a running gluster node (simulating a failure).
2. Check gluster volume status (see the sketch below).
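
In shell terms (test-volume is the volume from the output below):

# reboot
(wait for the node to come back up)
# gluster volume status test-volume
# pgrep -f glusterfsd

Each online brick has its own glusterfsd process, so on a failing run one or more of them are missing.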


Actual results: One or more bricks are often still down.
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                        Port    Online   Pid
------------------------------------------------------------
Brick arch:/export/rep1                24010   Y       18474
Brick arch:/export/rep2                24011   N      


Expected results: All bricks are online after the reboot.
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                        Port    Online   Pid
------------------------------------------------------------
Brick arch:/export/rep1                24010   Y       18474
Brick arch:/export/rep2                24011   Y       18479

Comment 2 Atin Mukherjee 2016-03-21 04:22:10 UTC
We need some more details. Could you please provide the output of the following commands:

gluster peer status
gluster volume info

Were you running a single-node cluster? If not, which node did you reboot?
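
For example, the following would capture everything in one file (the output path is just an example):

# gluster peer status   > /tmp/bz1319084-info.txt
# gluster volume info  >> /tmp/bz1319084-info.txt
# gluster volume status >> /tmp/bz1319084-info.txt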

Comment 3 Atin Mukherjee 2016-03-29 07:07:31 UTC
Can we get the details requested in comment 2?

Comment 4 Atin Mukherjee 2016-08-31 05:05:05 UTC
I am closing this BZ as we did not receive sufficient data to analyse this issue.

Comment 7 Red Hat Bugzilla 2023-09-14 03:19:49 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.

