Bug 1282334 - Starting volumes with large no. of bricks fails
Summary: Starting volumes with large no. of bricks fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Anoop
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-11-16 07:22 UTC by Anush Shetty
Modified: 2018-02-07 04:22 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-07 04:22:16 UTC
Embargoed:


Attachments:

Description Anush Shetty 2015-11-16 07:22:46 UTC
Description of problem: With Heketi, when a volume is created with a large number of bricks (> 300), glusterd times out during the volume start operation, leading to a race condition between the Heketi and glusterd operations.


Version-Release number of selected component (if applicable): heketi-1.0.1-1.el7rhgs.x86_64


How reproducible: Always


Steps to Reproduce:
1. Create a large volume through heketi: in our case it was 640G, which resulted in 300+ bricks being created for a single gluster volume (see the example command below).
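
A minimal reproduction sketch, assuming a reachable heketi server and the heketi-cli client; the server URL, durability, and replica options shown are illustrative and may differ by heketi version:

   # Hypothetical example: request a single 640G replicated volume through heketi
   heketi-cli --server http://heketi.example.com:8080 volume create --size=640 --durability=replicate --replica=3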


Actual results: Volume creation fails due to a glusterd timeout on volume start.

Additional info: A solution for the same is available upstream now: https://github.com/heketi/heketi/issues/248

Comment 2 Luis Pabón 2015-11-20 15:44:43 UTC
This looks to me like a Glusterd issue that may be fixed in GlusterD 2.0.

Comment 3 Amar Tumballi 2018-02-07 04:22:16 UTC
Thank you for the bug report. 

This particular bug was fixed and an updated package was published (RHGS 3.3.1+). Please feel free to report any further bugs you find, or file a further report if this bug is not fixed after you install the update.

