Bug 1282334

Summary: Starting volumes with a large number of bricks fails
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Anush Shetty <ashetty>
Component: core
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED CURRENTRELEASE
QA Contact: Anoop <annair>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: rhs-bugs
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-07 04:22:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Anush Shetty 2015-11-16 07:22:46 UTC
Description of problem: With Heketi, when a volume is created with a large number of bricks (> 300), glusterd times out during the volume start operation, which creates a race condition between the heketi and glusterd operations.


Version-Release number of selected component (if applicable): heketi-1.0.1-1.el7rhgs.x86_64


How reproducible: Always


Steps to Reproduce:
1. Create a large volume through heketi. In our case it was 640G, for which 300+ bricks were created for a single gluster volume.
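The brick count above grows with the requested volume size. A minimal sketch of that arithmetic, assuming a fixed maximum brick size of roughly 2 GiB (an illustrative assumption only; heketi's actual brick-sizing algorithm may choose differently):

```python
import math

def bricks_needed(volume_gib, max_brick_gib=2):
    """Estimate how many bricks a volume of the given size needs.

    max_brick_gib=2 is a hypothetical cap chosen so that the 640 GiB
    case from this report lands in the observed 300+ brick range; it
    is not heketi's documented default.
    """
    return math.ceil(volume_gib / max_brick_gib)

# A 640 GiB request yields well over 300 bricks under this assumption,
# which is the scale at which the glusterd volume-start timeout appears.
print(bricks_needed(640))
```

Under this sketch, each additional ~2 GiB of requested capacity adds another brick that glusterd must start, so large requests multiply the work done inside a single volume start operation.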


Actual results: Volume creation fails due to a glusterd timeout.

Additional info: A fix is now available upstream: https://github.com/heketi/heketi/issues/248

Comment 2 Luis Pabón 2015-11-20 15:44:43 UTC
This looks to me like a Glusterd issue that may be fixed in GlusterD 2.0.

Comment 3 Amar Tumballi 2018-02-07 04:22:16 UTC
Thank you for the bug report. 

This particular bug was fixed and an update package was published (RHGS 3.3.1+). Please feel free to report any further bugs you find, or file a follow-up report if this bug is not fixed after you install the update.