Bug 1652461

Summary: With 1800+ volumes and two simultaneous gluster pod restarts, gluster commands fail even after all pods are back up
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Neha Berry <nberry>
Component: glusterd
Assignee: Mohit Agrawal <moagrawa>
Status: CLOSED ERRATA
QA Contact: Bala Konda Reddy M <bmekala>
Severity: high
Priority: low
Version: cns-3.10
CC: moagrawa, nberry, rcyriac, rhinduja, rhs-bugs, storage-qa-internal, vbellur, vdas
Target Milestone: ---
Keywords: Reopened, ZStream
Target Release: RHGS 3.5.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-6.0-2
Doc Type: Bug Fix
Doc Text:
If a user configured more than 1500 volumes in a three-node cluster and a node or the glusterd service became unavailable, there was too much volume information to gather during reconnection before the handshake process timed out, so gluster commands continued to fail even after all nodes were back up. This issue is resolved by several optimizations to the volume information gathering process.
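
The scenario in the Doc Text can be reproduced along the following lines. This is a minimal sketch, assuming a plain three-node RHGS cluster reachable over ssh: the hostnames, brick paths, and volume names are hypothetical, and only the volume count and the two simultaneous restarts come from the bug report (in the reported CNS environment the restarts were gluster pod restarts rather than direct service restarts).

#!/usr/bin/env python3
# Reproduction sketch for the handshake timeout described above.
# Hostnames, brick paths, and volume names below are illustrative only.
import subprocess
import time

NODES = ["node1", "node2", "node3"]   # hypothetical hostnames
VOLUME_COUNT = 1800                   # count taken from the bug summary

def run(cmd):
    """Run a shell command, returning (exit code, combined output)."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

# 1. Create and start a large number of replica-3 volumes.
for i in range(VOLUME_COUNT):
    name = f"repro-vol-{i}"
    bricks = " ".join(f"{n}:/bricks/{name}" for n in NODES)
    run(f"gluster volume create {name} replica 3 {bricks} force")
    run(f"gluster volume start {name}")

# 2. Restart glusterd on two nodes at once (the equivalent of the two
#    simultaneous gluster pod restarts in the CNS setup).
for node in NODES[:2]:
    run(f"ssh {node} systemctl restart glusterd")

# 3. Once glusterd is back, peers reconnect and exchange volume info in
#    the handshake. With ~1800 volumes that exchange overran the handshake
#    timeout, so CLI commands kept failing even after all daemons were up.
time.sleep(60)
rc, out = run("gluster volume status")
print("gluster volume status ->", "OK" if rc == 0 else f"FAILED\n{out}")

With the fix in glusterfs-6.0-2, the volume information gathering completes within the handshake window and the final "gluster volume status" succeeds.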
Story Points: ---
Clone Of: ---
Clones: 1652465, 1699339
Last Closed: 2019-10-30 12:20:13 UTC
Type: Bug
Bug Depends On: 1699339    
Bug Blocks: 1696807, 1710994    

Comment 26 errata-xmlrpc 2019-10-30 12:20:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249