Bug 1652461 - With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up
Summary: With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: cns-3.10
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Mohit Agrawal
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On: 1699339
Blocks: 1696807 1710994
 
Reported: 2018-11-22 07:37 UTC by Neha Berry
Modified: 2019-11-20 07:54 UTC
CC List: 8 users

Fixed In Version: glusterfs-6.0-2
Doc Type: Bug Fix
Doc Text:
If a user configured more than 1500 volumes in a 3 node cluster, and a node or glusterd service became unavailable, then during reconnection there was too much volume information to gather before the handshake process timed out. This issue is resolved by adding several optimizations to the volume information gathering process.
Clone Of:
Clones: 1652465 1699339
Environment:
Last Closed: 2019-10-30 12:20:13 UTC
Embargoed:

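The Doc Text above attributes the failure to a handshake that times out while gathering volume information for 1500+ volumes. As a rough back-of-envelope illustration only (this is not glusterd code), the Python sketch below models serial per-volume gathering against a fixed timeout; the per-volume cost and timeout values are hypothetical placeholders, not measured figures.

    # Illustrative sketch: why a fixed handshake timeout can be exceeded once
    # per-volume work is multiplied by 1500+ volumes. All numbers are assumed.
    PER_VOLUME_COST_S = 0.5      # hypothetical time to gather one volume's info
    HANDSHAKE_TIMEOUT_S = 600.0  # hypothetical fixed handshake timeout

    def handshake_duration(volume_count: int,
                           per_volume_cost: float = PER_VOLUME_COST_S) -> float:
        """Total time if volume info is gathered serially for every volume."""
        return volume_count * per_volume_cost

    for count in (300, 1500, 1800):
        total = handshake_duration(count)
        status = "completes" if total <= HANDSHAKE_TIMEOUT_S else "exceeds the timeout"
        print(f"{count:>5} volumes -> {total:7.1f}s ({status})")

Under these assumed numbers, a few hundred volumes finish well inside the timeout, while 1500+ volumes do not, which is consistent with the threshold described in the Doc Text and with the fix of optimizing the volume information gathering path.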



Links:
Red Hat Product Errata RHEA-2019:3249 (last updated 2019-10-30 12:20:36 UTC)

Comment 26 errata-xmlrpc 2019-10-30 12:20:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

