Bug 1711249 - bulkvoldict thread is not handling all volumes while brick multiplex is enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Mohit Agrawal
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On:
Blocks: 1696809 1711250
 
Reported: 2019-05-17 11:17 UTC by Mohit Agrawal
Modified: 2019-11-20 07:25 UTC
CC List: 5 users

Fixed In Version: glusterfs-6.0-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1711250
Environment:
Last Closed: 2019-10-30 12:21:27 UTC
Embargoed:




Links:
Red Hat Product Errata RHEA-2019:3249 (Last Updated: 2019-10-30 12:21:46 UTC)

Description Mohit Agrawal 2019-05-17 11:17:40 UTC
Description of problem:
Commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 missed one condition for handing volumes off to the bulkvoldict threads. As a result, when glusterd receives a friend request from a peer, it does not send updates for all volumes to its peers.
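
To illustrate the class of bug, the sketch below shows a chunk-dispatch loop of the kind used to fan volumes out to worker threads. This is a minimal, self-contained C example, not the actual glusterd source; all names (dispatch_chunk, VOLCNT, VOL_PER_THREAD_LIMIT) are invented for illustration. With 500 volumes and a per-thread limit of 64, the modulo condition alone dispatches only 7 full chunks (448 volumes) and silently drops the trailing 52:

    #include <stdio.h>

    #define VOLCNT 500              /* total volumes */
    #define VOL_PER_THREAD_LIMIT 64 /* volumes per worker thread */

    /* Stand-in for handing volumes [start, end] to a bulkvoldict
     * worker thread that serializes them into the response dict. */
    static void dispatch_chunk(int start, int end)
    {
        printf("dispatching volumes %d..%d\n", start, end);
    }

    int main(void)
    {
        int i;
        int start = 1;

        for (i = 0; i < VOLCNT; i++) {
            /* Fires only when a chunk is exactly full, so if the
             * final chunk is partial, its volumes are never sent:
             * the missed-condition pattern described above. */
            if ((i + 1) % VOL_PER_THREAD_LIMIT == 0) {
                dispatch_chunk(start, i + 1);
                start = i + 2;
            }
        }

        /* The missing condition: dispatch the final partial chunk. */
        if (start <= VOLCNT)
            dispatch_chunk(start, VOLCNT);

        return 0;
    }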

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Set up 500 volumes (test1..test500, each 1x3)
2. Enable brick multiplexing (cluster.brick-multiplex)
3. Stop glusterd on one node
4. Set "performance.readdir-ahead on" on every 20th volume
   (test1, test20, test40, test60, test80, ..., test500)
5. Start glusterd on that node again
6. Wait 2 minutes for the handshake to finish, then check the value of performance.readdir-ahead on those volumes (test1, test20, test40, ..., test500).
   The value should have been synced from the peer nodes. Example gluster CLI commands are sketched below.
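
For reference, the steps above can be driven with commands along these lines. This is a sketch, not taken from the report; hostnames and brick paths are examples:

    # 1. Create and start 500 1x3 volumes
    for i in $(seq 1 500); do
        gluster volume create test$i replica 3 \
            host1:/bricks/test$i host2:/bricks/test$i host3:/bricks/test$i force
        gluster volume start test$i
    done

    # 2. Enable brick multiplexing cluster-wide
    gluster volume set all cluster.brick-multiplex on

    # 3. On one node
    systemctl stop glusterd

    # 4. From a node that is still running glusterd
    for i in 1 $(seq 20 20 500); do
        gluster volume set test$i performance.readdir-ahead on
    done

    # 5. On the stopped node
    systemctl start glusterd

    # 6. After the handshake, verify on the restarted node
    sleep 120
    for i in 1 $(seq 20 20 500); do
        gluster volume get test$i performance.readdir-ahead
    done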
Actual results:
  The value is not synced for some of the volumes.

Expected results:
  The value is synced for all volumes.

Additional info:

Comment 9 errata-xmlrpc 2019-10-30 12:21:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

