Bug 1711249

Summary: bulkvoldict thread is not handling all volumes while brick multiplex is enabled
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterd
Version: rhgs-3.5
Target Release: RHGS 3.5.0
Reporter: Mohit Agrawal <moagrawa>
Assignee: Mohit Agrawal <moagrawa>
QA Contact: Bala Konda Reddy M <bmekala>
CC: rhs-bugs, sheggodu, storage-qa-internal, vbellur, vdas
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-6.0-4
Clones: 1711250
Bug Blocks: 1696809, 1711250
Type: Bug
Last Closed: 2019-10-30 12:21:27 UTC

Description Mohit Agrawal 2019-05-17 11:17:40 UTC
Description of problem:
Commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 missed one condition in the
handling of volumes by the bulkvoldict thread. As a result, when glusterd
receives a friend request from a peer, it does not send updates for all
volumes to its peers.
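
For illustration, below is a minimal, self-contained C sketch of the same class of omission. This is not the glusterd code; the slice size, function names, and chunking scheme are made up. It shows how a worker that packs volumes into a bulk dict in fixed-size slices can leave the trailing volumes unhandled when one condition is missing, so those volumes are never sent to peers.

/* Illustrative sketch only -- not the glusterd source. */
#include <stdio.h>

#define NUM_VOLUMES 500   /* test1 .. test500 */
#define SLICE_SIZE  128   /* hypothetical per-worker chunk size */

/* Pretend this packs volumes [start, end] into the bulk dict sent to peers. */
static void
pack_slice(int start, int end)
{
    for (int i = start; i <= end && i <= NUM_VOLUMES; i++)
        printf("packing volume test%d into bulk dict\n", i);
}

int
main(void)
{
    /* Buggy split: integer division drops the remainder, so volumes
     * 385..500 belong to no slice and are silently skipped. */
    int nslices = NUM_VOLUMES / SLICE_SIZE;              /* 3, not 4 */
    for (int s = 0; s < nslices; s++)
        pack_slice(s * SLICE_SIZE + 1, (s + 1) * SLICE_SIZE);

    /* The missed condition: the final partial slice also has to be
     * handed out, otherwise those volumes never reach the peers. */
    if (NUM_VOLUMES % SLICE_SIZE != 0)
        pack_slice(nslices * SLICE_SIZE + 1, NUM_VOLUMES);

    return 0;
}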

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Set up 500 volumes (test1..test500, 1x3)
2. Enable brick multiplexing
3. Stop glusterd on one node
4. While glusterd is down on that node, set "performance.readdir-ahead on" on every 20th volume,
   i.e. test1, test20, test40, test60, test80 ... test500
5. Start glusterd on that node again
6. Wait about 2 minutes for the handshake to finish, then check the value of performance.readdir-ahead on those volumes (test1, test20, test40, ... test500)
   The value should have been synced from the peer nodes (see the command sketch below)
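
A rough command sketch of the steps above (host names, brick paths, and the exact loop form are placeholders, not taken from the original report):

  # 1. create and start 500 1x3 volumes
  for i in $(seq 1 500); do
      gluster volume create test$i replica 3 \
          node1:/bricks/test$i node2:/bricks/test$i node3:/bricks/test$i force
      gluster volume start test$i
  done

  # 2. enable brick multiplexing cluster-wide
  gluster volume set all cluster.brick-multiplex on

  # 3. on one node (say node3): systemctl stop glusterd

  # 4. from a node that is still up, change an option on every 20th volume
  for i in 1 $(seq 20 20 500); do
      gluster volume set test$i performance.readdir-ahead on
  done

  # 5. on node3: systemctl start glusterd

  # 6. after ~2 minutes, check on node3 whether the option was synced
  for i in 1 $(seq 20 20 500); do
      gluster volume get test$i performance.readdir-ahead
  done
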
Actual results:
  For some of the volumes, the value is not synced.

Expected results:
  The value should be synced for all volumes.

Additional info:

Comment 9 errata-xmlrpc 2019-10-30 12:21:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249