Bug 1509644
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | rpc: make actor search parallel | | |
| Product | [Community] GlusterFS | Reporter | Milind Changire <mchangir> |
| Component | rpc | Assignee | Milind Changire <mchangir> |
| Status | CLOSED CURRENTRELEASE | QA Contact | |
| Severity | unspecified | Docs Contact | |
| Priority | unspecified | | |
| Version | mainline | CC | bugs, nbalacha |
| Target Milestone | --- | | |
| Target Release | --- | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | glusterfs-4.0.0 | Doc Type | If docs needed, set a value |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2018-03-15 11:19:42 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
Description
Milind Changire
2017-11-05 12:16:18 UTC
REVIEW: https://review.gluster.org/18543 (rpc: make actor search parallel) posted (#4) for review on master by Milind Changire

COMMIT: https://review.gluster.org/18543 committed in master by -------------

rpc: make actor search parallel

Problem:
On a service request, the actor is searched under an exclusive mutex lock, which is not really necessary since most of the time the actor list is only searched, not modified.

Solution:
Use a read-write lock instead of a mutex lock. Only operations that modify a service need to be done under the write lock, which grants exclusive access to the code.

Change-Id: Ia227351b3f794bd8eee70c7a76d833cc716ab113
BUG: 1509644
Signed-off-by: Milind Changire <mchangir>

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/
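For illustration, the sketch below shows the general technique the commit describes: protecting an actor list with a pthread read-write lock so that the common lookup path takes a shared read lock while the rare registration path takes the exclusive write lock. This is a minimal, self-contained example, not the actual GlusterFS rpcsvc code; the struct and function names (svc, svc_actor_find, svc_actor_register) are hypothetical.

```c
/* Minimal sketch (not the GlusterFS source): an actor list protected by a
 * pthread read-write lock. Lookups dominate, so they take the shared read
 * lock; registration, which modifies the list, takes the write lock. */
#include <pthread.h>
#include <stdlib.h>

struct actor {
    int           procnum;   /* procedure number this actor serves */
    const char   *name;      /* human-readable actor name */
    struct actor *next;      /* singly linked list of actors */
};

struct svc {
    struct actor    *actors; /* head of the actor list */
    pthread_rwlock_t lock;   /* protects the actor list */
};

/* Called once at service initialization. */
static void
svc_init(struct svc *svc)
{
    svc->actors = NULL;
    pthread_rwlock_init(&svc->lock, NULL);
}

/* Hot path: many request-handling threads can search the list
 * concurrently under the read lock, since none of them modify it. */
static struct actor *
svc_actor_find(struct svc *svc, int procnum)
{
    struct actor *a, *found = NULL;

    pthread_rwlock_rdlock(&svc->lock);
    for (a = svc->actors; a != NULL; a = a->next) {
        if (a->procnum == procnum) {
            found = a;
            break;
        }
    }
    pthread_rwlock_unlock(&svc->lock);
    return found;
}

/* Rare path: registration modifies the list and therefore needs the
 * exclusive write lock. */
static int
svc_actor_register(struct svc *svc, int procnum, const char *name)
{
    struct actor *a = calloc(1, sizeof(*a));
    if (a == NULL)
        return -1;
    a->procnum = procnum;
    a->name = name;

    pthread_rwlock_wrlock(&svc->lock);
    a->next = svc->actors;
    svc->actors = a;
    pthread_rwlock_unlock(&svc->lock);
    return 0;
}
```

The design trade-off is the one the commit message states: with a plain mutex, every request serializes on the lock even though it only reads the list; with a read-write lock, concurrent readers proceed in parallel and only the infrequent writers (program/actor registration) block everyone else.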