Bug 1549505
| Summary: | Backport patch to reduce duplicate code in server-rpc-fops.c | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Mohit Agrawal <moagrawa> |
| Component: | rpc | Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 3.12 | CC: | bugs, rhinduja, rhs-bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.12.7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1549501 | Environment: | |
| Last Closed: | 2018-04-06 11:06:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1549501 | | |
| Bug Blocks: | | | |
Description
Mohit Agrawal 2018-02-27 09:49:24 UTC
REVIEW: https://review.gluster.org/19638 (protocol/server: Backport patch to reduce duplicate code in server-rpc-fops.c) posted (#1) for review on release-3.12 by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/19638 committed in release-3.12 by "MOHIT AGRAWAL" <moagrawa> with a commit message:

> protocol/server: Backport patch to reduce duplicate code in server-rpc-fops.c
>
> Signed-off-by: Amar Tumballi <amarts>
> (cherry picked from commit a81c0c2b9abdcb8ad73d0a226b53120d84082a09)
>
> BUG: 1549505
> Change-Id: Ifad0a88245fa6fdbf4c43d813b47c314d2c50435
> Signed-off-by: Mohit Agrawal <moagrawa>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.7, please open a new bug report.

glusterfs-3.12.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/maintainers/2018-March/004303.html
[2] https://www.gluster.org/pipermail/gluster-users/