Bug 1164559

Summary: writev, fsync callbacks use truncate_rsp for decoding
Product: [Community] GlusterFS
Reporter: rudrasiva11
Component: rpc
Assignee: rudrasiva11
Status: CLOSED CURRENTRELEASE
Severity: low
Priority: unspecified
Version: mainline
CC: bugs, gluster-bugs, ndevos, vbellur
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Hardware: All
OS: All
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-05-14 17:28:28 UTC

Description rudrasiva11 2014-11-16 12:51:52 UTC
Description of problem:

writev_cbk and fsync_cbk use truncate_rsp to decode the RPC response in their callbacks. This is not a problem today because the structures are identical, but should the responses ever diverge, it could surface as a nasty bug.
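
For illustration, here is a minimal sketch of the pattern being described, assuming the callbacks live in xlators/protocol/client/src/client-rpc-fops.c and use GlusterFS's xdr_to_generic() helper with the rpcgen-generated gfs3_*_rsp types; the function body below is illustrative, not a verbatim excerpt from the tree:

/* Illustrative sketch only -- not copied from client-rpc-fops.c.
 * The write reply is decoded with truncate's XDR routine; this works
 * today only because gfs3_write_rsp and gfs3_truncate_rsp happen to
 * share the same wire layout. */
int
client3_3_writev_cbk (struct rpc_req *req, struct iovec *iov, int count,
                      void *myframe)
{
        gfs3_write_rsp  rsp = {0,};
        int             ret = -1;

        /* BUG: truncate's decoder is used for a write response */
        ret = xdr_to_generic (*iov, &rsp,
                              (xdrproc_t)xdr_gfs3_truncate_rsp);
        if (ret < 0) {
                /* decode error handling elided */
        }

        /* ... unwind to the caller with rsp.op_ret / rsp.op_errno ... */
        return 0;
}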

Comment 1 Anand Avati 2014-11-16 13:39:09 UTC
REVIEW: http://review.gluster.org/9134 (client: writev,fsync to use correct rsp structure) posted (#1) for review on master by Rudra Siva (rudrasiva11)

Comment 2 Anand Avati 2014-11-17 06:13:58 UTC
COMMIT: http://review.gluster.org/9134 committed in master by Vijay Bellur (vbellur) 
------
commit 73be0be8149398b68213cb158cf94313169b5006
Author: Rudra Siva <rudrasiva11>
Date:   Sun Nov 16 08:37:40 2014 -0500

    client: writev,fsync to use correct rsp structure
    
    Presently writev_cbk and fsync_cbk pass truncate_rsp for decoding. This
    should not create any problems as the structures are identical, but
    should they diverge in the future it could show up as a bug.
    
    Change-Id: Id7da7b6a20f468ca943ceb7926de64b7692f7ec8
    BUG: 1164559
    Signed-off-by: Rudra Siva <rudrasiva11>
    Reviewed-on: http://review.gluster.org/9134
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>
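
For context, a hedged sketch of what the change amounts to, again assuming the client-rpc-fops.c callbacks and the rpcgen-generated XDR routines; the lines below paraphrase the fix rather than quote the patch:

/* Sketch of the fix, not the literal patch: each callback decodes its
 * reply with the XDR routine generated for its own response type. */

/* in client3_3_writev_cbk(): */
ret = xdr_to_generic (*iov, &rsp, (xdrproc_t)xdr_gfs3_write_rsp);

/* in client3_3_fsync_cbk(): */
ret = xdr_to_generic (*iov, &rsp, (xdrproc_t)xdr_gfs3_fsync_rsp);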

Comment 3 Niels de Vos 2015-05-14 17:28:28 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
