Bug 1377290 - The GlusterFS Callback RPC-calls always use RPC/XID 42
Summary: The GlusterFS Callback RPC-calls always use RPC/XID 42
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Niels de Vos
QA Contact:
URL:
Whiteboard:
Depends On: 1377097
Blocks:
 
Reported: 2016-09-19 11:40 UTC by Niels de Vos
Modified: 2016-10-20 14:03 UTC (History)
CC List: 3 users

Fixed In Version: glusterfs-3.8.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1377097
Environment:
Last Closed: 2016-10-20 14:03:22 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments

Description Niels de Vos 2016-09-19 11:40:06 UTC
+++ This bug was initially created as a clone of Bug #1377097 +++

Description of problem:
The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
Wireshark these RPC-calls are marked as "RPC retransmissions" because of
the repeating RPC/XID. This is most confusing when verifying the
callbacks that the upcall framework sends. There is no way to see the
difference between real retransmissions and new callbacks.
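The fix replaces the hardcoded value with a counter kept per client connection. A minimal sketch of that idea (illustrative names only, not the actual GlusterFS code):

```c
#include <stdint.h>

#define GF_UNIVERSAL_ANSWER 42  /* the XID every callback carried before the fix */

/* Per-connection state: each client connection keeps its own XID counter. */
struct conn {
    uint32_t xid;
};

/* Hand out a fresh XID for the next callback on this connection.  Starting
 * from 1 and incrementing makes every callback distinguishable, so Wireshark
 * no longer mistakes new callbacks for retransmissions. */
static uint32_t next_callback_xid(struct conn *c)
{
    return ++c->xid;
}
```

Because the counter lives in the per-connection state, two clients can each see XIDs 1, 2, 3, … independently, which is all the RPC protocol requires.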


Version-Release number of selected component (if applicable):
all

How reproducible:
100%

Steps to Reproduce:
1. enable features.cache-invalidation on a volume
2. start a capture of network traffic (tcpdump -iany -s0 -w/tmp/out.pcap tcp)
3. create and delete some files on the mountpoint
4. inspect the .pcap file with Wireshark and filter on 'glusterfs.cbk'
5. notice the "RPC retransmission of #..." on all callback packets
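Step 4 can also be done on the command line; a sketch assuming tshark with the GlusterFS dissector is available (the filter name is the one from step 4):

```shell
# Inspect the capture without the Wireshark GUI (assumes tshark and the
# /tmp/out.pcap written in step 2):
#   tshark -r /tmp/out.pcap -Y 'glusterfs.cbk' -T fields -e rpc.xid
# Before the fix every callback carries the same XID; 42 rendered as the
# 32-bit rpc.xid field looks like:
printf 'rpc.xid == 0x%08x\n' 42
```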

Actual results:
Wireshark flags every callback packet as a retransmission. This is not the case; the contents of the packets differ (only the rpc.xid repeats).

Expected results:
The rpc.xid should increase for each callback that gets sent (per client).

Additional info:

--- Additional comment from Worker Ant on 2016-09-18 14:15:34 CEST ---

REVIEW: http://review.gluster.org/15524 (rpc: increase RPC/XID with each callback) posted (#1) for review on master by Niels de Vos (ndevos@redhat.com)

--- Additional comment from Niels de Vos on 2016-09-18 14:16 CEST ---

tcpdump with the patch applied, showing the RPC/XID increasing per client (different ports, all on the same localhost)

--- Additional comment from Worker Ant on 2016-09-19 11:32:08 CEST ---

COMMIT: http://review.gluster.org/15524 committed in master by Raghavendra G (rgowdapp@redhat.com) 
------
commit e9b39527d5dcfba95c4c52a522c8ce1f4512ac21
Author: Niels de Vos <ndevos@redhat.com>
Date:   Fri Sep 16 17:29:21 2016 +0200

    rpc: increase RPC/XID with each callback
    
    The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
    Wireshark these RPC-calls are marked as "RPC retransmissions" because of
    the repeating RPC/XID. This is most confusing when verifying the
    callbacks that the upcall framework sends. There is no way to see the
    difference between real retransmissions and new callbacks.
    
    This change was verified by create and removal of files through
    different Gluster clients. The RPC/XID is increased on a per connection
    (or client) base. The expectations of the RPC protocol are met this way.
    
    Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    BUG: 1377097
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/15524
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

Comment 1 Worker Ant 2016-09-19 11:44:01 UTC
REVIEW: http://review.gluster.org/15528 (rpc: increase RPC/XID with each callback) posted (#1) for review on release-3.8 by Niels de Vos (ndevos@redhat.com)

Comment 2 Worker Ant 2016-09-21 03:11:34 UTC
COMMIT: http://review.gluster.org/15528 committed in release-3.8 by Raghavendra G (rgowdapp@redhat.com) 
------
commit e9478b620fbcbc2bdca9e8a34e5b47e93926f0d2
Author: Niels de Vos <ndevos@redhat.com>
Date:   Fri Sep 16 17:29:21 2016 +0200

    rpc: increase RPC/XID with each callback
    
    The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
    Wireshark these RPC-calls are marked as "RPC retransmissions" because of
    the repeating RPC/XID. This is most confusing when verifying the
    callbacks that the upcall framework sends. There is no way to see the
    difference between real retransmissions and new callbacks.
    
    This change was verified by create and removal of files through
    different Gluster clients. The RPC/XID is increased on a per connection
    (or client) base. The expectations of the RPC protocol are met this way.
    
    > Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    > BUG: 1377097
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/15524
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    (cherry picked from commit e9b39527d5dcfba95c4c52a522c8ce1f4512ac21)
    
    Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    BUG: 1377290
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/15528
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

Comment 3 Niels de Vos 2016-10-20 14:03:22 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.5, please open a new bug report.

glusterfs-3.8.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/announce/2016-October/000061.html
[2] https://www.gluster.org/pipermail/gluster-users/

